In the search for efficiently phosphorescent materials, this article presents a rational design and comparative theoretical study of the photophysical properties of (fpmb)xIr(bptz)3-x (x = 1, 2; denoted 1 and 2), which combine two 2-pyridyl triazolate (bptz) chromophores with the strong-field ligand fpmb (fpmb = 1-(4-difluorobenzyl)-3-methylbenzimidazolium). A first-principles analysis within the framework of time-dependent density functional theory (TD-DFT) is carried out to investigate the electronic structures as well as the absorption and phosphorescence spectra. Intriguingly, 1 and 2 exhibit blue phosphorescent emissions with maxima at 504 and 516 nm, respectively. Furthermore, to elucidate the mechanism behind the low phosphorescence yield of 1 and to estimate the radiative rate constant k_r of 2, we approximately evaluate the radiative rate constant k_r, the spin-orbit coupling (SOC) value, the singlet-triplet splitting ΔE(S-T), and the square of the SOC matrix element, |⟨Ψ(S1)|H(SO)|Ψ(T1)⟩|², for 1 and 2. Finally, we tentatively conclude that switching the cyclometalated ligand from the main to the ancillary chelate appears to lower the splitting ΔE(S-T) in the current system.
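The quantities listed above are connected by the standard first-order perturbative picture of phosphorescence, in which T1 borrows radiative character from S1 through spin-orbit coupling. As a hedged sketch (this is the textbook approximation, not necessarily the exact working expression of the study), the triplet radiative rate scales as:

```latex
k_r^{T_1} \;\propto\; \left( \frac{\langle \Psi_{S_1} | H_{SO} | \Psi_{T_1} \rangle}{\Delta E(S_1 - T_1)} \right)^{2} k_r^{S_1}
```

This makes the closing remark concrete: a smaller splitting ΔE(S-T) enlarges the mixing coefficient and hence, all else equal, raises k_r.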
Many studies on automatic speech emotion recognition (SER) have been devoted to extracting meaningful emotional features for generating emotion-relevant representations. However, they generally ignore the complementary learning of static and dynamic features, leading to limited performance. In this paper, we propose a novel hierarchical network, HNSD, that efficiently integrates static and dynamic features for SER. The proposed HNSD framework consists of three modules. To capture discriminative features, an encoding module is first designed to encode both static and dynamic features simultaneously. Taking the obtained features as inputs, a Gated Multi-features Unit (GMU) then explicitly determines the emotional intermediate representations for frame-level feature fusion, instead of directly fusing these acoustic features. In this way, the learned static and dynamic features jointly and comprehensively generate unified feature representations. Finally, benefiting from a well-designed attention mechanism, a classification module predicts the emotional states at the utterance level. Extensive experiments on the IEMOCAP benchmark dataset demonstrate the superiority of our method over state-of-the-art baselines.
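The gated fusion idea behind the GMU can be sketched in a few lines: a learned gate decides, per frame and per feature dimension, how much of the static versus the dynamic stream enters the fused representation. This is a minimal illustrative sketch, assuming a sigmoid gate over the concatenated streams; the function and parameter names (`gated_fusion`, `W_g`, `b_g`) are hypothetical and not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(static_feats, dynamic_feats, W_g, b_g):
    """Gate-weighted fusion of two frame-level feature streams.

    static_feats, dynamic_feats: (T, D) arrays of frame-level features.
    W_g: (2*D, D) gate projection, b_g: (D,) bias (hypothetical parameters).
    """
    concat = np.concatenate([static_feats, dynamic_feats], axis=-1)  # (T, 2D)
    gate = sigmoid(concat @ W_g + b_g)                               # (T, D), values in (0, 1)
    # Per-dimension convex combination: the gate arbitrates between streams.
    return gate * static_feats + (1.0 - gate) * dynamic_feats

# Toy usage: 50 frames of 8-dimensional features with random parameters.
rng = np.random.default_rng(0)
T, D = 50, 8
static_f = rng.normal(size=(T, D))
dynamic_f = rng.normal(size=(T, D))
fused = gated_fusion(static_f, dynamic_f,
                     0.1 * rng.normal(size=(2 * D, D)), np.zeros(D))
```

Because the gate is strictly between 0 and 1, each fused value lies between the corresponding static and dynamic values, which is what makes this a soft selection rather than a hard switch.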