Control of competing parameters such as thermoelectric (TE) power and the electrical and thermal conductivities is essential for high-performance thermoelectric materials. Bulk-nanocomposite materials have shown promising improvements in TE performance owing to reduced thermal conductivity and charge-carrier filtering at interfaces and grain boundaries. Consequently, it has become pressingly important to understand the formation mechanisms and stability of interfaces and grain boundaries, along with their effects on the physical properties. We report here the effects of the thermodynamic environment during spark plasma sintering (SPS) on the TE performance of bulk nanocomposites of chemically synthesized Bi2Te2.7Se0.3 nanoplatelets. Four pellets of nanoplatelet powder synthesized in the same batch were prepared by SPS at temperatures of 230, 250, 280, and 350 °C. X-ray diffraction, transmission electron microscopy, and thermoelectric and thermal transport measurements show that the pellet sintered at 250 °C exhibits minimal grain growth and an optimal number of interfaces, yielding an efficient TE figure of merit, ZT ∼ 0.55. For the nanoplatelet composites pelletized at high temperature (350 °C), a concurrent rise in electrical and thermal conductivities together with a deleterious decrease in thermoelectric power is observed, resulting from grain growth and rearrangement of the interfaces and grain boundaries. Cross-sectional electron microscopy investigations indeed show significant grain growth. Our study highlights an optimized temperature range for the pelletization of nanoplatelet composites for TE applications. The results provide a subtle understanding of the grain-growth mechanism and of the filtering of low-energy electrons and phonons by thermoelectric interfaces.
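Why these parameters compete can be made explicit through the standard dimensionless figure of merit (this definition is not stated in the abstract but is the conventional one behind the quoted ZT ∼ 0.55):

```latex
ZT = \frac{S^{2}\,\sigma\,T}{\kappa}, \qquad \kappa = \kappa_{\mathrm{el}} + \kappa_{\mathrm{lat}}
```

Here \(S\) is the Seebeck coefficient (TE power), \(\sigma\) the electrical conductivity, \(\kappa\) the total thermal conductivity (electronic plus lattice contributions), and \(T\) the absolute temperature. Because raising \(\sigma\) typically raises \(\kappa_{\mathrm{el}}\) and lowers \(S\), interface engineering that scatters phonons (suppressing \(\kappa_{\mathrm{lat}}\)) and filters low-energy carriers (preserving \(S\)) is one of the few routes to increase ZT, which is the rationale for the sintering-temperature optimization reported above.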
This paper contributes to the growing literature on empirical evaluation of explainable AI (XAI) methods by presenting a comparison of the effects of a set of established XAI methods in AI-assisted decision making. Specifically, based on our review of previous literature, we highlight three desirable properties that ideal AI explanations should satisfy: improving people's understanding of the AI model, helping people recognize the model's uncertainty, and supporting people's calibrated trust in the model. Through randomized controlled experiments, we evaluate whether four common types of model-agnostic explainable AI methods satisfy these properties on two types of decision-making tasks in which people perceive themselves as having different levels of domain expertise (i.e., recidivism prediction and forest cover prediction). Our results show that the effects of AI explanations differ substantially across decision-making tasks in which people have varying levels of domain expertise, and that many AI explanations satisfy none of the desirable properties for tasks in which people have little domain expertise. Further, for decision-making tasks about which people are more knowledgeable, the feature-contribution explanation is shown to satisfy more of the desiderata for AI explanations, while the explanation considered to resemble how humans explain decisions (i.e., counterfactual explanation) does not seem to improve calibrated trust. We conclude by discussing the implications of our study for improving the design of XAI methods to better support human decision making.
CCS CONCEPTS: • Human-centered computing → Empirical studies in HCI; • Computing methodologies → Machine learning.