Trust and trustworthiness judgments have been studied in the context of social, business, and romantic relationships as well as between humans and automation. This article extends the prior research to explore how programmers assess code for trustworthiness when asked to reuse existing computer code. We used cognitive task analysis methods to explore experienced programmers' first-person perspectives on code reuse. We elicited specific cues and strategies used to assess trustworthiness in real-world scenarios. Using qualitative analysis techniques, we grouped cues into three trustworthiness factors: performance, transparency, and reputation. We also identified environmental factors that influence acceptable levels of trust, including customer needs and requirements, organizational resources and constraints, and consequences of failure. We propose a descriptive model based on these findings. These findings have important implications for organizations that intend to reuse, adapt, and extend code over time. Writing code with factors such as reputation, transparency, and performance in mind will increase the likelihood that it will be trusted in the near term and be reusable in the future. Furthermore, this research provides an important foundation for future studies exploring trusting behaviors, individual differences, and the ability to detect malicious code.
In two studies, lesbians, gay men, and bisexuals were queried concerning mistakes that well-meaning heterosexual people have made when interacting with them. In qualitative, open-ended research, we determined that the most common mistakes concerned heterosexuals' pointing out that they know someone who is gay, emphasizing their lack of prejudice, and relying on stereotypes about gays. Following up with a quantitative, closed-ended questionnaire, we determined that the mistakes respondents experienced most often involved heterosexuals (a) relying on stereotypes and (b) ignoring gay issues; the most annoying mistakes were heterosexuals (a) using subtle prejudicial language and (b) not owning up to their discomfort with gay issues. We used two theoretical perspectives, shared reality theory and the contact hypothesis, to analyze the quantitative responses. Implications for intergroup relationships between heterosexual people and gay people are discussed.
We measured color-breakup thresholds for a simple field-sequential color stimulus while varying its luminance, contrast, and retinal velocity. Data analysis yielded an equation that predicts whether color breakup will be visible for specified viewing conditions. We compare this equation with an earlier version and discuss its uses and limitations.