Inspired by the recent UNESCO report I'd Blush if I Could, we tackle some of the issues surrounding gendered AI by exploring the impact of feminist social robot behaviour on human-robot interaction. Specifically, we consider (i) the use of a social robot to encourage girls to consider studying robotics (and the expression of feminist sentiment in this context), (ii) whether and how robots should respond to abusive and anti-feminist sentiment, and (iii) how 'female' robots can be designed to challenge current gender-based norms of expected behaviour. We demonstrate that, whilst there are complex interactions between robot, user and observer gender, we were able to increase girls' perceptions of robot credibility and reduce gender bias in boys. We suggest our work provides positive evidence for designing against the gender-based norms of current digital assistants and traditional human roles, and for the future role robots might play in reducing our gender biases.
Machine learning systems have become ubiquitous in our society. This has raised concerns about the potential discrimination these systems might exert owing to unconscious bias present in the data, for example regarding gender and race. Whilst this issue has been proposed as an essential subject for new school AI curricula, research has shown that it is a difficult topic for students to grasp. We propose an educational platform tailored to raising awareness of gender bias in supervised learning, with the novelty of using Grad-CAM as an explainability technique that enables the classifier to visually explain its own predictions. Our study demonstrates that preadolescents (N=78, aged 10-14) significantly improve their understanding of the concept of bias in terms of gender discrimination and become better at recognizing biased predictions when they interact with the interpretable model, highlighting the platform's suitability for educational programs.
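To illustrate the explainability technique the abstract names, the following is a minimal Grad-CAM sketch in PyTorch. The tiny CNN, its layer sizes, and the input are placeholders, not the authors' platform; the Grad-CAM computation itself (gradient-weighted feature maps, ReLU, normalisation) follows the standard formulation.

```python
# Minimal Grad-CAM sketch. The TinyCNN model and input are illustrative
# stand-ins, not the platform described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        fmap = self.features(x)           # (B, 16, H, W) feature maps
        pooled = fmap.mean(dim=(2, 3))    # global average pooling
        return self.head(pooled), fmap

def grad_cam(model, x, target_class):
    """Heatmap of the image regions that drove the target-class score."""
    logits, fmap = model(x)
    fmap.retain_grad()                    # keep gradients of the feature maps
    logits[0, target_class].backward()
    # Channel weights: mean gradient over the spatial dimensions
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * fmap).sum(dim=1))   # (B, H, W)
    cam = cam / (cam.max() + 1e-8)              # normalise to [0, 1]
    return cam.detach()

model = TinyCNN()
x = torch.rand(1, 3, 32, 32)
cam = grad_cam(model, x, target_class=0)
print(cam.shape)  # torch.Size([1, 32, 32])
```

Overlaying such a heatmap on the input image is what lets a classifier "visually explain" a prediction, e.g. revealing that a gender classifier attends to background cues rather than the person.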
For effective human-robot collaboration, it is crucial for robots to understand users' requests by perceiving three-dimensional space and to ask reasonable follow-up questions when there are ambiguities. Existing studies on comprehending the object descriptions in such requests have addressed this challenge only for limited object categories that can be detected or localized with existing object detection and localization modules. Further, they have mostly comprehended object descriptions from flat RGB images, without considering the depth dimension. In the wild, however, it is impossible to limit the object categories that may be encountered during an interaction, and perception of 3-dimensional space, including depth information, is fundamental to successful task completion. To understand described objects and resolve ambiguities in the wild, we suggest, for the first time, a method leveraging explainability. Our method focuses on the active areas of an RGB scene to find the described objects without imposing the previous constraints on object categories and natural language instructions. We further improve our method to identify the described objects by incorporating the depth dimension. We evaluate our method on varied real-world images and observe that the regions it suggests can help resolve ambiguities. Compared with a state-of-the-art baseline, our method performs better in scenes with ambiguous objects that existing object detectors cannot recognize. We also show that using depth features significantly improves performance, both in scenes where depth data is critical to disambiguating the objects and across our full evaluation dataset, which contains objects that can be specified with and without the depth dimension.
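One way the ambiguity-resolution step described above might look in code: given an explainability heatmap over the scene and a set of candidate object regions, score each region by its activation and ask a follow-up question when the top two scores are too close to call. The box scoring and the ambiguity margin here are assumptions for illustration, not the paper's exact method.

```python
# Sketch of heatmap-based ambiguity resolution. The mean-activation
# scoring and the `margin` threshold are illustrative assumptions.
import numpy as np

def score_box(heatmap, box):
    """Mean activation inside a box given as (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return float(heatmap[y0:y1, x0:x1].mean())

def resolve(heatmap, boxes, margin=0.15):
    """Pick the box best explained by the heatmap; if the top two
    scores are within `margin`, flag ambiguity so the robot can ask
    a follow-up question instead of guessing."""
    scores = [score_box(heatmap, b) for b in boxes]
    order = np.argsort(scores)[::-1]          # indices, best first
    best = order[0]
    second = order[1] if len(order) > 1 else None
    ambiguous = second is not None and scores[best] - scores[second] < margin
    return int(best), bool(ambiguous)

heatmap = np.zeros((64, 64))
heatmap[10:30, 10:30] = 1.0                   # activation on one region only
boxes = [(10, 10, 30, 30), (40, 40, 60, 60)]
print(resolve(heatmap, boxes))                # (0, False)
```

With two boxes that both overlap the active region, the score gap shrinks below the margin and `resolve` returns `ambiguous=True`, which is the cue to ask a clarifying question.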
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and indicate whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.