Four artificial intelligence (AI) image generators (Adobe Firefly, DALL·E 2, Craiyon, and DreamStudio) were used to produce "faces" of chemists holding varying occupational titles. The images were analyzed for representational biases and compared to data available from the National Science Foundation (NSF). Presentational biases were analyzed within the American Chemical Society (ACS) Diversity, Equity, Inclusion, and Respect guidelines, with a particular focus on the presentation of diversity and disability. Amplification of both representational and presentational biases was observed for all four AI generators despite alignment of overall demographic trends with NSF and ACS reporting. The influence of occupationally tied, chemistry-specific prompts on the demographic distributions of AI-generated images was investigated. At least one AI image generator assigned women and racial minorities to "assistant" positions while men and white individuals occupied the "top" positions in the field. Our data also demonstrate the erasure of people with visible disabilities in the AI-generated outputs. Perspectives of current students in chemistry classes at Winona State University on "what a chemist looks like" were collected and analyzed. A disturbingly prevalent "white male" image of a chemist emerged, even among students identifying as female or as a person of color. This research raises concerns about relying on "black box" algorithms trained on consumer feedback to generate unbiased images when novices themselves perpetuate both representational and presentational biases before any exposure to AI's perspectives; these concerns are especially pressing in chemistry, where substantial NSF and ACS efforts have been directed toward diversifying and revitalizing the next generation of scientists.
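The image-collection step described above (submitting occupation-varied prompts and gathering the returned images) can be approximated programmatically for at least one of the generators studied. The sketch below is illustrative only and is not the authors' actual procedure: it assumes access to the DALL·E 2 model through the OpenAI Images API via the openai Python client, and the prompt wording, title list, and output handling are all assumptions introduced for this example.

```python
# Illustrative sketch only -- not the study's data-collection pipeline.
# Assumes the OpenAI Python client (openai >= 1.0) with OPENAI_API_KEY set
# in the environment; prompt wording and title list are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical chemistry-related occupational titles varied across prompts.
TITLES = [
    "chemist",
    "chemistry professor",
    "laboratory assistant",
    "chemistry laboratory technician",
]


def generate_face_images(title: str, n: int = 4) -> list[str]:
    """Request n portrait-style images for one occupational title; return image URLs."""
    response = client.images.generate(
        model="dall-e-2",
        prompt=f"a portrait photograph of the face of a {title}",
        n=n,
        size="512x512",
    )
    return [item.url for item in response.data]


if __name__ == "__main__":
    for title in TITLES:
        for i, url in enumerate(generate_face_images(title)):
            print(f"{title} [{i}]: {url}")
```

A script of this kind would only gather the images; the demographic coding against NSF data and the presentational analysis against ACS guidelines described in the abstract would still require separate, human-led evaluation of the outputs.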