
Empowering Diversity and Confronting Bias in AI (A Virtual Facilitated Discussion)
Wednesday, September 17, 2025, 12:15 PM - 1:15 PM (EDT)
Description
Join us for a thought-provoking virtual read-and-discuss TechWork session on Wednesday, September 17, 2025, from 12:15 PM to 1:15 PM EDT, as we delve into the critical topic of empowering diversity and confronting bias in AI. This session invites participants to reflect on the intersections of technology, equity, and responsibility. After reading the articles listed below in advance, we will come together for a facilitated discussion led by three moderators from Penn State University Park, exploring key takeaways, personal reactions, and what it means to build inclusive AI for the future. Come ready to share, listen, and challenge ideas with respect and curiosity.
This interactive session will be guided and facilitated by:
• Dr. Kelley Cotter, Assistant Professor, The Pennsylvania State University
• Ankolika De, PhD Candidate, The Pennsylvania State University
• Benji Davis, Informatics PhD Student, The Pennsylvania State University
Session Pre-Work (Approximately 20 Minutes): Attendees are asked to complete the following pre-work before attending:
• Read Article (10 minutes): Covert Racism in AI: How Language Models Are Reinforcing Outdated Stereotypes
Link to article: https://hai.stanford.edu/news/covert-racism-ai-how-language-models-are-reinforcing-outdated-stereotypes
This article gives a brief overview of a paper by Stanford University researchers who found that major large language models covertly reproduce racial stereotypes, especially against African Americans. It is quite short and offers a good example of what can happen when diverse perspectives and training data are excluded from the development process.
• Read Article (10 minutes): Empower Diversity in AI Development
Link to article: https://cacm.acm.org/opinion/empower-diversity-in-ai-development/
This article from Communications of the ACM argues that while great technical strides have been made to improve diversity within AI models, we also need to focus on the social side of development, i.e., developers' implicit (or explicit) social biases. It then provides practical examples of how organizations can improve diversity.