Research
Globe.AI faculty, students, postdocs, and partners collaborate on a wide range of research projects aimed at better understanding and designing AI systems for people from diverse cultural backgrounds.
Ongoing Projects
Understanding and designing for culture in AI
Understanding whether current AI systems represent the values, norms, behaviors, and practices of specific cultures more than others is essential for designing future AI systems that can equally support people from diverse backgrounds.
We have conducted award-winning research showing how AI models and datasets commonly align more with people who are Western, Educated, Industrialized, Rich, and Democratic (so-called WEIRD people).
For example, toxicity detection models mainly reflect the annotations of WEIRD people, GPT-3’s output is primarily aligned with White US Americans’ values, and AI decision-support systems are primarily designed for people with a decision-making style most common in Western countries.
Some of the questions that arise from this work are:
How should AI-supported decision-making tools be designed to best serve people in various countries and cultures?
Can the use of AI systems trigger symptoms similar to culture shock?
And what happens if an AI mimics a person's culture, dialect, and other aspects of their cultural identity?
Teaching AI systems to learn cultural values
One of the great challenges for the future of human-AI interaction is the current inability of AI to incorporate diverse cultural values, norms, behaviors, and practices. Can an AI, just like a child, grow up in a certain culture, dynamically adopt cultural values, and apply them in new contexts? We have done pioneering research on teaching AI agents to acquire a culturally attuned value system implicitly from humans using inverse reinforcement learning. This work showed that an AI agent learning from a particular cultural group can acquire the altruistic characteristics reflective of that group's behavior, and that this learned value system can generalize to new scenarios requiring novel altruistic judgments.
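The lab's actual training setup is described in its publications; purely as an illustration of the underlying idea, the sketch below shows maximum-entropy inverse reinforcement learning (a standard formulation, not the lab's code) recovering a reward function from demonstrations in a toy five-state world. The environment, demonstration behavior, and hyperparameters are all invented for the example.

```python
import numpy as np

# Toy environment: a 5-state chain; action 0 moves left, action 1 moves right.
N_STATES, N_ACTIONS, HORIZON = 5, 2, 6

def step(s, a):
    """Deterministic transition with clamped endpoints."""
    return max(s - 1, 0) if a == 0 else min(s + 1, N_STATES - 1)

FEATURES = np.eye(N_STATES)  # one-hot state features

# "Demonstrations": the demonstrator always moves right, toward state 4.
def demo_trajectory():
    s, traj = 0, []
    for _ in range(HORIZON):
        traj.append(s)
        s = step(s, 1)
    return traj

demos = [demo_trajectory() for _ in range(20)]
emp_counts = np.mean([FEATURES[traj].sum(axis=0) for traj in demos], axis=0)

def expected_counts(w):
    """Forward-backward pass of finite-horizon MaxEnt IRL."""
    r = FEATURES @ w  # per-state reward under current weights
    # Backward: soft value iteration, yielding time-dependent softmax policies.
    V = np.zeros(N_STATES)
    policies = []
    for _ in range(HORIZON):
        Q = np.array([[r[s] + V[step(s, a)] for a in range(N_ACTIONS)]
                      for s in range(N_STATES)])
        V = np.logaddexp(Q[:, 0], Q[:, 1])
        policies.append(np.exp(Q - V[:, None]))
    policies.reverse()  # policies[t] is the policy at timestep t
    # Forward: propagate state-visitation mass from the start state.
    D = np.zeros(N_STATES)
    D[0] = 1.0
    counts = np.zeros(N_STATES)
    for t in range(HORIZON):
        counts += D
        D_next = np.zeros(N_STATES)
        for s in range(N_STATES):
            for a in range(N_ACTIONS):
                D_next[step(s, a)] += D[s] * policies[t][s, a]
        D = D_next
    return counts @ FEATURES

# Gradient ascent: match expected feature counts to the demonstrators'.
w = np.zeros(N_STATES)
for _ in range(200):
    w += 0.1 * (emp_counts - expected_counts(w))

print(np.round(w, 2))  # the largest learned weight should sit at state 4
```

The key property mirrored here is that the reward is never given explicitly: it is inferred from behavior alone, which is what allows a value system to be picked up implicitly from a particular group's demonstrations.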
Auditing problematic socio-cultural behaviors in large language models
How do covert harms surface in LLM-generated conversations involving non-Western concepts like caste, compared to Western ones such as race? How do LLMs affect political culture? When and how are these biases introduced?
Studying AI ethically and at scale
How can we study the perception and use of AI systems, datasets, and models with culturally diverse people? How can we do so in an ethical way? In our work, we both employ the virtual lab LabintheWild to reach participants around the globe and work with local communities and AI practitioners to develop protocols for ethical studies of AI.
Understanding and leveraging AI for educational purposes in various cultures
While AI has already greatly impacted educational practices around the world, there is a lack of guidance on how to use AI to maximize the benefits to teachers and students. In our work, we study how AI is being used in learning contexts across cultures, what the benefits are, and how the risks could be mitigated. Importantly, one of our goals is to develop guidelines for the use of AI in educational settings both online (such as in MOOCs) and offline (such as in traditional classrooms).
View Papers!