Profile of Yejin Choi: Architect of Human-Centered AI and Commonsense Reasoning

Introduction

In the contemporary landscape of artificial intelligence (AI), few figures have shaped the field with as much intellectual courage and impact as Yejin Choi. A South Korean-American computer scientist and a leading authority on natural language processing (NLP), commonsense reasoning, and multi-modal AI, Choi is currently the Dieter Schwarz Foundation Professor of Computer Science and Senior Fellow at Stanford’s Institute for Human-Centered Artificial Intelligence (HAI)1. Recognized as one of the "100 Most Influential People in AI 2025" by TIME, her career spans academia, cutting-edge research institutes, leadership roles in industry, and a global influence as a public intellectual in AI ethics, safety, and societal impact2,3.

This report provides a comprehensive profile of Yejin Choi, structured around her early life and education, academic and professional trajectory, seminal research contributions, current roles and vision at Stanford, her influence within the AI community, and the honors she has accumulated, including the highly prestigious MacArthur Fellowship and her recent TIME100 recognition.

1. Early Life and Education

Childhood and Early Interests

Yejin Choi was born in 1977 in South Korea. From an early age, she exhibited a strong curiosity about science and technology. As recounted in multiple interviews, Choi was one of the few girls active in science competitions in her youth, an experience that nurtured both her love of intellectual risk and her awareness of societal biases in STEM fields4. Her formative experiences, competing in model airplane contests and navigating cultural expectations, shaped her research interests in social norms and their influence on AI.

Academic Foundations

Choi pursued her undergraduate studies at the prestigious Seoul National University, majoring in Computer Science and Engineering. This period instilled in her a deep fascination with the intersection of language and computation, a theme that would define her career5.

After graduation, Choi moved to the United States to embark upon doctoral research at Cornell University, under the mentorship of Claire Cardie, a pioneering figure in NLP6. At Cornell, Choi developed rigorous foundations in machine learning, sentiment analysis, and opinion mining. Her dissertation, “Fine-grained opinion analysis: structure-aware approaches,” contributed new methods for identifying sources of opinions using conditional random fields and extraction patterns, laying the groundwork for contemporary sentiment analysis and opinion mining in NLP6.

2. Academic Appointments and Career Timeline

Career Development: From Stony Brook to Washington

Upon completing her Ph.D. in 2010, Choi started as an Assistant Professor of Computer Science at SUNY Stony Brook (2010-2014). Here, her research began to bridge theoretical NLP with pressing real-world challenges, such as detecting deceptive opinion spam in online reviews, a problem of growing societal concern7.

Choi’s transition to the Paul G. Allen School of Computer Science & Engineering at the University of Washington in 2014 marked a pivotal shift, propelling her into globally visible research on commonsense AI, vision-language modeling, and AI ethics. She held the endowed Brett Helsel Professorship and later served as the Wissner-Slivka Chair of Computer Science8.

Simultaneously, she became a senior director at the Allen Institute for Artificial Intelligence (AI2), leading the Mosaic project, a team dedicated to endowing machines with common sense and social norms. Her dual appointments at UW and AI2 allowed her to initiate landmark projects that blended symbolic reasoning, neural networks, and ethical AI9,10.

Recent Appointments and Stanford HAI

In 2025, Stanford University appointed Choi as the Dieter Schwarz Foundation HAI Professor and Senior Fellow at HAI, where she also serves as co-director1. This move positions her among other leading figures such as Fei-Fei Li at the forefront of human-centered, ethical AI research.

3. Current Roles and Responsibilities at Stanford

Stanford Leadership and HAI Mission

At Stanford, Yejin Choi holds three major positions:

  • Dieter Schwarz Foundation HAI Professor of Computer Science
  • Senior Fellow, Stanford Institute for Human-Centered Artificial Intelligence (HAI)
  • HAI Co-director (select research programs, 2025-present)

Choi’s role at HAI is to advance interdisciplinary AI research that benefits humanity, aligns technology with societal values, and guides policy for responsible deployment of AI. She stresses the need for diverse viewpoints in AI safety and governance, and leads HAI’s efforts in fostering equity, fairness, and pluralistic alignment in AI systems1.

Her current research interests at Stanford include:

  • Fundamental limits and capabilities of large language models (LLMs)
  • Alternative training recipes and architectures for language models
  • Symbolic methods and neuro-symbolic integration for robust reasoning
  • AI safety, moral norms, and pluralistic alignment
  • Knowledge discovery and interpretability in AI
  • Efficient small language models (SLMs) that democratize access to AI1

Choi’s leadership at Stanford is both practical and visionary. She is tasked with building bridges across academia, policy, industry, and society, ensuring that AI research not only scales technically but does so ethically, inclusively, and in alignment with human needs.

4. Research Focus and Contributions

A. Foundations in Natural Language Processing

Early Contributions

Choi’s academic work began with fine-grained opinion analysis, using machine learning (notably conditional random fields) to extract sentiments and opinion sources from complex text corpora. This foundational work set new standards for accuracy in sentiment analysis and shaped subsequent research on news and social media credibility6.

Detecting Deceptive Opinion Spam

At Stony Brook, Choi led the development of advanced techniques for automatically detecting fake reviews on platforms like Yelp and Amazon, a task requiring a subtle understanding of linguistic patterns and the intent behind text11. Linguistic markers such as pronoun usage and sentence structure proved to be powerful signals of deception. This work has since become a cornerstone of efforts against online misinformation and fake content.
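The kinds of surface signals described above can be operationalized as simple text features fed to a classifier. The toy sketch below is illustrative only: the feature set, word list, and example review are invented for this profile, not drawn from the actual deception-detection models.

```python
import re

# Toy surface features loosely inspired by the signals mentioned in the text
# (pronoun usage, sentence structure); real systems trained classifiers over
# many such features plus word n-grams.
FIRST_PERSON = {"i", "me", "my", "mine", "we", "our", "us"}

def linguistic_features(review: str) -> dict:
    """Extract a few simple, deception-related surface statistics."""
    words = re.findall(r"[a-z']+", review.lower())
    sentences = [s for s in re.split(r"[.!?]+", review) if s.strip()]
    n_words = max(len(words), 1)
    return {
        # fraction of tokens that are first-person pronouns
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / n_words,
        # average sentence length in words
        "avg_sentence_len": n_words / max(len(sentences), 1),
        # raw count of exclamation marks
        "exclamations": review.count("!"),
    }

feats = linguistic_features("I loved it! My family loved it! We will return!")
print(feats["exclamations"])  # 3
```

In a full pipeline, vectors like these would be passed to a standard classifier trained on labeled genuine and deceptive reviews.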

Sentiment Analysis and Contextual Reasoning

Moving to the University of Washington, Choi expanded her interests to include context-aware language modeling. She worked on models capable of understanding not just the surface meaning of words, but the intent, emotion, and subtext that define human communication. Choi’s research here fueled major advances in conversational AI and dialogue systems, pushing machines toward genuine comprehension of social nuance5.

B. Commonsense Reasoning and Knowledge

Challenge of Commonsense AI

One of Choi’s most enduring and influential areas has been commonsense reasoning: the ability of AI systems to make implicit inferences about the world, much as humans unconsciously understand cause and effect, intentions, or social norms. Despite being “trivial for humans but so hard for machines,” as Choi notes in her keynote talks, commonsense reasoning remains a grand challenge in AI12,13.

The ATOMIC and COMET Projects

ATOMIC: Developed at AI2 under Choi’s leadership, ATOMIC (Atlas of Machine Commonsense) is a massive, open commonsense knowledge graph that encodes millions of if-then causal relationships about everyday events, actions, and their social or physical consequences14.

COMET: Building on ATOMIC, COMET (Commonsense Transformers) is a neural reasoning engine that generates plausible, open-text inferences for unstructured scenarios. COMET leverages neural language models to output diverse assumptions, possible intents, and likely effects of events described in natural language.

This neuro-symbolic blend, combining explicit, human-readable symbolic knowledge with powerful neural systems, enables machines to “think aloud” and predict human-like commonsense judgments in new situations13.
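At its core, a knowledge graph like ATOMIC is a large collection of (event, relation, inference) tuples. A minimal sketch of how such if-then tuples might be stored and queried follows; the relation names (xIntent, xEffect, oReact) echo ATOMIC's conventions, but the example events and inferences are invented for illustration, not actual ATOMIC entries.

```python
from collections import defaultdict

# ATOMIC-style commonsense tuples: (event, relation, inference).
# "xIntent" = the agent's likely intent, "xEffect" = effect on the agent,
# "oReact" = how others react. The tuples themselves are illustrative.
TUPLES = [
    ("PersonX pays for coffee", "xIntent", "to get a drink"),
    ("PersonX pays for coffee", "xEffect", "PersonX has less money"),
    ("PersonX compliments PersonY", "oReact", "PersonY feels flattered"),
]

# Index the tuples by event, then by relation, for direct lookup.
graph = defaultdict(lambda: defaultdict(list))
for event, relation, inference in TUPLES:
    graph[event][relation].append(inference)

def query(event: str, relation: str) -> list:
    """Return all stored commonsense inferences for an event/relation pair."""
    return graph[event].get(relation, [])

print(query("PersonX pays for coffee", "xIntent"))  # ['to get a drink']
```

The limitation of such a symbolic store is that it only covers events it has seen; COMET's contribution was training a neural language model on these tuples so it can generate plausible inferences for novel, unseen events.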

Vision Models and Multi-Modal AI

Recognizing that real-world cognition is inherently multi-modal, Choi pioneered multi-modal AI systems that combine visual and textual information. Her work on automatic image captioning and visual story sequencing allows AI to draw contextual meaning from both pictures and language, which is essential for tasks like robot perception, document analysis, and assistive technologies11.

Moral Reasoning and Ethical AI

Choi has also led research on moral and social norm reasoning, creating prototypes such as Ask Delphi, an AI designed to weigh in on ethical dilemmas. Delphi goes beyond rule-based ethical frameworks, instead learning moral norms from data and calibrating judgments to real-world complexity and cultural context11,15.

Pluralistic Alignment

A primary research thrust at Stanford HAI, Choi’s work on pluralistic alignment, that is, designing AI to reflect a diversity of human values rather than a single “gold” standard, addresses the risk of homogenized, biased AI. This includes creating pluralistic benchmarks, methods, and value catalogs such as ValueGenome for aligning AI with varied human perspectives10.

C. Advances in Model Efficiency and Democratization

Choi is a prominent advocate of small language models (SLMs) as alternatives to energy-hungry “bigger is better” LLMs. SLMs are more accessible to researchers, ecologically sustainable, and help distribute AI capability more equitably-a recurring theme in her TED and public talks9.

D. Bias, Fairness, and Social Impact

She was among the first to highlight how AI models amplify existing biases (gender, race, class) hidden in language data. Collaborative studies with her students revealed power imbalances in film character descriptions and social narratives, guiding the community toward more responsible dataset creation and evaluation15.

E. Keynotes, Media Appearances, and Public Communication

Choi has become an influential voice on the global stage for AI safety, ethics, and education:

  • TED 2023 Main Stage Speaker: Her talk on the brittleness and societal impact of extreme-scale AI models reached millions, highlighting why common sense, safety, and diversity cannot be achieved by brute-force scaling alone16.
  • Keynote Speaker: She has delivered keynote addresses at major AI conferences, including ACL, CVPR, ICLR, MLSys, VLDB, WebConf, and AAAI, directly shaping the discourse on AI research directions17.
  • Media and Advocacy: Frequently interviewed by TIME, NPR, major tech news outlets, and AI thought leader organizations, Choi’s perspectives help shape public and policy discussions on the future of AI3.

5. Notable Publications and Their Impact

Several of Choi’s publications have received Best or Outstanding Paper Awards at major conferences or Test-of-Time Awards, marking their sustained relevance.

Each major line of her published work represents a significant advancement in its subfield:

  • Early sentiment/opinion analysis and deception detection laid groundwork for trustworthy AI on social platforms.
  • Multi-modal integration (text and vision) achieved breakthroughs in image captioning, story understanding, and cross-modal learning.
  • Commonsense reasoning (ATOMIC, COMET) established the benchmark structure of neuro-symbolic AI.
  • Model efficiency and neuro-symbolic distillation shaped today’s turn toward interpretable, sustainable, and accessible AI models.
  • Pluralistic and ethical AI frameworks (Delphi, pluralistic alignment) are now central to deployment at scale.

6. Recognitions, Awards, and Honors

International Accolades

Yejin Choi’s career is punctuated by an unusual array of landmark awards, recognizing her as both a pioneer and an enduring leader.

Major Awards and Honors

  • MacArthur Fellowship (“Genius Grant”, 2022): Recognizes individuals for creativity and potential for transformative impact, an honor rare for computer scientists, especially researchers in NLP15,18.
  • TIME's “100 Most Influential People in AI” (2023, 2025): Two-time honoree reflecting deep and public influence on AI’s trajectory and public understanding19,3.
  • AI2050 Senior Fellowship (2024): For AI research addressing “hard problems” in the safety, control, and alignment of advanced AI10.
  • IEEE AI’s “10 to Watch” (2016): Early-career international recognition for innovative AI contributions.
  • Borg Early Career Award (BECA, 2018): Recognizing outstanding contributions by early-career female computer scientists.
  • Inaugural Alexa Prize (2017): For innovation in conversational AI.

Best Paper and Outstanding Paper Awards

  • Association for Computational Linguistics (ACL) 2023: Best Paper ("Do Androids Laugh at Electric Sheep?")20.
  • ICML, NeurIPS, AAAI, CVPR: Multiple Outstanding Paper and Test-of-Time Awards, reflecting technical depth and field-wide impact1,13.
  • Longuet-Higgins Prize, CVPR 2021: For enduring contributions to computer vision.

Additional Honors

  • Main stage speaker: TED 2023; keynote addresses at major AI conferences
  • Mentor and educator awards: For graduate thesis guidance and AI education5.
  • Scientific Advisor: Kyutai (French foundational AI research group), reflecting her reach beyond the U.S.6.

7. Teaching, Mentorship, and Leadership

Educator and Mentor

Beyond her research, Choi is an influential educator, shaping the next generation of AI scientists. At the University of Washington and Stanford, she has taught rigorous courses on NLP, knowledge and reasoning, and ethics in AI. Notably, her classes combine technical depth with practical experiments in responsible AI development, fairness, and interpretability5.

Choi’s mentorship style is described as collaborative, open, and risk-tolerant, encouraging students to take “adventurous routes” in research. Many former students have assumed leadership in academia and AI industry labs. She advocates for increasing diversity in AI and regularly mentors women and underrepresented minorities in computer science15.

Community and Collaborative Leadership

She often leads cross-institutional and international collaborations, spanning cognitive neuroscience (for understanding theory of mind), ethics (with moral philosophers), and machine learning communities. Her vision of “AI for social good” is advanced through interdisciplinary initiatives at Stanford HAI and in partnerships with institutes in the U.S. and Europe10.

8. Media Influence, Public Engagement, and Contemporary Impact

TIME100: Most Influential People in AI 2025

The 2025 TIME100 AI issue lists Yejin Choi as an influential "Thinker," honoring not just her research output but her wide influence on the AI field’s ethical direction, scalability debates, and public discourse2,3. TIME details Choi's push for energy-efficient SLMs, emphasizing her critique of the “scale for scale’s sake” LLM race. She is praised for leading the conversation on democratizing AI access, pluralistic values, and aligning AI development with the public good.

TED, Keynotes, and Outreach

  • TED Talks: Her 2023 talk, "Why AI is incredibly smart and shockingly stupid," eloquently critiques the limits of scale-centric AI and speaks to millions globally about the importance of common sense, transparency, and sustainable AI16.
  • Mainstream media presence: She is regularly featured in NPR, GeekWire, academic podcasts, and technical commentary on responsible AI, influencing both policy and industry perspectives15.
  • Keynote and panel leadership: Choi’s presence at global policy gatherings, technical workshops, and ethics roundtables magnifies her influence far beyond the university setting20.

9. Research Vision: The Next Decade

Research Directions at Stanford HAI

  • Human-Centered AI: Focusing on how AI can augment, rather than replace, human intelligence and well-being.
  • Pluralistic Alignment: Ensuring AI systems respect diverse human backgrounds, moral perspectives, and group values-making AI trustworthy and inclusive.
  • Commonsense Robustness: Striving for models that can reason with the “dark matter” of commonsense knowledge.
  • Algorithmic Efficiency: Building efficient, accessible language models that can be scrutinized and improved by the broader AI community, not just tech giants.
  • Ethical and Social Impact: Advancing research on fairness, accountability, and policy frameworks for AI deployment in sensitive domains.

10. Conclusion: Yejin Choi's Lasting Legacy

Yejin Choi’s journey from a curious schoolgirl in South Korea to a globally decorated AI scientist and Stanford leader embodies a unique blend of adventurous spirit, technical brilliance, and ethical foresight. Her research and advocacy have indelibly shaped the trajectory of NLP, vision-language reasoning, and commonsense AI, while her public voice has been instrumental in keeping the AI revolution accountable to social good, diversity, and robust human values.

Her current standing at Stanford HAI, as highlighted by top-tier recognitions like the MacArthur Fellowship and the TIME100 honor, signals a continuing leadership role in tackling some of the field’s hardest questions: Can we make AI not just smart, but wise? Not just powerful, but fair? And not just scalable, but humanistic?

In sum, Yejin Choi is not merely a technical innovator; she is an architect of AI’s ethical and human-centered future.

References