Ali Emami

Assistant Professor

Emory University, USA

I’m an Assistant Professor in the Department of Computer Science at Emory University. My research focuses on natural language processing and machine learning, with an interest in evaluating and interpreting large language models. I investigate how these models reason, generalize, and reflect patterns in language and society, across both predictable and surprising behaviors.

🎓 Prospective PhD Students 🎓 I am recruiting fully funded PhD students for the Fall 2026 cohort (applications due in the Fall 2025 cycle)! Please see the group page for more details.

Interests
  • Natural Language Processing
  • Machine Learning
  • Ethics, Bias, and Fairness
  • Natural Language Understanding
  • AI Interpretability and Reliability
  • Computational Social Science
Education
  • PhD in Computer Science, 2021

    McGill University/Mila, Canada

  • MSc in Computer Science, 2016

    McGill University, Canada

  • BSc in Joint Physics & Computer Science, 2014

    McGill University, Canada


Recent Publications


We Politely Insist: Your LLM Must Learn the Persian Art of Taarof
EMNLP 2025


Personality Matters: User Traits Predict LLM Preferences in Multi-Turn Collaborative Tasks
EMNLP 2025


The World According to LLMs: How Geographic Origin Influences LLMs Entity Deduction Capabilities
EMNLP 2025 Findings


The World According to LLMs: How Geographic Origin Influences LLMs Entity Deduction Capabilities
COLM 2025


Join Our Research Group at Emory

I am recruiting fully funded PhD students for Fall 2026 (applications due December 2025), as well as MS students interested in thesis research. I’m building a new interdisciplinary research group at the intersection of Natural Language Processing, AI Fairness, and Human-Centered AI.

Research Focus

We develop methods to make AI systems more fair, robust, and reliable while understanding their limitations and societal impacts. Current research directions include:

  • Reasoning & Evaluation: Creating challenging benchmarks that probe the limits of AI reasoning and reveal failure modes

  • AI Safety: Developing methods to identify and mitigate harmful behaviors in language models, with a focus on culturally aware and context-dependent definitions of harm

  • Interpretability: Opening the black box of AI systems to understand their decision-making and build trust

  • Applied AI for Good: Leveraging AI for inclusive education, diverse storytelling, and equitable technology

Contact