I am recruiting fully funded PhD students for Fall 2026 (applications due December 2025), as well as MS students interested in thesis research. I’m building a new interdisciplinary research group at the intersection of Natural Language Processing (NLP), AI Safety, Human-Centered AI, and AI4Health.
Research Focus
We develop methods to make AI systems more trustworthy, reliable, and equitable, while understanding their limitations and societal impacts. Current research directions include:
Reasoning & Evaluation: Creating challenging benchmarks that push AI reasoning capabilities and reveal failure modes
AI Safety & Fairness: Developing methods to identify and mitigate harmful behaviors in language models, with a focus on culturally aware and context-dependent definitions of harm
Reliability & Robustness: Stress-testing AI systems across diverse conditions and ensuring consistent performance in real-world settings
Interpretability: Opening the black box of AI systems to understand their decision-making and build human trust
Applied AI for Good: Leveraging AI for inclusive education, diverse storytelling, equitable healthcare, and socially beneficial applications
Current Team
Director
Ali Emami, Assistant Professor of Computer Science
Current Graduate Students
William Hao (Fall 2025 - Present) (Co-supervised with Professor Joyce Ho)
What I Look For
Strong candidates typically bring experience in these key areas:
🧠 Technical Foundation
Strong programming skills (Python preferred)
Experience with machine learning/deep learning frameworks
Comfort with statistical analysis and experimental design
📝 Research Skills
Clear technical writing and communication (this is a dying art!)
Critical thinking about AI systems and their limitations
Ability to read, implement, and build upon research papers
💡 Intellectual Curiosity
Interest in interdisciplinary problems at the intersection of AI and society
Enthusiasm for both building systems and understanding their impacts
Openness to collaborating with domain experts (linguistics, psychology, ethics)
Bonus points for: Prior NLP/LLM experience, main conference publications (e.g., ACL, EMNLP), user study experience, or a demonstrated interest in AI ethics/fairness.
How to Apply & Connect
Reaching Out
Email me at aemami[at]emory.edu with the subject line [Prospective PhD/MS Student - Fall 2026] (please use this exact subject line, otherwise I won’t consider the email!), including:
Your CV/resume with links to any publications or projects
2-3 sentences on why our research vision excites you
Any relevant experience or coursework in NLP, ML, or AI ethics
(Optional) A paper or project you’re proud of
Pro tip: I really appreciate an email that is authentic, succinct, and clearly written entirely by you! Having read dozens of these emails, I insist on this!
PhD funding: Admitted students receive full support (stipend + tuition)
Timeline: Applications due October 2025 for Fall 2026 start
I cannot pre-admit students, but I strongly advocate for candidates who are an excellent fit
For Current Emory Students
Undergraduate and MS students at Emory interested in research opportunities should email me with your interests, relevant coursework, and available time commitment (minimum 10 hrs/week).