
We Politely Insist: Your LLM Must Learn the Persian Art of Taarof

Large language models (LLMs) struggle to navigate culturally specific communication norms, limiting their effectiveness in global contexts. We focus on Persian taarof, a social norm in Iranian interactions, which is a sophisticated system of ritual …

Personality Matters: User Traits Predict LLM Preferences in Multi-Turn Collaborative Tasks

As Large Language Models (LLMs) increasingly integrate into everyday workflows, where users shape outcomes through multi-turn collaboration, a critical question emerges: do users with different personality traits systematically prefer certain LLMs …

Beyond Content: How Grammatical Gender Shapes Visual Representation in Text-to-Image Models

Research on bias in Text-to-Image (T2I) models has primarily focused on demographic representation and stereotypical attributes, overlooking a fundamental question: how does grammatical gender influence visual representation across languages? We …

The World According to LLMs: How Geographic Origin Influences LLMs' Entity Deduction Capabilities

Large Language Models (LLMs) have been extensively tuned to mitigate explicit biases, yet they often exhibit subtle implicit biases rooted in their pre-training data. Rather than directly probing LLMs with human-crafted questions that may trigger …

Translate With Care: Addressing Gender Bias, Neutrality, and Reasoning in Large Language Model Translations

Addressing gender bias and maintaining logical coherence in machine translation remain challenging, particularly when translating between natural-gender languages, such as English, and genderless languages, such as Persian, Indonesian, and Finnish. We …

Fine-Tuned LLMs are "Time Capsules" for Tracking Societal Bias Through Books

Books, while often rich in cultural insights, can also mirror societal biases of their eras: biases that Large Language Models (LLMs) may learn and perpetuate during training. We introduce a novel method to trace and quantify these biases using …

Can We Afford The Perfect Prompt? Balancing Cost and Accuracy with the Economical Prompting Index

As prompt engineering research rapidly evolves, evaluations beyond accuracy are crucial for developing cost-effective techniques. We present the Economical Prompting Index (EPI), a novel metric that combines accuracy scores with token consumption, …
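
To make the cost-accuracy trade-off concrete, here is a minimal Python sketch of such an index. The abstract only states that the EPI combines accuracy scores with token consumption; the discount function and the `cost_weight` parameter below are illustrative assumptions, not the paper's actual definition.

```python
# Hypothetical sketch of a cost-aware prompting metric in the spirit of the EPI.
# The truncated abstract names only the inputs (accuracy and token consumption);
# the penalty term below is an illustrative assumption, not the paper's formula.

def economical_prompting_index(accuracy: float, tokens_used: int,
                               cost_weight: float = 0.001) -> float:
    """Trade accuracy off against token consumption.

    accuracy    -- task accuracy of the prompting technique, in [0, 1]
    tokens_used -- total prompt + completion tokens consumed per query
    cost_weight -- assumed penalty per token (hypothetical parameter)
    """
    return accuracy / (1.0 + cost_weight * tokens_used)

# Example: a prompt that gains 3 accuracy points but triples token usage
# can score lower than a cheaper zero-shot prompt under this sketch.
zero_shot = economical_prompting_index(accuracy=0.72, tokens_used=150)
chain_of_thought = economical_prompting_index(accuracy=0.75, tokens_used=450)
print(f"zero-shot EPI~{zero_shot:.3f}, CoT EPI~{chain_of_thought:.3f}")
```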

NYT-Connections: A Deceptively Simple Text Classification Task that Stumps System-1 Thinkers

Large Language Models (LLMs) have shown impressive performance on various benchmarks, yet their ability to engage in deliberate reasoning remains questionable. We present NYT-Connections, a collection of 358 simple word classification puzzles derived …

MirrorStories: Reflecting Diversity through Personalized Narrative Generation with Large Language Models

This study explores the effectiveness of Large Language Models (LLMs) in creating personalized "mirror stories" that reflect and resonate with individual readers' identities, addressing the significant lack of diversity in literature. We present …

STOP! Benchmarking Large Language Models with Sensitivity Testing on Offensive Progressions

Mitigating explicit and implicit biases in Large Language Models (LLMs) has become a critical focus in the field of natural language processing. However, many current methodologies evaluate scenarios in isolation, without considering the broader …