Study Reveals AI Gives Better Results When You Are ‘Rude’
GameRevolution · Tue, January 20, 2026 at 2:49 PM UTC
Researchers at Pennsylvania State University reported that impolite prompts consistently outperformed polite ones in tests of AI tools, suggesting that prompt tone can affect answer accuracy.

The team evaluated AI chatbots and large language models, including ChatGPT, by rewriting 50 school-subject questions in five tones ranging from very polite to very rude, producing 250 prompts in total. Very rude prompts yielded 84.8% accuracy, compared with 82.2% for neutral prompts and 80.8% for very polite ones. The researchers noted that firm or challenging language and indirect speech were associated with stronger performance in these trials.

Authors Om Dobariya and Akhil Kumar cautioned that more investigation is needed and that newer models may respond differently to tonal variation.
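For readers curious how such a comparison might be structured, the sketch below mirrors the setup described in the article: each of 50 questions is rewritten in five tones, sent to a model, and scored per tone. It is a minimal illustration, not the authors' code; the `rewrite_in_tone` and `ask_model` functions, the tone wordings, and the sample question are all hypothetical placeholders.

```python
# Hypothetical sketch of a tone-vs-accuracy evaluation like the one described above.
# Nothing here is the researchers' actual code or data.
from collections import defaultdict

TONES = ["very polite", "polite", "neutral", "rude", "very rude"]

# Each base question is rewritten once per tone (50 x 5 = 250 prompts in the study).
questions = [
    {"text": "What is the capital of France?", "answer": "Paris"},
    # ... 49 more school-subject questions in the real study
]

def rewrite_in_tone(question_text: str, tone: str) -> str:
    """Placeholder: the study rewrote each question in five tones by hand."""
    prefixes = {
        "very polite": "Would you kindly tell me: ",
        "polite": "Please answer: ",
        "neutral": "",
        "rude": "Just answer this: ",
        "very rude": "Figure this out, if you even can: ",
    }
    return prefixes[tone] + question_text

def ask_model(prompt: str) -> str:
    """Placeholder for a real chatbot/LLM API call."""
    return "Paris"  # stubbed so the sketch runs without network access

correct = defaultdict(int)
total = defaultdict(int)
for q in questions:
    for tone in TONES:
        reply = ask_model(rewrite_in_tone(q["text"], tone))
        total[tone] += 1
        correct[tone] += int(q["answer"].lower() in reply.lower())

for tone in TONES:
    print(f"{tone}: {100 * correct[tone] / total[tone]:.1f}% accuracy")
```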