
AI Thinks Like Us – Flaws and All: New Study Finds ChatGPT Mirrors Human Decision Biases in Half the Tests
BALTIMORE, MD, April 1, 2025 – Can we really trust AI to make better decisions than humans? A new study says … not always. Researchers have discovered that OpenAI’s ChatGPT, one of the most advanced and popular AI models, makes the same kinds of decision-making mistakes as humans in some situations – showing biases like overconfidence or the hot-hand (gambler’s) fallacy – yet acting inhuman in others (e.g., not suffering from base-rate neglect or the sunk-cost fallacy).