AI Adopters Club

Your AI gives everyone the same answer. Here's how to get the good ones it's hiding.

One prompt change makes your proposals and memos stand out from competitors.

Kamil Banc
Dec 01, 2025
∙ Paid

Hey Adopter,

A team of researchers from Stanford and Northeastern found that a single prompting change recovers most of the creative diversity that safety training stripped from your AI assistant. No retraining. No code. Copy one prompt template, and your brainstorming sessions get five times more raw material.

While others fear AI, paid subscribers climb higher & earn more, turning chaos into career leverage.


What the research proved

Researchers from Stanford and Northeastern investigated why ChatGPT, Claude, and Gemini keep producing the same predictable outputs. The answer: during AI training, human raters score familiar text higher than creative text. Cognitive psychologists call this the mere-exposure effect.

When AI companies use human feedback to make models safer and more helpful, they accidentally train the model to suppress unusual ideas. The AI collapses toward “stereotypical” responses because those are what raters preferred.

The research team tested this on real preference datasets. Even holding response correctness constant, raters still favored the more predictable answers. The bias appeared consistently across multiple models and datasets.

The good news: the creativity wasn’t deleted. It was suppressed. And a prompt change brings it back.
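To make the idea concrete before we get to the template itself, here is a hedged sketch of the general approach, sometimes called verbalized sampling: instead of asking for one answer, you ask the model to verbalize several candidate answers in a single turn. This is not the research team's exact prompt (that follows below for paid subscribers), and the task text, prompt wording, and model name in this snippet are illustrative assumptions only.

```python
# Illustrative sketch of a diversity-oriented prompt, not the paper's template.
# Assumes the openai Python SDK (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

TASK = "Suggest a name for a newsletter about practical AI adoption."  # example task

DIVERSITY_PROMPT = f"""{TASK}

Generate 5 distinct responses, drawn from the full range of plausible answers
rather than only the most typical one. For each response, add a rough estimate
of how likely a typical assistant would be to give it."""

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model works; this is just an example
    messages=[{"role": "user", "content": DIVERSITY_PROMPT}],
)

print(response.choices[0].message.content)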


The research team’s prompt

This post is for paid subscribers
