Bonus Resource · 06

How to Read AI News Without Being Misled

A practical reader's guide: what "breakthrough" usually means, how to spot hype, and which sources are actually worth your attention.

AI coverage is extraordinary — in both senses of the word. Some of the most important technology stories of our era are being reported in real time. But the same stories are frequently misframed, overstated, or quietly funded by the companies being covered. Here's how to read more carefully.

Eight Red Flags in AI Coverage

These aren't signs that a story is wrong — they're signals to slow down and read more carefully.

A Vocabulary of Vagueness

Certain words in AI coverage carry more implication than information. Here's what they often actually mean.

"Understands"
Produces outputs consistent with having processed X. Doesn't imply comprehension in the human sense.
"Thinks" / "Reasons"
Applies multi-step processing to produce an output. The philosophical question of whether this constitutes thinking is not resolved.
"Human-level"
Matches average human performance on a specific task under specific conditions. Doesn't generalize to other tasks or conditions.
"Hallucination"
The model produces a plausible-sounding but false output. Not analogous to human hallucination — it's a failure mode of the prediction mechanism.
"AGI"
Artificial General Intelligence — no agreed definition. Used by different people to mean very different things. When you see it, check what the speaker means specifically.
"Sentient" / "Conscious"
No scientific consensus on what these mean for AI. Claims in either direction (definitely conscious / definitely not) exceed the current evidence.
"Safe AI"
Varies enormously by context. Can mean: won't produce harmful content, aligned with human values, won't cause societal harm, won't pose existential risk. These are different things.
"Emergent"
A capability that appeared at scale without being explicitly trained for. Genuine phenomenon, but sometimes used loosely to describe any surprising capability.

A Rough Hierarchy of Sources

Not all AI coverage is equal. Here's a rough tiering: not a definitive ranking, but a starting framework for evaluating what you're reading.

1
Primary research papers (with replication)

The gold standard. Read the abstract and conclusion even if you skip the methods. Check whether it's been independently replicated. Preprints (not yet peer-reviewed) are one step below published papers.

1
Technical journalism with named expert sources

MIT Technology Review, Wired (technical pieces), 80,000 Hours, LessWrong (for safety debates), The Alignment Forum. Writers who name their sources and engage with technical detail.

2
Quality general journalism

The New York Times, The Guardian, FT — when they're reporting on AI with named researchers and independent verification. Good for context and implications; occasionally shaky on technical specifics.

2
Lab blog posts

Useful primary sources for what labs say about themselves. Read with appropriate skepticism: they reflect communications strategy, not independent assessment. Often technically detailed and worth reading, but not neutral.

!
Social media, newsletters, YouTube "analysis"

The range is huge: some of it is excellent, much of it is poor. The incentive structure favors engagement over accuracy. Apply all the red flags listed above, and never let social media be your only source on a consequential AI claim.

!
Press releases rewritten as news

Common, especially for smaller publications. If the story could have been written without any reporting — just by reading the company's announcement — it probably was.

A Quick Checklist for Any AI Story