You can persuade AI models to accept falsehoods as truth, study shows
Large language models can uphold falsehoods they or human users state, despite being presented with evidence to the contrary.
2 LEFT · 0 CENTER · 0 RIGHT · 9h ago
TechCrunch and The Conversation US frame the same story with noticeably different headline language.
IN 30 SECONDS
MAIN REPORTED CLAIM
WHAT CHANGED
The Conversation US leads with "You can persuade AI models to accept falsehoods as truth, study shows" while TechCrunch leads with "Osaurus brings both local and cloud AI models to your Mac".
The source map is still incomplete. The wording gap is useful, but the story needs more coverage from the missing buckets before it should drive a strong conclusion.
Large language models can uphold falsehoods they or human users state, despite being presented with evidence to the contrary.
How this could be misread: A high Wording Gap does not prove one side is wrong. It means the headline language creates a different first impression.
SOURCE MAP CHANGES
May 15, 12:19 PM: TechCrunch joined the source map.
May 15, 12:42 PM: The Conversation US joined the source map.
Now: Wording Gap is 66/99 and story health is developing · 2 sources · 1 bucket.
WHAT EACH SIDE EMPHASIZES
The Conversation US · Center-left · News report
Optics keeps watching for pickup.
VISIBLE SOURCES
Large language models can uphold falsehoods they or human users state, despite being presented with evidence to the contrary.
Osaurus combines local and cloud AI models in a Mac app that keeps users’ memory, files, and tools on their own hardware.