Remember, the computers don’t think

Posted by Matt Birchler
— 1 min read

Kyle Orland for Ars Technica: Apple study exposes deep cracks in LLMs’ “reasoning” capabilities

The results of this new GSM-Symbolic paper aren't completely new in the world of AI research. Other recent papers have similarly suggested that LLMs don't actually perform formal reasoning and instead mimic it with probabilistic pattern-matching of the closest similar data seen in their vast training sets.

LLMs are amazing technology, genuinely useful for some things, but I think people get lost when they start to talk about them “thinking” for themselves. Maybe there is a form of computer in the future that can “think” the way we do, but I simply don’t see evidence that LLMs as they exist today are doing that.