(From a WSJ article on Yann LeCun)
...The generative-AI boom has been powered by large language models and similar systems that train on oceans of data to mimic human expression. As each generation of models has become much more powerful, some experts have concluded that simply pouring more chips and data into developing future AIs will make them ever more capable, ultimately matching or exceeding human intelligence. This is the logic behind much of the massive investment in building ever-greater pools of specialized chips to train AIs.
LeCun thinks that the problem with today's AI systems is how they are designed, not their scale. No matter how many GPUs tech giants cram into data centers around the world, he says, today's AIs aren't going to get us to artificial general intelligence.
His bet is that research on AIs that work in a fundamentally different way will set us on a path to human-level intelligence. These hypothetical future AIs could take many forms, but work being done at FAIR to digest video from the real world is among the projects that currently excite LeCun. The idea is to create models that learn in a way that's analogous to how a baby animal does, by building a world model from the visual information it takes in.
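To make that idea concrete, here is a minimal sketch of one way such learning could work: a small network watches pairs of consecutive video frames and learns to predict the representation of the next frame from the current one. Everything here is hypothetical and illustrative (the module names, sizes, and the random tensors standing in for video); it is loosely inspired by joint-embedding predictive approaches, not FAIR's actual architecture.

```python
# Illustrative sketch only: learn a crude "world model" from video by
# predicting the next frame's embedding from the current frame's embedding.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a frame (3, 64, 64) to a latent embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )
    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    """Predicts the next frame's embedding from the current one."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, z):
        return self.net(z)

encoder, predictor = Encoder(), Predictor()
opt = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)

# Random tensors standing in for real video: consecutive frame pairs.
frames_t  = torch.randn(16, 3, 64, 64)   # frames at time t
frames_t1 = torch.randn(16, 3, 64, 64)   # frames at time t+1

for step in range(100):
    z_t = encoder(frames_t)
    with torch.no_grad():                 # target embedding: no gradient flows
        z_t1 = encoder(frames_t1)         # through the prediction target
    loss = nn.functional.mse_loss(predictor(z_t), z_t1)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In practice, setups like this need extra machinery, such as a separate target encoder or regularization, to keep the learned representations from collapsing to a constant; the point of the sketch is only the shape of the learning signal: watch the world, predict what comes next.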
In LeCun's view, the large language models, or LLMs, used for ChatGPT and other bots might someday have only a small role in systems with common sense and humanlike abilities, built using an array of other techniques and algorithms.
Today’s models are really just predicting the next word in a text, he says. But they’re so good at this that they fool us. And because of their enormous memory capacity, they can seem to be reasoning, when in fact they’re merely regurgitating information they’ve already been trained on.
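As a toy illustration of what "predicting the next word" means mechanically, here is a hypothetical sketch: a bigram model that counts which word follows which in a tiny corpus, then generates text one word at a time. A real LLM conditions on vastly more context through a neural network, but the generation loop has the same shape: score candidate next words, sample one, append it, repeat.

```python
# Toy stand-in for an LLM: count word-to-word transitions, then generate
# by repeatedly sampling a likely next word. Vastly simpler than any real
# model, but the autoregressive loop is the same shape.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(start, n=8):
    words = [start]
    for _ in range(n):
        counts = next_counts[words[-1]]
        if not counts:          # no known continuation: stop
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat and the cat"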
Scaled up by many orders of magnitude, that loop is enough to produce strikingly fluent text, which is exactly the fluency LeCun argues we mistake for intelligence.
“We are used to the idea that people or entities that can express themselves, or manipulate language, are smart—but that’s not true,” says LeCun. “You can manipulate language and not be smart, and that’s basically what LLMs are demonstrating.”