February 9, 2026

Gleb Lisikh: Why Large Language Models Can’t Think – and Never Will

In its earlier days, artificial intelligence was often mocked for giving users false or even absurd answers. But AI was feared as well, not least for its potential to do more harm than good. As it has advanced, AI has become seemingly more reliable. But can it ever produce unbiased truth? Technology analyst Gleb Lisikh opens up the black box of the large language models underlying today’s proliferating AI apps to reveal the misunderstanding – or hoax – at the core of that question. LLMs cannot think, Lisikh explains in Part I of this two-part series – nor can they seek the truth – because they just aren’t designed to.
