In its earlier days, artificial intelligence was often mocked for giving users false or even absurd answers. But AI was feared as well, not least for its potential to do more harm than good. As it has advanced, AI has come to seem more reliable. But can it ever produce unbiased truth? Technology analyst Gleb Lisikh opens up the black box of the large language models underlying today's proliferating AI apps to reveal the misunderstanding – or hoax – at the core of that question. LLMs cannot think, Lisikh explains in Part I of this two-part series – nor can they seek the truth – because they simply aren't designed to.