Sam Lewis recently authored a Daily Business Review article discussing the limitations of large language models (LLMs) such as GPT-4.5. He notes that while these models can convincingly simulate human conversation and even pass the Turing Test, they still lack true abstract thought and often generate false or misleading information, known as hallucinations. These hallucinations pose serious risks, as illustrated by a legal case in which AI-generated content led to fabricated citations being filed in court, underscoring the need for human verification and caution when relying on AI for critical tasks.
Sam emphasized the current risks of LLMs, stating: “Personally, I believe that AIs are still in their infancy and that the future of AI is seemingly limitless. However, the hacker in me recognizes that these are systems so complex that their developers are unable to fully understand, much less fix, problems like hallucinations. Until such problems can be resolved, these systems will not be the trusted assistants that we would like them to be.”