Limits of ChatGPT

With the world discussing AI that writes, a recent post from Bret Devereaux at A Collection of Unmitigated Pedantry offers a useful corrective: present-day large language models like GPT-3 and ChatGPT are far less intelligent and capable than naive users assume, and they pose less of a threat to writing than many fear.

I would say the key point to take away is that these systems are just blenders that mix and match words based on probability. They do not understand even the simplest thing, so their output can never be authoritative or credible without human checking. And as mix-and-matchers they can never be original: they can only emulate what is common in what they have already seen.
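To make the "blender" idea concrete, here is a minimal sketch of statistical text generation: a toy bigram (Markov-chain) generator in Python. This is not how GPT-3 works internally (it uses a large neural network, not a lookup table), but the generation loop is analogous in kind: pick each next word by sampling from probabilities learned from training text. The corpus and function names here are invented for illustration.

```python
import random
from collections import defaultdict

# Toy corpus standing in for "what the model has already seen".
corpus = (
    "the model predicts the next word "
    "the model mixes words it has seen "
    "the output echoes the training text"
).split()

# Count which words follow which (a bigram table): duplicates in each
# list preserve observed frequency, so sampling is probability-weighted.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    """Sample a continuation word by word from observed frequencies."""
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:  # a word never seen mid-sentence: the model is stuck
            break
        word = random.choice(options)  # frequency-weighted next-word pick
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Every word the sketch emits already appears in its tiny corpus, and every transition is one it has observed before; that is the sense in which a statistical mixer cannot be original, however large the corpus gets.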

Author: Milan

In the spring of 2005, I graduated from the University of British Columbia with a degree in International Relations, focusing on environmental politics. Between 2005 and 2007 I completed an M.Phil in IR at Wadham College, Oxford. I worked for five years for the Canadian federal government, where I completed the Accelerated Economist Training Program, and then earned a PhD in Political Science at the University of Toronto in 2023.

2 thoughts on “Limits of ChatGPT”

  1. “This really drove home for me the most flawed and misunderstood aspect of these tools: They do not “understand” anything. They are simply parroting things back to us based on statistical predictions derived from massive troves of internet data. I wasn’t reading my own eulogy—I was reading a machine-mediated abstraction of what a “eulogy” is, combining my inputs with digital echoes and reflections of what came before. This decoherence is what I imagine celebrated animator Hayao Miyazaki was getting at when he famously described an AI-generated animation as “an insult to life itself.””
