Can a machine with no understanding be right, even when it happens to be correct?

We use a lot of problematic and imprecise language when it comes to AI that writes, and it feeds our deep psychological tendency to assume that anything showing glimmers of human-like traits must have a complex internal life, complete with human-like thoughts, intentions, and behaviours.

We talk about ChatGPT and other large language models (LLMs) “being right” and “making mistakes” and “hallucinating things”.

The point I would raise is this: if you have a system that sometimes gives correct answers, is it ever actually correct? Or does it just happen to produce correct information in some cases, even though it has no ability to tell truth from falsehood, and even though where it happens to be correct is essentially a matter of chance?

If you use a random number generator to pick a number from 1–10, and then ask that program over and over, “What is 2+2?”, you will eventually get a “4”. Is the 4 correct?

What if you have a program that always outputs “4” no matter what you ask it? Is it “correct” when you ask “What is 2+2?” and incorrect when you ask “What is 1+2?”?
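To make the two thought experiments concrete, here is a minimal Python sketch of both programs (the names are mine, invented for illustration). Neither program looks at the question at all, yet each will sometimes produce the “right” answer.

```python
import random

def random_answerer(question):
    """Ignores the question entirely and guesses a number from 1 to 10."""
    return random.randint(1, 10)

def constant_answerer(question):
    """Ignores the question entirely and always answers 4."""
    return 4

# Ask the random guesser "What is 2+2?" until it happens to say 4.
attempts = 1
while random_answerer("What is 2+2?") != 4:
    attempts += 1
print(f"Guessed 4 after {attempts} tries, without ever reading the question.")

# The constant program gives the same answer to every question,
# so it is "right" about 2+2 and "wrong" about 1+2 by pure coincidence.
print(constant_answerer("What is 2+2?"))  # 4
print(constant_answerer("What is 1+2?"))  # also 4
```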

Perhaps one way to lessen our collective confusion is to stick to AI-specific language. AI doesn’t write, get things correct, or make mistakes. It is a stochastic parrot with a huge reservoir of mostly garbage information from the internet, and it mindlessly uses known statistical associations between different language fragments to predict what ought to come next when parroting out some new text at random.
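As a rough illustration of what “statistical associations between language fragments” means, here is a toy sketch of my own (not how any real LLM is implemented): a word-level model that only counts which word tends to follow which in some training text, then parrots out new text by sampling from those counts. Real LLMs use neural networks over subword tokens and enormous training sets, but the core task of predicting a plausible next fragment is the same.

```python
import random
from collections import Counter, defaultdict

# Toy training text; a real model would be trained on vast amounts of web text.
training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog the dog chased the cat"
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def parrot(start_word, length=8):
    """Generate text by repeatedly sampling a statistically likely next word."""
    output = [start_word]
    for _ in range(length):
        options = follows.get(output[-1])
        if not options:
            break
        candidates, counts = zip(*options.items())
        output.append(random.choices(candidates, weights=counts)[0])
    return " ".join(output)

print(parrot("the"))  # e.g. "the cat sat on the rug the dog chased"
```

Nothing in this procedure checks whether the output is true; it only checks whether it is statistically typical of the text it was trained on.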

If you don’t like the idea that what you get from LLMs will be a mishmash of the internet’s collective wisdom and delusion, presided over by an utterly unintelligent word-statistics expert, then you ought to be cautious about letting LLMs do your thinking for you, whether as a writer or as a reader.

