My hand-crafted text guarantee

Long ago, I turned off autocorrect on my phone. Sure, it would sometimes turn a typo into a properly typed word, saving me a second or two — but whenever it turned something correct or outside the dictionary into something wrong, it would annoy me enough to undo the value of hundreds of correct corrections.

Now the world is abuzz with ChatGPT and its ilk of so-called artificial intelligence that writes. Even people I know are excited about using it as a labour-saving device or for tedious tasks.

I will not.

While I have worked in a variety of jobs, the common characteristic has been the centrality of writing. I am a writer first and foremost, though I have never held that formal job title, and it is important to me and to my readers that the sentences, paragraphs, and documents I produce come from my own mind, drawing on my ability to express a thought in a comprehensible way, to imagine what impression it will make on the reader, and to adapt my language accordingly.

To call ChatGPT-style AIs stupid and likely to be wrong gives them far too much credit. You need some intelligence before you can have a low level of it, which is what stupidity is. You need at least the slightest ability to distinguish true claims from false ones before what you produce can meaningfully be called accurate or inaccurate. A highly sophisticated parrot which regurgitates fragments of what it found online can be very convincing at imitating thinking, but it is a deceptive imitation and not the real thing. A ChatGPT-style AI will blithely repeat common falsehoods because all it is doing is telling you what sort of writing is probable in the world. At best, it gives you the wisdom of the crowd, while the whole basis of academic specialization, peer review, and editing by publishing houses is that serious texts should meet a much higher standard.

My pledge is that people who read my writing — whether in academic papers, job applications, love letters, blog posts, books, text messages, or sky-writing — can be confident that it came from my own brain and was expressed using my own words and reasoning. I will never throw a bullet point into a text generator to expand it into a sentence or paragraph, or use an AI to automatically slim down or summarize what I have written.

My writing is hand-crafted and brain-crafted. In a world where there will be more and more suspicion that anything a person wrote was actually co-written by a parrot with godlike memory but zero understanding, I think that kind of guarantee will become increasingly valuable. Indeed, part of me feels like we ought to make an uncontaminated archive of what has been written up until about now, so we at least have a time capsule from before laziness drove a lot of us to outsource one of the most essential and important human activities (writing) to a tech firm’s distillation of the speculative and faulty babble online, or even some newer language model trained only with more credible texts.

It is also worth remembering that as ease-of-use leads language models to produce a torrent of new questionable content, the training sets for new models that use the internet as a data source will increasingly be contaminated by nonsense written earlier by other AIs.

TO360 wayfinding consultation

Though I had noticed some of their signage (and, without knowing it, had been using their printed Toronto cycling map as a key planning tool for our urban hikes), I did not actually know about the city's TO360 wayfinding project until I saw a post about it a few days ago.

They are currently working on the Long Branch area west of Humber Bay, and held a consultation yesterday at the local library.

The consultation was unlike anything I have taken part in before, and really cool. Some knowledgeable local residents turned up, and the TO360 people had printed maps the size of large dinner tables where people could correct errors, note things that ought to be included, and suggest places where a custom graphic for something like a building or monument should replace a generic labelled marker. It's awesome to see a group with so much capability and official support working to map the city from a non-driving perspective.

As shown on p. 11 of the slides, the group is working through the whole GTA, funded by the city. It would be neat to explore new areas as they focus on them and to contribute to forthcoming consultations. The results won't just be used for map posts on the street and map posters in subway stations, but also for future versions of the cycling map.

Limits of ChatGPT

With the world discussing AI that writes, a recent post from Bret Devereaux at A Collection of Unmitigated Pedantry offers a useful corrective, both about how present-day large language models like GPT-3 and ChatGPT are far less intelligent and capable than naive users assume, and about how they pose less of a challenge to writing than feared.

I would say the key point to take away is that these systems are just a blender that mixes and matches words based on probability. They cannot understand even the simplest thing, and so their output will never be authoritative or credible without manual human checking. As mix-and-matchers, they can also never be original; they are only capable of emulating what is common in what they have already seen.
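To make that "probability blender" idea concrete, here is a toy sketch of my own (a simple bigram word model, nothing like the scale or architecture of GPT-3, and not something drawn from Devereaux's post): it picks each next word purely from how often words followed one another in a tiny sample text, so it can only ever recombine what it has already seen.

    # Toy illustration of "most probable next word" generation.
    # This is a simple bigram model, not how GPT works internally, but it
    # shows the same principle: output is recombined from patterns that
    # already appear in the training text.
    import random
    from collections import defaultdict

    training_text = (
        "the cat sat on the mat the dog sat on the rug the cat chased the dog"
    )

    # Record which words follow which in the training text.
    followers = defaultdict(list)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)

    def generate(start, length=8):
        """Generate text by repeatedly sampling a likely next word."""
        output = [start]
        for _ in range(length):
            options = followers.get(output[-1])
            if not options:
                break  # no observed continuation for this word
            # random.choice over the list samples in proportion to frequency
            output.append(random.choice(options))
        return " ".join(output)

    print(generate("the"))  # e.g. "the cat sat on the rug the dog sat"

A real large language model replaces the frequency table with an enormous neural network trained on a vast corpus, but the generating principle is the same: produce whatever continuation is statistically likely, which is why the output can sound fluent while being unoriginal or simply wrong.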