The fifth and final episode in Amanda Harvey-Sánchez and Julia DaSilva’s series on the University of Toronto fossil fuel divestment campaign, successively organized by Toronto350.org, UofT 350.org, and then the Leap Manifesto and Divestment & Beyond groups.
The episode brings back guests from each prior era and includes some interesting reflections: what organizers from different eras felt they learned, the value of protest as an empowering space and a venue for inter-activist networking, the origins of the Leap Manifesto group in the aftermath of the 2016 rejection, and how they explain President Gertler’s decision to reverse himself and divest five years after rejecting the Toronto350.org campus fossil fuel divestment campaign.
Threads on previous episodes:
- Intro: Podcast series on fossil fuel divestment at the University of Toronto
- Episode 1: Podcast episode about the early U of T fossil fuel divestment campaign
- Episode 2: New podcast on the U of T divestment campaign from 2014 to 2016
- Episode 3: Available on Spotify
- Episode 4: Divestment generation podcast 4
We are using a lot of problematic and imprecise language when it comes to AI that writes, which worsens our deep psychological tendency to assume that anything showing glimmers of human-like traits must have a complex internal life and human-like thoughts, intentions, and behaviours.
We talk about ChatGPT and other large language models (LLMs) “being right” and “making mistakes” and “hallucinating things”.
The point I would raise is this: if you have a system that only sometimes gives correct answers, is it ever actually correct? Or does it merely happen to emit correct information in some cases, even though it has no ability to tell truth from falsehood, and even though where it happens to be correct is essentially random?
If you use a random number generator to pick a number from 1–10, and then ask that program over and over “What is 2+2?” you will eventually get a “4”. Is the 4 correct?
What if you have a program that always outputs “4” no matter what you ask it? Is it “correct” when you ask “What is 2+2?” and incorrect when you ask “What is 1+2?”?
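To make the contrast concrete, here is a toy sketch of both hypothetical programs in Python (the function names and setup are my own illustration, not anything from a real AI system):

```python
import random

def random_answerer(question: str) -> str:
    """Ignores the question and returns a random number from 1 to 10."""
    return str(random.randint(1, 10))

def constant_answerer(question: str) -> str:
    """Ignores the question and always returns "4"."""
    return "4"

# The constant program matches the right answer to "What is 2+2?"
# every time and misses "What is 1+2?" every time, for the very
# same reason: it has no notion of arithmetic at all.
print(constant_answerer("What is 2+2?"))  # prints 4
print(constant_answerer("What is 1+2?"))  # prints 4

# Ask the random program often enough and a "4" will eventually turn up.
answers = [random_answerer("What is 2+2?") for _ in range(1000)]
print("4" in answers)
```

Neither program is "right" or "wrong" in any meaningful sense, because neither can distinguish one question, or one answer, from another.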
Perhaps one way to lessen our collective confusion is to stick to AI-specific language. AI doesn’t write, get things correct, or make mistakes. It is a stochastic parrot with a huge reservoir of mostly garbage information from the internet, and it mindlessly uses known statistical associations between different language fragments to predict what ought to come next when parroting out some new text at random.
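As a caricature of that mechanism, here is a minimal "predict what comes next from counted associations" sketch in Python, using a bigram counter over a made-up corpus. Real language models are enormously more elaborate, but the truth-blind principle is the same:

```python
from collections import Counter, defaultdict

# Tiny made-up training text: the only "knowledge" the predictor
# will have is which word followed which, and how often.
corpus = "the cat sat on the mat and the cat ran".split()

follows: defaultdict = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after this word."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

# "the" was followed by "cat" twice and "mat" once, so "cat" wins.
print(predict_next("the"))  # prints cat
```

The predictor emits whatever followed most often in its training text; frequency, not truth or meaning, drives the output.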
If you don’t like the idea that what you get from LLMs will be a mishmash of the internet’s collective wisdom and delusion, presided over by an utterly unintelligent expert in word statistics, then you ought to be cautious about letting LLMs do your thinking for you, either as a writer or a reader.
Alie Ward’s marvellous science communication podcast has a new episode on invisibility: Invisible Photology (INVISIBILITY CLOAKS) with Dr. Greg Gbur.
I was just about bowled over during my exercise walk on the Beltline trail, when Alie and Dr. Gbur discussed my Hyperface Halloween costume, designed to confound facial recognition systems.
Tomorrow I will read Dr. Gbur’s latest book: Invisibility: The History and Science of How Not To Be Seen.
One of the reasons I have always found the internet so exciting is that it facilitates a kind of interaction I find a bit magical: somebody can help someone else using information or material which has been publicly shared. That way, the person who needs help can make use of a photo or follow a set of instructions without any need to correspond with the person helping them, or for that helper to even still be alive.
My photography is released under a Creative Commons license to facilitate such usage. My usage guide explains the generous set of things you can do for free, including making prints for personal use or for inclusion in anything you aren’t selling.
I disagree with the fundamental notion inherent in the supposed “right to be forgotten”: the presumption that the main and most important purpose of documenting world events is to depict your life history in an autobiographical sense. My conviction is that history belongs not to the subjects whom it is about, but to the future generations who will need it to understand their own situations and solve their own problems. When we censor the record out of vanity, or even out of compassion for errors long atoned for, we may be denying something important to the future. We act as benefactors of future generations by preserving what ordered and comprehensible information may eventually survive from our era, and we should distort it as little as possible. The world is so complex that events are impossible to understand while they are happening. The accounts and records we preserve are the clay which, through careful work, historians may later turn into bricks. We should not pre-judge what they should find important or what they ought to hear.
The trace we each leave on the broader world during our brief lives matters to other people, and the importance of their being well-informed to confront the unforeseeable but considerable challenges ahead of them outweighs our own interest in being remembered in as positive a light as possible, even when doing so would require omission or deception.
Long ago, I turned off autocorrect on my phone. Sure, it would sometimes turn a typo into a properly typed word, saving me a second or two — but whenever it turned something correct or outside the dictionary into something wrong, it annoyed me enough to undo the value of hundreds of correct corrections.
Now the world is abuzz with ChatGPT and its ilk of so-called artificial intelligence that writes. Even people I know are excited about using it as a labour-saving device or for tedious tasks.
I will not.
While I have worked in a variety of job positions, the common characteristic has been the centrality of writing. I am a writer first and foremost, though I have never held that formal job title, and it is important to me and to my readers that the sentences, paragraphs, and documents I produce come from my own mind and draw on my ability to express a thought comprehensibly, as well as to imagine what impression it will make on the reader and adapt my language accordingly.
To call ChatGPT-style AIs stupid and likely to be wrong gives them far too much credit. You need some intelligence in order to possess a low level of it, such as stupidity. You need at least the slightest ability to distinguish true claims from false ones in order for your output to be meaningfully accurate or inaccurate. A highly sophisticated parrot which regurgitates fragments of what it found online can clearly be very convincing at imitating thinking, but it is a deceptive imitation and not the real thing. A ChatGPT-style AI will blithely repeat common falsehoods because all it is doing is telling you what sort of writing is probable in the world. At best, it gives you the wisdom of the crowd, and the whole basis of academic specialization, peer review, and editing from publishing houses is that serious texts should meet a much higher standard.
My pledge to people who read my writing — whether in academic papers, job applications, love letters, blog posts, books, text messages, or sky-writing — is that they can be confident it came from my own brain and was expressed using my own words and reasoning. I will never throw a bullet point into a text generator to expand it into a sentence or paragraph, or use an AI to automatically slim down or summarize what I have written.
My writing is hand-crafted and brain-crafted. In a world where there will be more and more suspicion that anything a person wrote was actually co-written by a parrot with godlike memory but zero understanding, I think that kind of guarantee will become increasingly valuable. Indeed, part of me feels like we ought to make an uncontaminated archive of what has been written up until about now, so we at least have a time capsule from before laziness drove a lot of us to outsource one of the most essential and important human activities (writing) to a tech firm’s distillation of the speculative and faulty babble online, or even some newer language model trained only with more credible texts.
It is also worth remembering that as ease-of-use leads language models to produce a torrent of new questionable content, the training sets for new models that use the internet as a data source will increasingly be contaminated by nonsense written earlier by other AIs.
With the world discussing AI that writes, a recent post from Bret Devereaux at A Collection of Unmitigated Pedantry offers a useful corrective, both about how present-day large language models like GPT-3 and ChatGPT are far less intelligent and capable than naive users assume, and about how they pose less of a challenge to writing than feared.
I would say the key point to take away is that these systems are just a blender that mixes and matches words based on probability. They cannot understand even the simplest thing, so their output will never be authoritative or credible without manual human checking. As mix-and-matchers they can also never be original — only capable of emulating what is common in what they have already seen.
Back in November, Amanda Harvey-Sánchez and Julia DaSilva released a podcast episode for Climate Justice Toronto about the first generation of fossil fuel divestment organizers at U of T. That episode covered from the inception of the campaign in 2012 until the People’s Climate March (PCM) in New York City in September 2014.
They have now released the second episode, which features Katie Krelove, Ben Donato-Woodger, Keara Lightning, and Ariel Martz-Oberlander, and which discusses the period from the PCM until President Meric Gertler’s rejection of divestment in March 2016.