A broad-ranging talk with James Burke

As part of promoting a new Connections series on Curiosity Stream launching on Nov. 9, I got the chance to interview historian of science and technology, science communicator, and series host James Burke:

The more interview-intensive part begins at 3:10.

Some documents from the history of fossil fuel divestment at the University of Toronto

Back in 2015, during the Toronto350.org / UofT350.org fossil fuel divestment campaign, I set up UofTFacultyDivest.com as a copy of what the Harvard campaign had up at harvardfacultydivest.com/.

The purposes of the site were to collect the attestations we needed for the formal university divestment policy, to have a repository of campaign-related documents, and to provide information about the campaign to anyone looking for it online.

The site was built with free WordPress software and plugins which have ceased to be compatible with modern web hosting, so I will re-list the important content here for the benefit of anyone seeking to learn about the campus fossil fuel divestment movement in the future:

Of course, U of T announced in 2021 that it would divest. Since then, Climate Justice U of T (the group that developed out of the Leap Manifesto group, which organized the second fossil fuel divestment campaign at U of T after Toronto350 / UofT350) has succeeded in pressuring the federated colleges of St. Michael’s, Trinity, and Victoria University to divest as well.

DeSilva and Harvey-Sànchez divestment podcast series complete

The fifth and final episode in Amanda Harvey-Sànchez and Julia DeSilva’s series on the University of Toronto fossil fuel divestment campaign (successively organized by Toronto350.org, UofT350.org, and then the Leap Manifesto and Divestment & Beyond groups) is now out.

The episode brings back guests from each prior era and includes some interesting reflections: what organizers from different eras felt they learned, the value of protest as an empowerment space and venue for inter-activist networking, and the origins of the Leap Manifesto group in the aftermath of the 2016 rejection. It also covers how the guests explain President Gertler’s decision to reverse himself and divest five years after he rejected the Toronto350.org campus fossil fuel divestment campaign.

Threads on previous episodes:

Can a machine with no understanding be right, even when it happens to be correct?

We are using a lot of problematic and imprecise language when it comes to AI that writes, which worsens our deep psychological tendency to imagine that anything showing glimmers of human-like traits must have a complex internal life and human-like thoughts, intentions, and behaviours.

We talk about ChatGPT and other large language models (LLMs) “being right” and “making mistakes” and “hallucinating things”.

The point I would raise is this: if you have a system that sometimes gives correct answers, is it ever actually correct? Or does it just happen to give correct information in some cases, even though it has no ability to tell truth from falsehood, and even though where it happens to be correct is essentially random?

If you use a random number generator to pick a number from 1–10, and then ask that program “What is 2+2?” over and over, you will eventually get a “4”. Is that 4 correct?

What if you have a program that always outputs “4” no matter what you ask it? Is it “correct” when you ask “What is 2+2?” and incorrect when you ask “What is 1+2?”?
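
Both thought experiments are trivial to run. Here is a minimal sketch in Python (the two oracle functions and their names are my own, invented purely for illustration):

```python
import random

def random_oracle(question: str) -> str:
    """Answer every question with a random integer from 1 to 10."""
    return str(random.randint(1, 10))

def constant_oracle(question: str) -> str:
    """Answer every question with "4", no matter what is asked."""
    return "4"

# Ask the random oracle "What is 2+2?" until it happens to emit "4".
attempts = 1
while random_oracle("What is 2+2?") != "4":
    attempts += 1
print(f"Random oracle happened to output '4' after {attempts} tries.")

# The constant oracle "answers" both questions identically, for the
# same non-reason, so calling one output correct and the other wrong
# describes our scoring of it, not anything the program did.
print(constant_oracle("What is 2+2?"))  # prints 4: looks "correct"
print(constant_oracle("What is 1+2?"))  # prints 4: looks "wrong"
```

Both programs emit “4” through a process that has nothing to do with arithmetic, which is the sense in which neither of them is ever right.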

Perhaps one way to lessen our collective confusion is to stick to AI-specific language. AI doesn’t write, get things correct, or make mistakes. It is a stochastic parrot with a huge reservoir of mostly garbage information from the internet, and it mindlessly uses learned statistical associations between language fragments to predict what ought to come next when parroting out new text more or less at random.
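
To give a flavour of what “statistical associations between language fragments” means, here is a deliberately crude Python sketch of next-word prediction using bigram counts. Real LLMs use neural networks over subword tokens and vastly more data, so treat this as a cartoon of the idea rather than a description of any actual system:

```python
import random
from collections import defaultdict

# A tiny "training corpus" standing in for internet-scale text.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Record which words have been observed to follow which (bigram counts).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# "Write" by repeatedly sampling a statistically plausible next word.
word = "the"
output = [word]
for _ in range(10):
    word = random.choice(following[word])
    output.append(word)
print(" ".join(output))  # e.g. "the dog sat on the mat . the cat chased"
```

Nothing in this loop knows what a cat or a mat is; it only knows which fragments have followed which in its corpus, which is exactly how fluency and understanding can come apart.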

If you don’t like the idea that what you get from LLMs will be a mishmash of the internet’s collective wisdom and delusion, presided over by an utterly unintelligent expert in word statistics, then you ought to be cautious about letting LLMs do your thinking for you, either as a writer or a reader.

Ologies on invisibility

Alie Ward’s marvellous science communication podcast has a new episode on invisibility: Invisible Photology (INVISIBILITY CLOAKS) with Dr. Greg Gbur.

I was just about bowled over during my exercise walk on the Beltline trail when Alie and Dr. Gbur discussed my Hyperface Halloween costume, designed to confound facial recognition systems.

Tomorrow I will read Dr. Gbur’s latest book: Invisibility: The History and Science of How Not To Be Seen.

6 million views on Flickr

One of the reasons I have always found the internet so exciting is that it facilitates a kind of interaction I find a bit magical: somebody helping someone else through information or material that has been publicly shared. That way, the person who needs help can make use of the photo or follow the instructions without any need to correspond with the person helping them, or for that helper to even still be alive.

My photography is released under a Creative Commons license to facilitate such usage. My usage guide explains the generous set of things you can do for free, including making prints for personal use or for inclusion in anything you aren’t selling.

History belongs to future generations

I disagree with the fundamental notion inherent to the supposed “right to be forgotten”: the presumption that the main and most important purpose of documenting world events is to depict your life history in an autobiographical sense. My conviction is that history belongs not to the subjects it is about, but to the future generations who will need it to understand their own situations and solve their own problems. When we censor the record out of vanity, or even out of compassion for errors long since atoned for, we may be denying the future something important. We act as benefactors of future generations by preserving what ordered and comprehensible information may eventually survive from our era, and we should distort it as little as possible. The world is so complex that events are impossible to understand while they are happening. The accounts and records we preserve are the clay which, through careful work, historians may later turn into bricks. We should not pre-judge what they will find important or what they ought to hear.

The trace each of us leaves on the broader world during our brief lives matters to other people, and the importance of their being well informed as they confront unforeseeable but considerable challenges outweighs our own interest in being remembered in as positive a light as possible, even when that requires omission or deception.

My hand-crafted text guarantee

Long ago, I turned off autocorrect on my phone. Sure, it would sometimes turn a typo into a properly typed word, saving me a second or two, but whenever it turned something correct or outside the dictionary into something wrong, it would annoy me enough to undo the value of hundreds of correct corrections.

Now the world is abuzz with ChatGPT and its ilk of so-called artificial intelligence that writes. Even people I know are excited about using it as a labour-saving device or for tedious tasks.

I will not.

While I have worked in a variety of job positions, the common characteristic has been the centrality of writing. I am a writer first and foremost, though I have never held that formal job title, and it is important to me and to my readers that the sentences, paragraphs, and documents I produce come from my own mind and draw on my own abilities: to express a thought in a comprehensible way, and to imagine what impression it will make on the reader and adapt my language accordingly.

To call ChatGPT-style AIs stupid and likely to be wrong gives them far too much credit. Stupidity implies at least some intelligence, just a low level of it. Likewise, you need at least a slight ability to distinguish true claims from false ones before what you produce can meaningfully be called accurate or inaccurate. A highly sophisticated parrot which regurgitates fragments of what it found online can clearly be very convincing at imitating thinking, but it’s a deceptive imitation and not the real thing. A ChatGPT-style AI will blithely repeat common falsehoods because all it is doing is telling you what sort of writing is probable in the world. At best, it gives you the wisdom of the crowd, and the whole basis of academic specialization, peer review, and editing from publishing houses is that serious texts should meet a much higher standard.

My pledge is that people who read my writing, whether in academic papers, job applications, love letters, blog posts, books, text messages, or sky-writing, can be confident that it came from my own brain and was expressed using my own words and reasoning. I will never throw a bullet point into a text generator to expand it into a sentence or paragraph, or use an AI to automatically slim down or summarize what I have written.

My writing is hand-crafted and brain-crafted. In a world where there will be more and more suspicion that anything a person wrote was actually co-written by a parrot with godlike memory but zero understanding, I think that kind of guarantee will become increasingly valuable. Indeed, part of me feels like we ought to make an uncontaminated archive of what has been written up until about now, so we at least have a time capsule from before laziness drove a lot of us to outsource one of the most essential and important human activities (writing) to a tech firm’s distillation of the speculative and faulty babble online, or even some newer language model trained only with more credible texts.

It is also worth remembering that, as ease of use leads language models to produce a torrent of questionable new content, the training sets for new models that use the internet as a data source will increasingly be contaminated by nonsense written earlier by other AIs.