AI that codes

I had been playing around with using Google’s Gemini 2.5 Pro LLM to make Python scripts for working with GPS files: for instance, adding data on the speed I was traveling at each point along my recorded tracks.
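
The sort of script involved is fairly simple. Here is a minimal sketch of my own (not the LLM’s actual output), assuming the third-party gpxpy library and a hypothetical file called rides.gpx; it estimates the speed at each point from the distance and time to the previous point:

```python
# Minimal sketch: estimate speed at each point of a GPX track from the
# distance and time between consecutive timestamped points.
# Assumes the third-party gpxpy library and a hypothetical file "rides.gpx".
import math

import gpxpy


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))


with open("rides.gpx") as f:
    gpx = gpxpy.parse(f)

for track in gpx.tracks:
    for segment in track.segments:
        for prev, curr in zip(segment.points, segment.points[1:]):
            if prev.time is None or curr.time is None:
                continue  # points without timestamps yield no speed
            dt = (curr.time - prev.time).total_seconds()
            if dt <= 0:
                continue
            dist = haversine_m(prev.latitude, prev.longitude,
                               curr.latitude, curr.longitude)
            print(f"{track.name}\t{curr.time.isoformat()}\t"
                  f"{dist / dt * 3.6:.1f} km/h")
```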

The process is a bit awkward. The LLM doesn’t know exactly what system you are running the code on, which can lead to a lot of back and forth when the commands and code it suggests aren’t quite right.

The other day, however, I noticed the ‘Build’ tab on the left side menu of Google’s AI Studio web interface. It provides a pretty amazing way to make an app from nothing, without writing any code. As a basic starting point, I asked for an app that can go through a GPX file with hundreds of hikes or bike rides, pull out the titles of all the tracks, and list them along with the dates they were recorded. This could all be done with command-line tools or self-written Python, but it was pretty amazing to watch for a couple of minutes while the LLM coded up a complete web app which produced the output that I wanted.
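
For the record, the self-written-Python route isn’t much code either. A minimal sketch of my own (not the app the LLM built), again assuming gpxpy and a hypothetical combined file all_tracks.gpx:

```python
# Minimal sketch: list each track's name alongside the date of its first
# timestamped point. Assumes gpxpy and a hypothetical file "all_tracks.gpx".
import gpxpy

with open("all_tracks.gpx") as f:
    gpx = gpxpy.parse(f)

for track in gpx.tracks:
    first_time = next(
        (p.time for seg in track.segments for p in seg.points if p.time),
        None,
    )
    date = first_time.date().isoformat() if first_time else "unknown date"
    print(f"{date}\t{track.name or 'untitled track'}")
```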

Much of this has been in service of a longstanding goal of adding new kinds of detail to my hiking and biking maps, such as showing the slope or speed at each point using different colours. I stepped up my experiment and asked directly for a web app that would ingest a large GPX file and output a map colour-coded by speed.
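
The colour coding itself is conceptually simple, whatever framework the generated app uses: each stretch between two points gets a colour picked from a ramp according to its speed. A rough sketch of the idea (my own, not the generated app’s code, using an arbitrary green-to-red ramp capped at a hypothetical max_kmh):

```python
# Rough sketch of speed-to-colour mapping: interpolate linearly from green
# (slow) to red (fast), capping at max_kmh. Not the generated app's code.
def speed_to_hex(speed_kmh: float, max_kmh: float = 40.0) -> str:
    t = max(0.0, min(speed_kmh / max_kmh, 1.0))  # normalise to [0, 1]
    red = int(255 * t)
    green = int(255 * (1 - t))
    return f"#{red:02x}{green:02x}00"

# Each drawn track segment would then be styled with the colour for its speed:
# speed_to_hex(8.0) gives a greenish hex code, speed_to_hex(35.0) a reddish one.
```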

Here are the results for my Dutch bike rides:

And for the mechanical Bike Share Toronto bikes:

I would prefer something that looks more like the output from QGIS, but it’s pretty amazing that this is possible at all. It also had a remarkable amount of difficulty with the seemingly simple task of adding a button to zoom the map to an extent that shows all the tracks without too much blank space around them.

Perhaps the most surprising part came when I submitted a prompt saying only that the map interface was jittery and awkward. Without any further instructions, it made a bunch of automatic code tweaks and suddenly the map worked much better.

It is really far, far from perfect or reliable. It is still very much in the dog-playing-a-violin stage, where it is impressive that it can be done at all, even if not skillfully.

Working on geoengineering and AI briefings

Last Christmas break, I wrote a detailed briefing on the existential risks to humanity from nuclear weapons.

This year I am starting two more: one on the risks from artificial intelligence, and one on the promises and perils of geoengineering, which I increasingly feel is emerging as our default response to climate change.

I have had a few geoengineering books in my book stacks for years, generally buried under the whaling books in the ‘too depressing to read’ zone. AI I have been learning a lot more about recently, including through Nick Bostrom and Toby Ord’s books and Robert Miles’ incredibly helpful YouTube series (based on Amodei et al’s instructive paper).

Related re: geoengineering:

Related re: AI:

NotebookLM on CFFD scholarship

I would have expected that by now someone would have written a comparative analysis on pieces of scholarly writing on the Canadian campus fossil fuel divestment movement: for instance, engaging with both Joe Curnow’s 2017 dissertation and mine from 2022.

So, I gave both public texts to NotebookLM to have it generate an audio overview. It wrongly assumes that Joe Curnow is a man throughout, and mangles the pronunciation of “Ilnyckyj” in a few different ways — but at least it acts like it has read the texts and cares about their content.

It is certainly muddled in places (though perhaps in ways I have also seen in scholarly literature). For example, it treats the “enemy naming” strategy as something that arose through the functioning of CFFD campaigns, whereas it was really part of 350.org’s “campaign in a box” from the beginning.

This hints to me at how large language models are going to be transformative for writers. Finding an audience is hard, and finding an engaged audience willing to share their thoughts back is nigh-impossible, especially if you are dealing with scholarly texts hundreds of pages long. NotebookLM will happily read your whole blog and then have a conversation about your psychology and interpersonal style, or read an unfinished manuscript and provide detailed advice on how to move forward. The AI isn’t doing the writing, but providing a sort of sounding board which has never existed before: almost infinitely patient, and not inclined to make its comments all about its social relationship with the author.

I wonder what effect this sort of criticism will have on writing. Will it encourage people to hew more closely to the mainstream view, by providing a critique that comes from a general-purpose LLM? Or will it help people dig ever deeper into a perspective that almost nobody shares, because the feedback comes from systems which are always artificially chirpy and positive, and because getting feedback this way removes real people from the process?

And, of course, what happens when the flawed output of these sorts of tools becomes public material that other tools are trained on?

NotebookLM on this blog for 2023 and 2024

I have been experimenting with Google’s NotebookLM tool, and I must say it has some uncanny capabilities. The one I have seen most discussed in the nerd press is the ability to create an automatic podcast in which synthetic hosts discuss any material you provide.

I tried giving it my last two years of blog content, and having it generate an audio overview with no additional prompts. The results are pretty thought-provoking.

AI image generation and the credibility of photos

When AI-assisted photo manipulation is easy to do and hard to detect, the credibility of photos as evidence is diminished:

No one on Earth today has ever lived in a world where photographs were not the linchpin of social consensus — for as long as any of us has been here, photographs proved something happened. Consider all the ways in which the assumed veracity of a photograph has, previously, validated the truth of your experiences. The preexisting ding in the fender of your rental car. The leak in your ceiling. The arrival of a package. An actual, non-AI-generated cockroach in your takeout. When wildfires encroach upon your residential neighborhood, how do you communicate to friends and acquaintances the thickness of the smoke outside?

For the most part, the average image created by these AI tools will, in and of itself, be pretty harmless — an extra tree in a backdrop, an alligator in a pizzeria, a silly costume interposed over a cat. In aggregate, the deluge upends how we treat the concept of the photo entirely, and that in itself has tremendous repercussions. Consider, for instance, that the last decade has seen extraordinary social upheaval in the United States sparked by grainy videos of police brutality. Where the authorities obscured or concealed reality, these videos told the truth.

Perhaps we will see a backlash against the trend where every camera is also a computer that tweaks the image to ‘improve’ it. For example, there could be cameras that generate a hash from the unedited image and retain it, allowing any subsequent manipulation to be identified.
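
As a sketch of that idea (my own speculation, not an existing camera feature): the camera would hash the raw file the moment it is captured, and anyone could later check whether a circulating copy still matches.

```python
# Sketch of the idea: hash the raw image at capture time, then check later
# whether a circulating copy still matches. Filenames are hypothetical.
import hashlib


def image_fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of the file's raw bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()


# At capture time:
original_hash = image_fingerprint("raw_capture.dng")

# Later, to test whether a shared copy has been altered:
print(image_fingerprint("shared_copy.dng") == original_hash)
```

A bare hash like this breaks as soon as the file is re-encoded, of course, so a workable version would presumably need the hash to be signed by the camera and stored somewhere trustworthy.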

Related:

Internet-ed

As of last night, my new dwelling has that most indispensable of features that makes a modern building a home: home internet and wifi.

I had been holding off due to my lack of income, but my brother Sasha asked me to give a remote presentation to his class, and I have had enough of the stress of trying to leech off Starbucks and Massey College wifi for important meetings.

A broad-ranging talk with James Burke

As part of promoting a new Connections series on Curiosity Stream launching on Nov. 9, I got the chance to interview historian of science and technology, science communicator, and series host James Burke:

The more interview-intensive part begins at 3:10.

Some documents from the history of fossil fuel divestment at the University of Toronto

Back in 2015, during the Toronto350.org / UofT350.org fossil fuel divestment campaign, I set up UofTFacultyDivest.com as a copy of what the Harvard campaign had up at harvardfacultydivest.com/.

The purposes of the site were to collect the attestations we needed for the formal university divestment policy, to have a repository of campaign-related documents, and to provide information about the campaign to anyone looking for it online.

The site was built with free WordPress software and plugins which have ceased to be compatible with modern web hosting, so I will re-list the important content here for the benefit of anyone seeking to learn about the campus fossil fuel divestment movement in the future:

Of course, U of T announced in 2021 that they would divest. Since then, the Climate Justice U of T group, which developed out of the Leap Manifesto group that organized the second fossil fuel divestment campaign at U of T (after Toronto350 / UofT350), has succeeded in pressuring the federated colleges of St. Michael’s, Trinity, and Victoria University to divest as well.