America is demolishing its brain

From NASA to the National Science Foundation to the Centers for Disease Control to the educational system, the United States under the Trump administration is deconstructing its own ability to think and to comprehend the complex global situation. A whole fleet of spacecraft — each unique in human history — risks being scrapped because the country is ruled by an anti-science ideology. They are coming with particular venom for spacecraft intended to help us understand the Earth’s climate and how we are disrupting it. Across every domain of human life which science and medicine have improved, we are being pulled backwards by those who reject learning from the truth the universe reveals to us, preferring ‘truths’ from religious texts which were assembled with little factual understanding in order to reassert and justify the prejudices of their creators.

The anti-science agenda will have a baleful influence on the young and on America’s position in the world. In any country, you are liable to see nerds sporting the NASA logo and pictures of iconic spacecraft — a form of cultural cachet which serves America well in being perceived as a global leader. Now, when an American rover has found intriguing signs of possible fossil life on Mars, there is little prospect that the follow-on sample return mission will be funded. Perhaps the near-term prospect of a Chinese human presence on the moon will bend the curve of political thought back toward funding space, though perhaps things will have decayed further by then.

The young are being dealt a double dose of pain. As Christian nationalism and far-right ideology erode the value of the educational system (transitioning toward a Chinese-style system of memorizing the government’s official lies and doctrine rather than seeking truth through skeptical inquiry), young people become less able to cope in a future where a high degree of technical and scientific knowledge is necessary to comprehend and thrive in the world. Meanwhile, ideologues are ravaging the medical system and, of course, a tremendous intergenerational conflict is brewing between the still-young and the soon-to-be-retired (if retirement continues to be a thing for any significant fraction of the population). Whereas we recently hoped for ever-improving health outcomes for everyone as technology advances, there is now the spectre of near-eradicated diseases re-emerging, in alliance with the antibiotic-resistant bacteria which we have so foolishly cultivated.

What’s happening is madness — another of the spasmodic reactionary responses to the Enlightenment and the Scientific Revolution which have been echoing for centuries. Unfortunately, it is taking place against the backdrop in which humanity is collectively choosing between learning to function as a planetary species and experiencing the catastrophe of civilizational collapse. Nuclear weapons have never posed a greater danger, and it exists alongside new risks from AI and biotechnology, and in a setting where the climate change which we have already locked in will continue to strain every societal system.

Perhaps I have watched too much Aaron Sorkin, but when I was watching the live coverage of the January 6th U.S. Capitol takeover, I expected that once security forces had restored order, politicians from both sides would condemn the political violence and wake up to the danger of the far-right populist movement. When they instead jumped right back to partisan mudslinging, I concluded that the forces pulling the United States apart are stronger than those holding it together. There is a kind of implicit assumption about the science and tech world: that it will carry on independently and separately, regardless of the silliness that politicians are getting up to. This misses several things, including how America’s scientific strength is very much a government-created and government-funded phenomenon, going back to the Second World War and beyond. It also misses the pan-societal ambition of the anti-science forces; they don’t want a science-free nook to sit in and read the Bible, but rather to impose a theocratic society on everyone. That is the prospect now facing us, and the evidence so far is that the forces in favour of truth, intelligence, and tolerance are not triumphing.

Nuclear risks briefing

Along with the existential risk to humanity posed by unmitigated climate change, I have been seriously learning about and working on the threat from nuclear weapons for over 20 years.

I have written an introduction to nuclear weapon risks for ordinary people, meant to help democratize and de-mystify the key information.

The topic is incredibly timely and pertinent. A global nuclear arms race is ongoing, and the US and Canada are contemplating a massively increased commitment to the destabilizing technology of ballistic missile defence. If citizens and states could just comprehend that nuclear weapons endanger them instead of making them safe, perhaps we could deflect onto a different course. Total and immediate nuclear weapon abolition is implausible, but much could be done to make the situation safer and avoid the needless expenditure of trillions on weapons that will (in the best case) never be used.

Nuclear powers could recognize that history shows it only takes a handful of bombs (minimal credible deterrence) to deter enemies from opportunistic attempts at decapitating attacks. States could limit themselves to the most survivable weapons, particularly avoiding those which are widely deployed where they could be stolen. They could keep warheads separate from delivery vehicles, to reduce the risk of accidental or unauthorized use. They could collectively renounce missile defences as useless against nuclear weapons. They could even share technologies and practices to make nuclear weapons safer, including designs less likely to detonate in fires and explosions, and which credibly cannot be used by anyone who steals them. Citizens could develop an understanding that nuclear weapons are shameful to possess, not impressive.

Even in academia and the media, everything associated with nuclear weapons tends to be treated as a priesthood where only the initiated, employed by the security state, are empowered to comment. One simple thing the briefing gets across is that all this information is sitting in library books. In a world so acutely threatened by nuclear weapons, people need the basic knowledge that allows them to think critically.

P.S. Since getting people to read the risk briefing has been so hard, my Rivals simulation is meant to repackage the key information about proliferation into a more accessible and interactive form.

Three heat wave densification rides

Here’s a bit of a neat animation which I put together showing three heat wave after-work rides this week.

The green, blue, and red tracks show my Dutch bike rides on Monday, Tuesday, and today.

The white tracks show all my other rides: Dutch bike (3,437 km), Bike Share Toronto mechanical (2,522 km), and loaner bikes (85 km):

The streets I sought out are little visited because they tend to be inconvenient, not serving as effective routes between the places beyond them. That leaves them blessed with light traffic, and their large properties host some of Toronto’s most ancient and impressive urban trees.

It’s remarkable that even someone trying to explore can ride past the same streets over and over, within the densest part of their ride network.

This also marks over 6,040 km of mechanical bike exercise rides in Toronto.

AI that codes

I had been playing around with using Google’s Gemini 2.5 Pro LLM to write Python scripts for working with GPS files: for instance, adding data on my speed at every point along recorded tracks.

The process is a bit awkward. The LLM doesn’t know exactly what system you are running the code on, which can lead to a lot of back and forth when commands and code aren’t quite right.
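For reference, the speed annotation itself is simple enough to sketch in plain Python. This is my own illustration, not the script the LLM produced: it parses the timestamped points of a GPX file and computes a speed for each segment between consecutive points.

```python
import math
import xml.etree.ElementTree as ET
from datetime import datetime

GPX = "{http://www.topografix.com/GPX/1/1}"

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371000.0 * math.asin(math.sqrt(a))

def speeds_per_point(gpx_xml):
    """Return (timestamp, speed in km/h) for each track point after the first."""
    points = []
    for trkpt in ET.fromstring(gpx_xml).iter(GPX + "trkpt"):
        time_el = trkpt.find(GPX + "time")
        if time_el is None:
            continue  # skip points without a timestamp
        when = datetime.fromisoformat(time_el.text.replace("Z", "+00:00"))
        points.append((when, float(trkpt.get("lat")), float(trkpt.get("lon"))))
    speeds = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(points, points[1:]):
        seconds = (t1 - t0).total_seconds()
        if seconds > 0:
            metres = haversine_m(la0, lo0, la1, lo1)
            speeds.append((t1, metres / seconds * 3.6))  # m/s -> km/h
    return speeds
```

Real GPS traces would also want smoothing and outlier handling, which is exactly where the back and forth with the LLM tends to happen.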

The other day, however, I noticed the ‘Build’ tab on the left side menu of Google’s AI Studio web interface. It provides a pretty amazing way to make an app from nothing, without writing any code. As a basic starting point, I asked for an app that can go through a GPX file with hundreds of hikes or bike rides, pull out the titles of all the tracks, and list them along with the dates they were recorded. This could all be done with command-line tools or self-written Python, but it was pretty amazing to watch for a couple of minutes while the LLM coded up a complete web app which produced the output that I wanted.
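That starting task is small enough to show what the app had to do. A hypothetical stand-alone version in plain Python (my sketch, not the generated app’s code) pulls each track’s name and the date of its first timestamped point:

```python
import xml.etree.ElementTree as ET

GPX = "{http://www.topografix.com/GPX/1/1}"

def list_tracks(gpx_xml):
    """Return (name, date) for each <trk>: its <name> element and the
    date of its first timestamped point."""
    tracks = []
    for trk in ET.fromstring(gpx_xml).iter(GPX + "trk"):
        name_el = trk.find(GPX + "name")
        name = name_el.text if name_el is not None else "(untitled)"
        time_el = trk.find(".//" + GPX + "time")
        date = time_el.text[:10] if time_el is not None else "(no date)"
        tracks.append((name, date))
    return tracks
```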

Much of this has been in service of a longstanding goal of adding new kinds of detail to my hiking and biking maps, such as showing the slope or speed at each point using different colours. I stepped up my experiment and asked directly for a web app that would ingest a large GPX file and output a map colour-coded by speed.
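The colour-coding at the heart of such a map is just a function from speed to colour. A minimal sketch, assuming a simple linear blue-to-red ramp (the function name and speed range are illustrative, not what the generated app used):

```python
def speed_to_hex(speed_kmh, vmin=0.0, vmax=35.0):
    """Map a speed to a hex colour on a linear blue (slow) to red (fast) ramp.

    Speeds outside [vmin, vmax] are clamped to the ends of the ramp."""
    frac = max(0.0, min(1.0, (speed_kmh - vmin) / (vmax - vmin)))
    red = int(255 * frac)
    blue = int(255 * (1 - frac))
    return f"#{red:02x}00{blue:02x}"
```

Each track segment then gets drawn as a short polyline in the colour returned for its speed.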

Here are the results for my Dutch bike rides:

And the mechanical Bike Share Toronto bikes:

I would prefer something that looks more like the output from QGIS, but it’s pretty amazing that it’s possible at all. It also had a remarkable amount of difficulty with the seemingly simple task of adding a button to zoom the map to an extent showing all the tracks, without too much blank space around them.
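For what it’s worth, the underlying computation for that button is just a padded bounding box handed to the map library’s fit-to-bounds call. A sketch, with hypothetical names:

```python
def padded_bounds(points, pad_frac=0.05):
    """Bounding box of (lat, lon) points, expanded by a small margin on each
    side, in the ((south, west), (north, east)) form that map libraries'
    fit-to-bounds calls typically expect."""
    lats = [lat for lat, _ in points]
    lons = [lon for _, lon in points]
    lat_pad = (max(lats) - min(lats)) * pad_frac or 0.001  # fallback for a single point
    lon_pad = (max(lons) - min(lons)) * pad_frac or 0.001
    return ((min(lats) - lat_pad, min(lons) - lon_pad),
            (max(lats) + lat_pad, max(lons) + lon_pad))
```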

Perhaps the most surprising part was when at one point I submitted a prompt that the map interface was jittery and awkward. Without any further instructions it made a bunch of automatic code tweaks and suddenly the map worked much better.

It is really far, far from perfect or reliable. It is still very much in the dog-playing-a-violin stage, where it is impressive that it can be done at all, even if not skillfully.

Experiential education on nuclear weapon proliferation

I have been searching for ways to get people to engage with the risks to humanity created by nuclear weapons.

The whole issue seems to collide with the affect problem: the commonplace intuitive belief that talking about good or bad things causes them to happen, or simply the instinct to move away from and avoid unpleasant issues.

Pleasant or not, nuclear weapon issues need to be considered. With the US-led international security order smashed by Donald Trump’s re-election and extreme actions, the prospect of regional arms races in the Middle East and Southeast Asia has never been greater and the resulting risks have never been so consequential.

To try to get over the ‘unwilling to talk about it’ barrier, I have been writing an interactive roleplaying simulation on nuclear weapon proliferation called Rivals. I am working toward a full prototype and play-testing, and to that end I will be attending a series of RPG design workshops at next month’s Breakout Con conference in Toronto.

I am very much hoping to connect with people who are interested in both the issue of nuclear weapon proliferation and the potential of this simulation as a teaching tool.

Working on geoengineering and AI briefings

Last Christmas break, I wrote a detailed briefing on the existential risks to humanity from nuclear weapons.

This year I am starting two more: one on the risks from artificial intelligence, and one on the promises and perils of geoengineering, which I increasingly feel is emerging as our default response to climate change.

I have had a few geoengineering books in my book stacks for years, generally buried under the whaling books in the ‘too depressing to read’ zone. AI I have been learning a lot more about recently, including through Nick Bostrom and Toby Ord’s books and Robert Miles’ incredibly helpful YouTube series (based on Amodei et al’s instructive paper).

Related re: geoengineering:

Related re: AI:

On the potential of superfast minds

The simplest example of speed superintelligence would be a whole brain emulation running on fast hardware. An emulation operating at a speed of ten thousand times that of a biological brain would be able to read a book in a few seconds and write a PhD thesis in an afternoon. With a speedup factor of a million, an emulation could accomplish an entire millennium of intellectual work in one working day.

To such a fast mind, events in the external world appear to unfold in slow motion. Suppose your mind ran at 10,000X. If your fleshy friend should happen to drop his teacup, you could watch the porcelain slowly descend toward the carpet over the course of several hours, like a comet silently gliding through space toward an assignation with a far-off planet; and, as the anticipation of the coming crash tardily propagates through the folds of your friend’s grey matter and from thence out to his peripheral nervous system, you could observe his body gradually assuming the aspect of a frozen oops—enough time for you not only to order a replacement cup but also to read a couple of scientific papers and take a nap.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. p. 64

NotebookLM on CFFD scholarship

I would have expected that by now someone would have written a comparative analysis of scholarly writing on the Canadian campus fossil fuel divestment movement: for instance, engaging with both Joe Curnow’s 2017 dissertation and mine from 2022.

So, I gave both public texts to NotebookLM to have it generate an audio overview. It wrongly assumes throughout that Joe Curnow is a man, and mangles the pronunciation of “Ilnyckyj” in a few different ways — but at least it acts like it has read the texts and cares about their content.

It is certainly muddled in places (though perhaps in ways I have also seen in scholarly literature). For example, it treats the “enemy naming” strategy as something that arose through the functioning of CFFD campaigns, whereas it was really part of 350.org’s “campaign in a box” from the beginning.

This hints to me at how large language models are going to be transformative for writers. Finding an audience is hard, and finding an engaged audience willing to share their thoughts back is nigh-impossible, especially if you are dealing with scholarly texts hundreds of pages long. NotebookLM will happily read your whole blog and then have a conversation about your psychology and interpersonal style, or read an unfinished manuscript and provide detailed advice on how to move forward. The AI isn’t doing the writing, but providing a sort of sounding board which has never existed before: almost infinitely patient, and not inclined to make its comments all about its social relationship with the author.

I wonder what effect this sort of criticism will have on writing. Will it encourage people to hew more closely to the mainstream view, since the critique comes from a general-purpose LLM? Or will it help people dig ever deeper into a perspective that almost nobody shares, because the feedback comes from systems which are always artificially chirpy and positive, and because getting feedback this way removes real people from the process?

And, of course, what happens when the flawed output of these sorts of tools becomes public material that other tools are trained on?