Open Process Manifesto

This document codifies some of my thinking on cooperation on complex problems, for the benefit of humanity and nature: the Open Process Manifesto

It is based on the recognition of our universal fallibility, our need to be understood, and our need to share tasks among people across space and time. To achieve those purposes, we need to be open about our reasoning and evidence, because that is how we treat others as intelligent partners who may be able to support the same cause through methods totally unknown and unavailable to us, across the world or centuries in the future.

America is demolishing its brain

From NASA to the National Science Foundation to the Centers for Disease Control to the educational system, the United States under the Trump administration is deconstructing its own ability to think and to comprehend the complex global situation. A whole fleet of spacecraft — each unique in human history — risks being scrapped because the country is ruled by an anti-science ideology. They are coming with particular venom for spacecraft intended to help us understand the Earth’s climate and how we are disrupting it. Across every domain of human life which science and medicine have improved, we are being pulled backwards by those who reject learning from the truth the universe reveals to us, preferring instead ‘truths’ from religious texts which were assembled with little factual understanding in order to reassert and justify the prejudices of their creators.

The anti-science agenda will have a baleful influence on the young and on America’s position in the world. In any country, you are liable to see nerds embracing the NASA logo and pictures of iconic spacecraft — a form of cultural cachet which serves America well in being perceived as a global leader. Now, when an American rover has found intriguing signs of possible fossil life on Mars, there is little prospect that the follow-on sample return mission will be funded. Perhaps the near-term prospect of a Chinese human presence on the Moon will bend the curve of political thought back toward funding space, though perhaps things will have further decayed by then.

The young are being dealt a double dose of pain. As Christian nationalism and far-right ideology erode the value of the educational system (transitioning toward a Chinese-style system of memorizing the government’s official lies and doctrine rather than seeking truth through skeptical inquiry), young people become less able to cope in a future where a high degree of technical and scientific knowledge is necessary to comprehend and thrive in the world. Meanwhile, ideologues are ravaging the medical system and, of course, a tremendous intergenerational conflict is brewing between the still-young and the soon-to-be-retired (if retirement continues to be a thing for any significant fraction of the population). Whereas we recently hoped for ever-improving health outcomes for everyone as technology advances, now there is a spectre of near-eradicated diseases re-emerging, in alliance with the antibiotic-resistant bacteria which we have so foolishly cultivated.

What’s happening is madness — another of the spasmodic reactionary responses to the Enlightenment and the Scientific Revolution which have been echoing for centuries. Unfortunately, it is taking place against a backdrop in which humanity is collectively choosing between learning to function as a planetary species and experiencing the catastrophe of civilizational collapse. Nuclear weapons have never posed a greater danger, and that danger exists alongside new risks from AI and biotechnology, in a setting where the climate change we have already locked in will continue to strain every societal system.

Perhaps I have watched too much Aaron Sorkin, but when I was watching the live coverage of the January 6th U.S. Capitol takeover, I expected that once security forces had restored order, politicians from both sides would condemn the political violence and wake up to the danger posed by the far-right populist movement. When they instead jumped right back to partisan mudslinging, I concluded that the forces pulling the United States apart are stronger than those holding it together. There is a kind of implicit assumption about the science and tech world: that it will continue independently and separately regardless of the silliness that politicians are getting up to. This misses several things, including how America’s scientific strength is very much a government-created and government-funded phenomenon, going back to the Second World War and beyond. It also misses the pan-societal ambition of the anti-science forces; they don’t want a science-free nook to sit in and read the Bible, but rather to impose a theocratic society on everyone. That is the prospect now facing us, and the evidence so far is that the forces in favour of truth, intelligence, and tolerance are not triumphing.

Nuclear risks briefing

Along with the existential risk to humanity posed by unmitigated climate change, I have been seriously learning about and working on the threat from nuclear weapons for over 20 years.

I have written an introduction to nuclear weapon risks for ordinary people, meant to help democratize and de-mystify the key information.

The topic is incredibly timely and pertinent. A global nuclear arms race is ongoing, and the US and Canada are contemplating a massively increased commitment to the destabilizing technology of ballistic missile defence. If citizens and states could just comprehend that nuclear weapons endanger them instead of making them safe, perhaps we could deflect onto a different course. Total and immediate nuclear weapon abolition is implausible, but much could be done to make the situation safer and avoid the needless expenditure of trillions on weapons that will (in the best case) never be used.

Nuclear powers could recognize that history shows it only takes a handful of bombs (minimal credible deterrence) to deter enemies from attempting opportunistic decapitation attacks. States could limit themselves to the most survivable weapons, particularly avoiding those which are widely deployed where they could be stolen. They could keep warheads separate from delivery devices, to reduce the risk of accidental or unauthorized use. They could collectively renounce missile defences as useless against nuclear weapons. They could even share technologies and practices to make nuclear weapons safer, including designs less likely to detonate in fires and explosions, and which credibly cannot be used by anyone who steals them. Citizens could come to understand that nuclear weapons are shameful to possess, not impressive.

Even in academia and the media, everything associated with nuclear weapons tends to be treated as a priesthood where only the initiated, employed by the security state, are empowered to comment. One simple thing the briefing gets across is that all this information is sitting in library books. In a world so acutely threatened by nuclear weapons, people need the basic knowledge that allows them to think critically.

P.S. Since getting people to read the risk briefing has been so hard, my Rivals simulation is meant to repackage the key information about proliferation into a more accessible and interactive form.

Fiction, versus reality’s lack of resolution

In all the time I have been concerned, and later terrified, about climate change and the future of life on Earth, I still had the narrative convention of fiction influencing my expectations: the emergence of a big problem imperils and inspires a group of people to find solutions, and eventually the people threatened by the problem accept, if not embrace, those solutions. A tolerable norm is disrupted and then restored because people have the ability to perceive and reason, and the willingness and virtue to act appropriately when they see what’s wrong.

Now, I feel acutely confronted by what a bad model for human reactions this is. It seems to me now that we almost never want to understand problems or their real causes; we almost always prefer an easy answer and somebody to blame. The narrative arc of ‘problem emerges, people understand problem, people solve problem’ has a real-world equivalent more like ‘problems emerge but people usually miss or misunderstand them, and where they do perceive problems to exist they interpret them using stories where the most important purpose is to justify and protect the powerful’.

If the history happening around us were a movie, it might be one that I’d want to walk out of, between the unsatisfying plot and the unsympathetic actors. Somehow the future has come to feel more like a sentence than a promise: something which will need to be endured, watching everything good that humankind has achieved getting eroded and destroyed, and in which having the ability to understand and name what is happening just leads to those around you punishing and rejecting you by reflex.

The uncertainty principle and limits of knowledge

[Heisenberg and Bohr] left the park and plunged into the city streets while they discussed the consequences of Heisenberg’s discovery, which Bohr saw as the cornerstone upon which a truly new physics could be founded. In philosophical terms, he told him as he took his arm, this was the end of determinism. Heisenberg’s uncertainty principle shredded the hopes of all those who had put faith in the clockwork universe Newtonian physics had promised. According to the determinists, if one could reveal the laws that governed matter, one could reach back to the most archaic past and predict the most distant future. If everything that occurred was the direct consequence of a prior state, then merely by looking at the present and running the equations it would be possible to achieve a godlike knowledge of the universe. These hopes were shattered in light of Heisenberg’s discovery: what was beyond our grasp was neither the future nor the past, but the present itself. Not even the state of one miserable particle could be perfectly apprehended. However much we scrutinized the fundamentals, there would always be something vague, undetermined, uncertain, as if reality allowed us to perceive the world with crystalline clarity with one eye at a time, but never with both.

Labatut, Benjamín. When We Cease to Understand the World. New York Review of Books, 2020. p. 161–2

Carney on the carbon bubble and stranded assets

By some measures, based on science, the scale of the energy revolution required is staggering.

If we had started in 2000, we could have hit the 1.5°C objective by halving emissions every thirty years. Now, we must halve emissions every ten years. If we wait another four years, the challenge will be to halve emissions every year. If we wait another eight years, our 1.5°C carbon budget will be exhausted.

The entrepreneur and engineer Saul Griffith argues that the carbon-emitting properties of our committed physical capital mean that we are locked in to use up the residual carbon budget, even if no one buys another car with an internal combustion engine, installs a new gas-fired hot-water heater or, at a larger scale, constructs a new coal power plant. That’s because, just as we expect a new car to run for a decade or more, we expect our machines to be used until they are fully depreciated. If the committed emissions of all the machines over their useful lives will largely exhaust the 1.5°C carbon budget, going forward we will need almost all new machines, like cars, to be zero carbon. Currently, electric car sales, despite being one of the hottest segments of the market, are as a percentage in single digits. This implies that, if we are to meet society’s objective, there will be scrappage and stranded assets.

To meet the 1.5°C target, more than 80 per cent of current fossil fuel reserves (including three-quarters of coal, half of gas, one-third of oil) would need to stay in the ground, stranding these assets. The equivalent for less than 2°C is about 60 per cent of fossil fuel assets staying in the ground (where they would no longer be assets).

When I mentioned the prospect of stranded assets in a speech in 2015, it was met with howls of outrage from the industry. That was in part because many had refused to perform the basic reconciliation between the objectives society had agreed in Paris (keeping temperature increases below 2°C), the carbon budgets science estimated were necessary to achieve them and the consequences this had for fossil fuel extraction. They couldn’t, or wouldn’t, undertake the basic calculations that a teenager, Greta Thunberg, would easily master and powerfully project. Now recognition is growing, even in the oil and gas industry, that some fossil fuel assets will be stranded — although, as we shall see later in the chapter, pricing in financial markets remains wholly inconsistent with the transition.

Carney, Mark. Value(s): Building a Better World for All. Penguin Random House Canada, 2021. p. 273–4, 278
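The halving arithmetic Carney describes can be checked with a back-of-envelope model. If emissions decay by halving every h years, cumulative future emissions are the integral of E₀·2^(−t/h), which equals E₀·h/ln 2. The annual-emissions and budget figures below are my own illustrative round numbers, not Carney's:

```python
import math

def total_emissions(annual_now, halving_years):
    """Cumulative future emissions (Gt CO2) if emissions halve every
    `halving_years` years: integral of E0 * 2^(-t/h) dt = E0 * h / ln 2."""
    return annual_now * halving_years / math.log(2)

def required_halving_period(annual_now, remaining_budget):
    """Halving period (years) that exactly exhausts the remaining budget."""
    return remaining_budget * math.log(2) / annual_now

# Illustrative assumptions (not Carney's figures):
ANNUAL = 40.0    # roughly 40 Gt CO2 emitted per year
BUDGET = 400.0   # roughly 400 Gt CO2 remaining for 1.5 degrees C

print(round(total_emissions(ANNUAL, 10), 1))            # halving each decade
print(round(required_halving_period(ANNUAL, BUDGET), 1))  # period the budget allows
```

On these rough numbers, halving every decade emits about 577 Gt, overshooting a 400 Gt budget, and staying within it requires halving roughly every 7 years — which is the structure of Carney's point: every year of delay shortens the allowable halving period.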

Working on geoengineering and AI briefings

Last Christmas break, I wrote a detailed briefing on the existential risks to humanity from nuclear weapons.

This year I am starting two more: one on the risks from artificial intelligence, and one on the promises and perils of geoengineering, which I increasingly feel is emerging as our default response to climate change.

I have had a few geoengineering books in my book stacks for years, generally buried under the whaling books in the ‘too depressing to read’ zone. I have been learning a lot more about AI recently, including through Nick Bostrom and Toby Ord’s books and Robert Miles’ incredibly helpful YouTube series (based on Amodei et al.’s instructive paper).

Related re: geoengineering:

Related re: AI:

On the potential of superfast minds

The simplest example of speed superintelligence would be a whole brain emulation running on fast hardware. An emulation operating at a speed of ten thousand times that of a biological brain would be able to read a book in a few seconds and write a PhD thesis in an afternoon. With a speedup factor of a million, an emulation could accomplish an entire millennium of intellectual work in one working day.

To such a fast mind, events in the external world appear to unfold in slow motion. Suppose your mind ran at 10,000X. If your fleshy friend should happen to drop his teacup, you could watch the porcelain slowly descend toward the carpet over the course of several hours, like a comet silently gliding through space toward an assignation with a far-off planet; and, as the anticipation of the coming crash tardily propagates through the folds of your friend’s grey matter and from thence out to his peripheral nervous system, you could observe his body gradually assuming the aspect of a frozen oops—enough time for you not only to order a replacement cup but also to read a couple of scientific papers and take a nap.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. p. 64

General artificial intelligences will be aliens

[A]rtificial intelligence need not much resemble a human mind. AIs could be—indeed, it is likely that most will be—extremely alien. We should expect that they will have very different cognitive architectures than biological intelligences, and in their early stages of development they have very different profiles of cognitive strengths and weaknesses (though, as we shall later argue, they could eventually overcome any initial weakness). Furthermore, the goal systems of AIs could diverge radically from those of human beings. There is no reason to expect a generic AI to be motivated by love or hate or pride or other such common human sentiments: these complex adaptations would require deliberate expensive effort to recreate in AIs. This is at once a big problem and a big opportunity.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. p. 35