Fissure in the Beaufort ice pack

During the past month, a massive piece of ice has broken off west of Banks Island, in the Canadian Arctic. This picture shows the area in question, as does this animation from the US National Oceanic and Atmospheric Administration. At the same time as the fissure, there was an unusual 45-day period of open water in the Bering Strait.

For a sense of scale, here is a map showing Banks Island in relation to the rest of Canada. While one event of this kind cannot be understood without comparison to what is happening in other areas and what has happened at other times, it is a reminder of the dynamic character of the polar icecap, even in the middle of winter. According to NOAA’s 2007 Arctic Report Card, anomalously high temperatures are yielding “relatively younger, thinner ice cover” which is “intrinsically more susceptible to the effects of atmospheric and oceanic forcing.”

It will be fascinating to see what happens to the icecap next summer: specifically, how the level of ice cover will compare to the shocking minimum in the summer of 2007.

[Correction: 15 January 2008] The open water in the Bering Sea is unrelated to this fissure, though both took place at the same time. Both pieces of information are listed in this report from the Canadian Ice Service.

Canada’s nuclear waste

Hilary McNaughton at Darma’s Kitchen

After being removed from a reactor, nuclear fuel is both too radioactive and too physically hot to be reprocessed or placed in dry storage. As such, it is kept in cooling pools for a period of five to six years. Given the absence of long-term geologic storage facilities, all of Canada’s high level waste is currently in cooling pools or on-site dry cask storage. On a per-capita basis, Canada produces more high level nuclear waste than any other state – a total of 1,300 tonnes in 2001.

Canada currently has eleven nuclear waste storage facilities. Among these, one is in the process of being decommissioned and six contain high level waste. Five sites have waste in dry storage casks: Darlington, Bruce, Pickering, Gentilly, and Point Lepreau. Other facilities include spent fuel pools. According to the Canadian Nuclear Safety Commission (CNSC), all Canadian wastes are currently in ‘storage’, defined as: “a short-term management technique that requires human intervention for maintenance and security and allows for recovery of the waste.”

In 2002, a major review of waste disposal options was undertaken by the Nuclear Waste Management Organization (NWMO). Their final report – released in November 2005 – endorsed a system of “Adaptive Phased Management” employing both interim shallow storage and deep geological storage, with the possibility of future recovery of materials. Such recovery would be motivated either by concerns about leakage potential or a desire to process the fuel into something useful. The NWMO is currently engaged in a process of site selection, intended to lead eventually to a National Nuclear Waste Repository.

The nuclear waste problem

From both an environmental and public support standpoint, the generation of nuclear waste is one of the largest drawbacks of nuclear fission as a power source. Just as the emission of greenhouse gases threatens future generations with harmful ecological outcomes, the production of nuclear wastes at all stages in the fuel cycle presents risks both to those alive in the present and to those who will be alive in the future, across a span of time not generally considered by human beings.

Wastes like Plutonium-239 remain highly dangerous for tens of millennia: a span roughly equivalent to the total historical record of human civilizations. Furthermore, while most states using nuclear power have declared an intention of creating geological repositories for wastes, no state has such a facility in operation. The decades-long story of the planned Yucca Mountain repository in the United States demonstrates some of the practical, political, and legal challenges to establishing such facilities in democratic societies.

Dry cask storage is not an acceptable long-term option, as suggested by its CNSC categorization as “a short-term management technique.” When dealing with wastes dangerous for millennia, it cannot be assumed that regular maintenance and inspection will continue. Storage systems must be ‘passively safe:’ able to contain the wastes they store for the full duration of their dangerous lives, without the need for active intervention from human beings. To date, no such facilities exist.

The sex life of corn

Corn, the key species in modern industrial agriculture, is completely incapable of reproducing itself in nature. The cobs that concentrate the seeds so nicely for us are not conducive to reproduction because, if planted, the corn grows so densely it dies. As such, the continued existence of Zea mays depends upon people continuing to divide the cobs and plant a portion of the seeds.

Corn is apparently a descendant of an earless grass called teosinte. It is hard to overstate the consequences of a heavily mutated strain of teosinte finding, in humanity, a species capable of closing a reproductive loop that would otherwise have remained open and led to swift extinction.

The actual mechanics of corn reproduction are similarly odd. Male gametes are produced at the top of the plant, inside the flower-like tassel. At a certain time of year, these release the pollen that fertilizes the female gametes located in the cobs. The pollen reaches them through single strands of silk (called styles) that run through the husk. When a grain of pollen comes into contact with one of these threads, it divides into two identical cells. One tunnels down through the strand into the kernel, a six- to eight-inch distance crossed in several hours; the other follows it down and fuses with an egg to form an embryo, while the digger grows into the endosperm.

Another curious aspect of corn reproduction is that, because of seed hybridization (not genetic modification), every stalk of corn in a field is genetically identical to every other stalk. This is because the seeds come from crossing two inbred lines, each made to self-pollinate for several generations until it is genetically uniform; crossing the two yields batches of identical hybrid seed that farmers buy every year. They do so because the yield from the identical seeds is enough higher than that of the mixed generation which would follow them to justify the cost of buying new seed.

Such hybrid corn pushed yields from twenty bushels an acre – the amount managed by both Native Americans and farmers in the 1920s – to about two hundred bushels an acre. Given the degree to which we are all constructed more from corn than from any other source of materials (most of the meat, milk, and cheese we eat is ultimately made from corn, as are tons of processed foods), these remarkable processes of reproduction and agriculture deserve further study. For my part, I am reading Michael Pollan’s The Omnivore’s Dilemma. I am only 10% into it, but it has been quite fascinating so far.

Per capita emissions and fairness

Per capita emissions by state, compared with sustainable emissions

As mentioned before, the Stern Review cites a figure of five gigatonnes of carbon dioxide equivalent as the quantity that can be sustainably absorbed by the planet each year. Given the present population of 6.6 billion people, that means our fair share is about 750kg of emissions each, per year. Right now, Canadian emissions are about 23 tonnes per person per year. They are highest in Alberta – 71 tonnes – and lowest in Quebec – 12 tonnes. Even in hydro-blessed Quebec, emissions are roughly sixteen times the sustainable level.

Everybody knows that emissions in the developed world are too high. The average Australian emits 25.9 tonnes. For Americans it is 22.9; the nuclear-powered French emit 8.7 tonnes each. The European average is 10.6 tonnes per person, while North America weighs in at 23.1. One round-trip flight from New York to London produces the amount of greenhouse gas that one person can sustainably emit in three and a half years. These are not the kind of numbers that can be brought down with a few more wind turbines and hybrid cars. The energy basis of all states needs to be fundamentally altered, replacing a system in which energy production and use are tied to greenhouse gas emissions with one in which they are not.
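As a quick check on the arithmetic above, here is a minimal sketch in Python. The five gigatonne budget, the population figure, and the national totals are the ones quoted in this post; the implied figure for a New York–London round trip is simply derived from the three-and-a-half-year claim, not taken from any external source.

```python
# Back-of-the-envelope check of the per-capita figures above.
# The 5 Gt CO2e budget, the 6.6 billion population, and the national totals are
# the post's figures; the flight number is derived from the post's own claim.

SUSTAINABLE_BUDGET_T = 5e9      # tonnes of CO2 equivalent absorbed per year
POPULATION = 6.6e9              # people

per_capita_t = SUSTAINABLE_BUDGET_T / POPULATION
print(f"Fair share: {per_capita_t * 1000:.0f} kg CO2e per person per year")  # ~750-760 kg

emissions_t = {"Alberta": 71, "Canada": 23, "Quebec": 12, "Australia": 25.9,
               "USA": 22.9, "France": 8.7, "China": 3.9, "India": 1.8}
for region, tonnes in emissions_t.items():
    print(f"{region}: {tonnes / per_capita_t:.0f}x the sustainable level")

# The flight claim: one NY-London round trip uses about 3.5 years of the budget,
# which implies roughly 3.5 * 0.757 ≈ 2.6 tonnes CO2e per round trip.
print(f"Implied round-trip emissions: {3.5 * per_capita_t:.1f} t CO2e")
```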

What is less often acknowledged is that emissions in the developing world are already too high. Chinese per capita emissions are 3.9 tonnes, while those in India are 1.8. The list of countries by per-capita greenhouse gas emissions on Wikipedia shows three states where per-capita emissions are below 750kg: Comoros, Kiribati, and Uruguay. Even the average level of emissions in sub-Saharan Africa is almost six times the sustainable level for our current world population.

And our world population is growing.

All this raises serious questions of fairness. Obviously, we in North America and Europe have been overshooting our sustainable level of emissions for a long time. Do developing countries have a similar right to overshoot? How are their rights affected by what we now know about climate change? If they do have a right to emit more than 750kg per person, does that mean people in developed states have a corresponding duty to emit less than that? Even if we emitted nothing at all, we couldn’t provide enough space within the sustainable carbon budget for them to emit as much as we are now.

The only option is for everyone to decarbonize. The developed world needs to lead the way, in order to show that it can be done. The developing world needs to acknowledge that the right to develop does not trump other forms of legal and ethical obligation: both to those alive now and to future generations. People in both developed and developing states may also want to reconsider their assumptions about the desirability of population growth. Spending a few centuries with people voluntarily restricting their fertility below the natural rate of replacement could do a lot to limit the magnitude of the ecological challenges we will face as a species.

Regression to the mean

Emily Horn at Canada Place

All manner of diets, supplements, and vitamins compete for customers and adherents. Given that we are creatures of biochemistry, it is plausible that such chemicals will have effects on human health. Unfortunately, they do not act alone, but rather within a complex web of interactions: genes, environmental effects, physiological changes, and much else. This makes it exceedingly difficult to isolate and prove the effect of any particular substance, especially given that the effect may differ between people, or in the same person at different times.

The phenomenon of regression to the mean is especially confounding when it comes to individuals. We do not generally change our regimen of diet or supplements unless we perceive something to be wrong. We take vitamins when we feel ill, and analgesics when we have a headache. Given that both illness and headaches tend to rise to a peak and then taper off naturally, virtually any action we take in response will precede an improvement. It doesn’t matter whether you spend a pile of money on remedies, spend your time praying, or simply sit still and wait.

Regression to the mean is a product of basic statistics and isn’t caused by anything chemical or biological. It is nearly tautological to say that conditions are usually close to the mean and that, most of the time, states far from the mean will be followed by ones closer to it. While it is true that we can do things likely to shorten or lengthen a period of illness or discomfort, it is virtually impossible for an individual to know whether such an effect has occurred. Did taking those vitamins shorten or lengthen the cold? Did they have no effect? What about those glasses of red wine?
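A small simulation makes the point concrete: if a completely useless remedy is only taken on unusually bad days, it will still be followed by improvement on average. Everything below is an invented toy example, not data.

```python
import random

# Toy illustration of regression to the mean: daily "symptom severity" is just
# random noise around a personal baseline, and the "remedy" does nothing at all.
random.seed(1)

BASELINE, NOISE = 3.0, 2.0
days = [max(0.0, random.gauss(BASELINE, NOISE)) for _ in range(100_000)]

# We only reach for a remedy on unusually bad days (severity above a threshold).
THRESHOLD = 6.0
improvements = [
    today - tomorrow
    for today, tomorrow in zip(days, days[1:])
    if today > THRESHOLD
]

avg = sum(improvements) / len(improvements)
print(f"Average next-day 'improvement' after taking the useless remedy: {avg:.2f}")
# Severity drops on average after every bad day, simply because bad days are
# outliers -- no chemistry or biology required.
```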

The only sensible course of action is to essentially disregard our own experiences, except where there is both a reasonably large body of evidence (ideally in the form of a large number of double-blind, controlled trials) and a plausible explanation for the mechanism of action. Failure to employ such checks against hasty reasoning leaves us vulnerable to the pernicious human tendency to see causal relationships everywhere, without the scepticism that is critical for separating conjecture from an investigated hypothesis.

A query for any lurking physicists

I was having a conversation this afternoon about the Tunguska event: a huge explosion that occurred in Russia in 1908. I had always heard that it was caused by a meteor impact, though apparently some other explanations have also been considered. One that I just heard about is the possibility that it was caused by a collision between the earth and some antimatter.

Wikipedia suggests that this explanation isn’t credible, but it does leave me wondering: in a collision between a particle of matter and a particle of antimatter travelling in opposite directions at different speeds, what happens to the net momentum of the pair when they annihilate? If antimatter did hit the earth, it would start to strike particles of matter in the upper atmosphere, the particle pairs would annihilate one another, and energy would be released. Would all that happen before the antimatter hit the ground? Presumably, it would encounter its own mass in ordinary matter before then. If so, what would the effect on the surface of the planet be?

Conservation of energy dictates that the kinetic energy in the faster particle would need to go somewhere. Presumably, it would manifest in the production of more energy during the annihilation event. As such, I suspect the antimatter clump would get blasted apart in the upper atmosphere and produce some kind of horrible shower of radiation, though nothing in the way of direct physical debris.
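For a rough sense of scale, annihilation releases the rest energy of both the antimatter and the ordinary matter it meets, E = 2mc². The sketch below assumes the commonly cited Tunguska yield of roughly 10–15 megatons of TNT (a figure not given in this post); it suggests only a few hundred grams of antimatter would suffice, and that the clump’s incoming kinetic energy would barely register.

```python
# Rough scale of matter-antimatter annihilation: E = 2 * m * c^2, since the
# antimatter and an equal mass of ordinary matter are both converted to energy.
C = 2.998e8               # speed of light, m/s
MEGATON_TNT_J = 4.184e15  # joules per megaton of TNT

def antimatter_mass_for_yield(megatons: float) -> float:
    """Mass of antimatter (kg) whose annihilation releases the given yield."""
    energy_j = megatons * MEGATON_TNT_J
    return energy_j / (2 * C**2)

# Assumption: Tunguska's yield is often estimated at around 10-15 megatons.
for mt in (10, 15):
    print(f"{mt} Mt would require roughly {antimatter_mass_for_yield(mt) * 1000:.0f} g of antimatter")

# For comparison, the kinetic energy of ~0.35 kg arriving at a meteoric 30 km/s is
# about 0.5 * 0.35 * (3e4)**2 ≈ 1.6e8 J -- negligible next to the ~6e16 J of rest
# energy released, so the incoming momentum adds very little to the total.
```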

HVDC transmission for renewable energy

Power lines in Vancouver

One limitation of renewable sources of energy is that they are often best captured in places far from where energy is used: remote bays with large tides, desert areas with bright and constant sun, and windswept ridges. In these cases, transmitting the power over standard alternating current (AC) power lines can involve very significant losses.

This is where high voltage direct current (HVDC) transmission lines come in. Originally developed in the 1930s, HVDC technology is only really suited to long-range transmission. This is because of the static inverters that must be used to convert power to DC for transmission and back to AC at the receiving end. These are expensive devices, both in terms of capital cost and energy losses, so HVDC only pays off when the lower losses along a long line outweigh them. With contemporary HVDC technology, line losses can be kept to about 3% per 1000km. This makes the connection of remote generating centres much more feasible.
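Here is a minimal sketch of that trade-off. Only the 3% per 1000km figure comes from this post; the converter-station loss and the AC line loss are placeholder assumptions, since real values depend on voltage, conductors, and load.

```python
# Sketch of the HVDC vs AC trade-off: HVDC pays a fixed loss at the converter
# stations but loses less per kilometre along the line. Only the 3%/1000 km DC
# figure comes from the text; the other numbers are illustrative assumptions.

DC_CONVERTER_LOSS = 0.015           # assumed: ~1.5% total for both converter stations
DC_LINE_LOSS_PER_KM = 0.03 / 1000   # 3% per 1000 km (from the text)
AC_LINE_LOSS_PER_KM = 0.07 / 1000   # assumed: ~7% per 1000 km for a long AC line

def delivered_fraction(distance_km: float, fixed_loss: float, per_km_loss: float) -> float:
    """Fraction of sent power delivered, using a simple linear loss model."""
    return (1 - fixed_loss) * max(0.0, 1 - per_km_loss * distance_km)

for d in (250, 500, 1000, 2000, 3000):
    ac = delivered_fraction(d, 0.0, AC_LINE_LOSS_PER_KM)
    dc = delivered_fraction(d, DC_CONVERTER_LOSS, DC_LINE_LOSS_PER_KM)
    better = "HVDC" if dc > ac else "AC"
    print(f"{d:>5} km: AC {ac:.1%}, HVDC {dc:.1%} -> {better} delivers more")
```

Under these assumed numbers, the fixed converter loss means AC wins over short distances and HVDC wins over long ones, which is the reason the technology is only suited to long-range transmission.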

HVDC has another advantage: it can be used as a link between AC systems that are out of sync with each other. This could be different national grids running on different frequencies; it could be different grids on the same frequency with different timing; finally, it could be the multiple unsynchronized AC currents produced by something like a field of wind turbines.

Building national and international HVDC backbones is probably necessary to achieve the full potential of renewable energy. Because of their ability to limit losses, they can play a vital role in load balancing. With truly comprehensive systems, wind power from the west coast of Vancouver Island could compensate when the sun isn’t shining in Arizona. Likewise, offshore turbines in Scotland could complement solar panels in Italy and hydroelectric dams in Norway. With some storage capacity and a sufficient diversity of sources, renewables could provide all the electricity we use – including quantities sufficient for electric vehicles, which could be charged at times when demand for other things is low.

With further technological improvements, the cost of static inverters can probably be reduced. So too, perhaps, the per-kilometre energy losses. All told, investing in research on such renewable-facilitating technologies seems a lot more sensible than gambling on the eventual existence of ‘clean’ coal.

A grand solar plan for the United States

Sign in Sophie’s Cosmic Cafe

The latest issue of Scientific American features an article about a ‘grand solar plan.’ The idea is to install massive solar arrays in the American southwest, then use high voltage direct current transmission lines to transfer the energy to populated areas. The intention is to build 3,000 gigawatts of generating capacity by 2050 – a quantity that would require 30,000 square miles of photovoltaic arrays. This would cost about $400 billion and produce 69% of all American electricity and 35% of all energy used in transport (including electric cars and plug-in hybrids). The plan depends upon storing compressed air in underground caverns to balance electricity supply and demand. The authors anticipate that full implementation of the plan would cut American greenhouse gas emissions to 62% below 2005 levels by 2050, even assuming a 1% annual increase in total energy usage.
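As a rough sanity check on those numbers, the implied power density works out to something plausible for large photovoltaic farms. The calculation below uses only the capacity and area figures quoted above.

```python
# Sanity check: what power density do 3,000 GW over 30,000 square miles imply?
CAPACITY_W = 3_000e9        # 3,000 GW of generating capacity (from the article)
AREA_M2 = 30_000 * 2.59e6   # 30,000 square miles converted to square metres

density = CAPACITY_W / AREA_M2
print(f"Implied capacity density: {density:.0f} W per square metre")  # ~39 W/m^2
# With peak sunlight around 1,000 W/m^2, that corresponds to an overall land-use
# efficiency on the order of a few percent -- consistent with arrays that need
# spacing and access roads, and panels converting roughly 10-14% of incident light.
```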

The authors stress that the plan requires only modest and incremental improvements in solar technology. For instance, the efficiency of solar cells must be increased from the present level of about 10% to 14%. The pressurized cavern approach must also be tested and developed, and a very extensive new system of long-distance transmission lines would need to be built. While the infrastructure requirements are daunting, the total cost anticipated by the authors seems manageable. As they stress, it would cost less per year than existing agricultural subsidy programs.

Depending on solar exclusively is probably not socially or economically optimal. The authors implicitly acknowledge this when they advocate combining the solar system with wind, biomass, and geothermal sources in order to generate 100% of American electricity needs and 90% of total energy needs by 2100. Whether this particular grand plan is technically, economically, and politically viable or not, such publications do play a useful role in establishing the parameters of the debate. Given the ongoing American election – and the potential for the next administration to strike out boldly along a new course – such ideas are especially worthy of examination and debate. It is well worth reading the entire article.

Three climatic binaries

Statue in North Vancouver

One way to think about the issue of mitigating climate change is to consider three binary variables:

  1. Cooperation
  2. Expense
  3. Disaster

By these I mean:

  1. Is there a perception that all major emitters are making a fair contribution to addressing the problem?
  2. Is mitigation to a sustainable level highly expensive?
  3. Are obvious and unambiguous climatic disasters occurring?

These interact in a few different ways.

It is possible to imagine moderate levels of spending (1-5% of GDP) provided the first condition is satisfied. Especially important is the perception within industry that competitors elsewhere aren’t being given an advantage. Reduced opposition from business is probably necessary for a non-ideological, all-party consensus to emerge about the need to stabilize greenhouse gas concentrations through greatly reduced emissions and the enhancement of carbon sinks.

It is likewise possible to imagine medium to high levels of spending in response to obvious climatically induced disasters: for instance, 1m or more of sea level rise over the span of decades, causing serious disruption in developed and developing states alike. Such disasters would make the issue of climatic damage much more immediate: not something that may befall our descendants, but something violently inflicted upon the world in the present day.

Of course, if things get too bad, the prospects for cooperation are liable to collapse. Governments facing threats to their immediate security are unlikely to prioritize greenhouse gas emission reductions or cooperation to that end with other states.

We must hope that political leaders and populations will have the foresight to make cooperation work. It may also be hoped that the cost of mitigation will prove to be relatively modest. The issue of disasters is more ambiguous. It is probably better to have a relatively minor disaster obviously attributable to climate change, if it induces serious action, than the alternative of serious consequences being delayed until it is too late to stop abrupt or runaway change.

KombiKraftwerk

Detractors of renewable energy have always stressed the problems brought on by the inconstancy of wind and sun. At the same time, renewable boosters have stressed how storage and amalgamation of energy from different places can overcome that limitation. Now, a project in Germany is aiming to prove that this can be done. KombiKraftwerk will link 36 different power plants: wind, solar, hydro, and biogas. The pilot project aims to provide just 1/10,000th of German power while proving the concept of a purely renewable grid. To begin with, the system should power about 12,000 homes. The intent is to show that Germany could be powered entirely using renewable energy. Another aspect of the plan is to eventually generate enough energy to power carbon sequestration systems for industries where emissions are inevitable.
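To illustrate the balancing idea in the abstract, here is a toy dispatch sketch: variable wind and solar are taken as they come, while dispatchable biogas and hydro fill the remaining gap, up to assumed capacity limits. Every number is invented for illustration and has nothing to do with the actual project’s data.

```python
# Toy hour-by-hour balancing in the spirit of the combined-plant idea: take wind
# and solar as they come, then dispatch biogas and hydro to cover the remainder.
# All figures are invented illustrations, not project data.

demand = [90, 85, 80, 85, 100, 120, 130, 125]   # MW, eight sample hours
wind   = [40, 55, 60, 30,  20,  25,  35,  45]   # MW, variable
solar  = [ 0,  0, 10, 40,  60,  50,  20,   0]   # MW, variable
BIOGAS_MAX, HYDRO_MAX = 30, 40                  # MW, assumed dispatchable limits

for hour, (d, w, s) in enumerate(zip(demand, wind, solar)):
    gap = max(0, d - w - s)                     # shortfall after variable sources
    biogas = min(BIOGAS_MAX, gap)
    hydro = min(HYDRO_MAX, gap - biogas)
    unmet = gap - biogas - hydro
    status = "balanced" if unmet == 0 else f"short by {unmet} MW"
    print(f"hour {hour}: wind {w} + solar {s} + biogas {biogas} + hydro {hydro} -> {status}")
```

Even in this crude sketch, a few high-demand hours come up short, which is why storage and a diversity of sources matter so much to the concept.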

Particularly when you include hydro in the mix, maintaining a supply of renewable power that matches the minute-by-minute demand becomes feasible. With any luck, this undertaking will successfully highlight the possibility of moving to a climate-neutral and sustainable system of electricity generation at national scales and above.