Carbon pricing and GHG stabilization

Montreal graffiti

Virtually everyone acknowledges that the best way to reduce greenhouse gas emissions is to put a price on their production that someone has to pay. It doesn’t matter, in theory, whether that someone is the final consumer (the person who buys the iPod manufactured and shipped across the world), the manufacturer, or the companies that produced the raw materials. Wherever in the chain the cost is imposed, it will be addressed through the economic system just like any other cost: when something rises in price, people generally switch to substitutes or cut back their usage.

This all makes good sense for the transition from a world where carbon has no price at all and the atmosphere is treated as a greenhouse gas trash heap. What might become problematic is the economics of the situation when greenhouse gas emissions start to approach the point of stabilization. If the world is collectively limited to about 5 gigatonnes of emissions a year, a global population of 11 billion gets a budget of roughly half a tonne of carbon each.

Consider what such a cut involves. Right now, Canadian emissions per person are about 24.3 tonnes of CO2 equivalent, so cutting to about 0.5 is a dramatic change. While it may be possible to cut a large amount at a low price (carbon taxes or permits at up to $150 a tonne have been discussed), it stands to reason that people will be willing to pay ever more to avoid each marginal decrease in their carbon budget. Moving from 24.3 tonnes to 20 might mean carrying out some efficiency improvements. Moving from 20 to 10 might require a re-jigging of the national energy and transportation infrastructures, carbon sequestration, and other techniques. Moving from 10 to 0.5 may well require considerable personal sacrifice. It almost certainly rules out air travel.
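
As a toy illustration of that rising marginal cost, consider a hypothetical convex cost curve (all numbers below are invented for illustration, not real abatement estimates): each tonne cut costs more than the one before, so the final tonnes dominate the total bill.

```python
# Toy model of rising marginal abatement costs.
# All parameters are invented for illustration; they are not real estimates.

def marginal_cost(tonnes_remaining: float) -> float:
    """Hypothetical cost ($/tonne) of cutting one more tonne, rising
    sharply as the remaining personal carbon budget shrinks."""
    return 50.0 / max(tonnes_remaining, 0.5)

def cost_of_cut(start: float, end: float, step: float = 0.01) -> float:
    """Approximate total cost of cutting from `start` to `end` tonnes per person."""
    total, level = 0.0, start
    while level > end:
        total += marginal_cost(level) * step
        level -= step
    return total

for start, end in [(24.3, 20.0), (20.0, 10.0), (10.0, 0.5)]:
    print(f"{start} -> {end} t: roughly ${cost_of_cut(start, end):,.0f} per person")
```

Under these invented numbers, the first 4.3-tonne cut costs about $10, the next ten tonnes about $35, and the final nine and a half tonnes about $150: similar-sized cuts, sharply rising prices.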

The next factor to consider is the effect of economic inequality on all this. We can imagine many kinds of tax and trading systems: some might be confined to individual states, others to regions, and it is possible that such a scheme would eventually be global. With a global scheme, however, you need to consider the willingness of the relatively affluent to pay thousands or tens of thousands of dollars to maintain elements of their carbon-intensive lifestyles. This could mean that people of lesser means get squeezed even more aggressively. It could also create an intractable problem of fraud: a global system that transfers thousands of dollars on the basis of largely unmeasured changes in lifestyle could be very challenging to authenticate.

These kinds of problems lie in the relatively distant future. Moving to a national economy characterized by a meaningful carbon price is likely to take a decade; moving to a world of integrated carbon trading may take even longer. All that admitted, the increasing marginal value of carbon and the importance of economic inequality are things that those pondering such pricing schemes should begin to contemplate.

Index of climate posts

Fruit bar

For the last while, my aim on this blog has been both to entertain readers and to provide some discussion of all important aspects of the climate change problem. To facilitate the latter aim, I have established an index of posts on major climate change issues. Registered users of my blog can help to update it. Alternatively, people can use comments here to suggest sections that should be added or other changes.

The index currently contains all posts since I arrived in Ottawa. I should soon expand it to cover the entire span for which this blog has existed.

Problems with fusion that ITER means to solve

Building in Old Montreal

The fundamental problem with nuclear fusion as a mode of energy production is establishing a system that produces more power than it consumes. Heating and containing large volumes of deuterium-tritium plasma is an energy-intensive business. As such, the sheer size of the planned International Thermonuclear Experimental Reactor (ITER) is a big advantage. Just as it is easier to keep a huge cooler full of drinks cold than to keep a single can that way, a larger volume of plasma has less surface area relative to its total energy. As such, bigger reactors have a better chance of producing net power.
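
Here is a back-of-the-envelope sketch of that scaling argument, treating the plasma as a simple torus (the dimensions are illustrative stand-ins, not actual reactor specifications): scaling up all linear dimensions shrinks the surface-to-volume ratio proportionally.

```python
import math

# Surface area and volume of a torus with major radius R and minor radius r.
# The dimensions are illustrative stand-ins, not reactor specifications.

def torus_surface(R: float, r: float) -> float:
    return 4 * math.pi ** 2 * R * r

def torus_volume(R: float, r: float) -> float:
    return 2 * math.pi ** 2 * R * r ** 2

for scale in (1, 2, 4):
    R, r = 3.0 * scale, 1.0 * scale  # metres, scaled up uniformly
    ratio = torus_surface(R, r) / torus_volume(R, r)  # works out to 2 / r
    print(f"scale x{scale}: surface/volume = {ratio:.2f} per metre")
```

Doubling every dimension halves the surface-to-volume ratio, which is why a bigger machine loses proportionally less heat through its boundary.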

The other big problems that scientists and engineers anticipate are as follows:

  1. No previous reactor has sustained fusion for very long. The JT-60 reactor in Japan holds the record, at 24 seconds. Because ITER is meant to operate for between seven and fifteen minutes, it will produce a larger volume of very hot helium (the product of deuterium-tritium fusion). That helium could interfere with the fusing plasma. As such, it needs to be removed from the reactor somehow. ITER plans to use a carbon-coated structure called a divertor, at the bottom of the reactor, to try to do this. It is not known how problematic the helium will be, nor how effective the divertor will prove.
  2. Both the divertor and the blanket that surrounds the reactor will need to withstand the heat flux from a plasma at temperatures around 100 million degrees Celsius. They will also need to survive the presence of large amounts of radiation. It is uncertain whether the planned beryllium coatings will be adequate to deal with the latter. Prior to ITER’s construction, there are plans to test the candidate materials using a specially built particle accelerator at a new facility, probably to be built in Japan. This test facility could cost about $2.6 billion – one quarter of the total planned cost of ITER itself.
  3. Probably the least significant problem is converting the heat energy from the fusion reaction into electrical power. This is presumably just a matter of running pipes carrying a fluid through the blanket, then using the expansion of that fluid to drive turbines. While this should be a relatively straightforward step, it is worth noting that ITER will have no capacity to generate electricity, and will thus need to dissipate its planned output of about 500 megawatts by other means (a rough sketch of the arithmetic appears after this list).
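
Point 3 invites a rough sketch of the numbers (the 500 megawatt figure comes from the text above; the thermal-to-electric conversion efficiency is an assumed, typical steam-cycle value, not an ITER specification):

```python
# Rough arithmetic for point 3. The 500 MW figure comes from the text;
# the conversion efficiency is an assumed, typical steam-cycle value.

fusion_thermal_mw = 500.0    # planned thermal output, per the text
assumed_efficiency = 0.35    # assumption: typical steam-turbine conversion

potential_electric_mw = fusion_thermal_mw * assumed_efficiency

print(f"A generating plant converting at {assumed_efficiency:.0%} would yield "
      f"~{potential_electric_mw:.0f} MW of electricity")
print(f"ITER, with no generating equipment, must instead dissipate the full "
      f"{fusion_thermal_mw:.0f} MW as heat")
```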

None of these issues undermine the case for building ITER. Indeed, they are the primary justification for building the facility. If we already knew how to deal with these problems, we could proceed directly to building DEMO: the planned electricity-generating demonstration plant that is intended to be ITER’s successor.

The foolishness of the International Space Station

Montreal courthouse

On Tuesday, the space shuttle launched once again on a mission to add another piece to the International Space Station (ISS). As I have said before, it is a needlessly dangerous, unjustifiably expensive, and rather pointless venture. The science could be done equally well by robots, without risking human lives and without spending about $1.3 billion per launch (plus emitting all the greenhouse gases from the solid rocket boosters and related activities).

More and more, the ISS looks like a hopeless boondoggle. Its lifetime cost is now estimated at $130 billion, all to serve a self-fulfilling mandate: we need to put people into space to scientifically assess what happens when we put people into space. Furthermore, the window between the completion of the ISS in about 2012 and its potential abandonment as soon as 2016 is quite narrow. Robert Park may have summed up the whole enterprise best when he remarked:

“NASA must complete the ISS so it can be dropped into the ocean on schedule in finished form.”

Normally, I am a big supporter of science. I think funding the International Thermonuclear Experimental Reactor and the Large Hadron Collider is wise; these machines will perform valuable scientific research. Likewise, I support the robotic work NASA does – especially when it comes to scientists looking down on Earth from orbit and providing valuable research and services. I support the James Webb telescope. I also support the idea that NASA should have some decent plans for dealing with an anticipated asteroid or comet impact. The ISS, by contrast, is a combination of technical fascination lacking strategic purpose and pointless subsidies to aerospace contractors.

Of course, the Bush plan to send people to Mars is an even worse idea with higher costs, more risk, and even less value.

Studies backing successive IPCC reports

While it was obvious that the 2007 Fourth Assessment Report (4AR) of the Intergovernmental Panel on Climate Change (IPCC) would be more comprehensive than the 2001 Third Assessment Report (TAR), I was surprised by the extent of the increase, and by its breakdown across sectors:

Sector: studies assessed in TAR – studies assessed in 4AR
Cryosphere: 23 – 59
Hydrology and water resources: 23 – 49
Coastal processes and zones: 4 – 56
Aquatic biological systems: 14 – 117
Terrestrial biological systems: 46 – 178
Agriculture and forestry: 5 – 49
Human health: 5 – 51
Disasters and hazards: 3 – 18

Total: 95 – 577

While it is simplistic to equate the number of studies examined with the overall quality of the conclusions drawn, the large increase certainly reflects the amount of research now being devoted to climate change, as well as the level of resources it has been deemed appropriate to spend examining that body of scientific work.

These figures come from Cynthia Rosenzweig, a research scientist at NASA and a member of the IPCC’s second working group.

Materials science and transgenic animals

Oil spill analysis equipment

One of the most interesting ongoing developments in materials science involves borrowing biologically originated materials and processes. This is old news for people who follow science closely, but seems worth mentioning to others.

In the first instance, there is the copying of chemical tricks that exist in nature. People have speculated, for instance, about copying the wall-sticking abilities of gecko feet. By artificially producing structures similar to those on the feet, excellent non-chemical adhesives could be made. Gecko feet are sufficiently adhesive to hold several hundred times the weight of the animal. Furthermore, they can be attached and detached at will by altering the geometry of the setae, which produce adhesion through van der Waals forces.

In the second instance, people have been exploiting biological processes to produce existing things in more effective ways. A favourite approach is ‘pharming’, in which new genes are introduced into a species in order to turn it into a pharmaceutical factory. For instance, goats have been genetically engineered to produce an anti-clotting drug in their milk, which can then be extracted, purified, and used by humans. The drug, called ATryn, treats hereditary antithrombin deficiency: a condition that makes people especially vulnerable to deep-vein thrombosis. The principal benefits of using goats are financial, as described in The Economist:

Female goats are ideal transgenic “biofactories”, GTC claims, because they are cheap, easy to look after and can produce as much as a kilogram of human protein per year. All told, Dr Cox reckons the barn, feed, milking station and other investments required to make proteins using transgenic goats cost less than $10m—around 5% of the cost of a conventional protein-making facility. GTC estimates that it may be able to produce drugs for as little as $1-2 per gram, compared with around $150 using conventional methods.

Transgenic goats are also being used to produce spider silk on an industrial scale. That super-strong material could be used in everything from aircraft to bullet-proof vests. Different varieties of spider silk could be used to produce materials with varying strengths and elasticities.

While the former behaviour seems fairly unproblematic (we have been copying from nature for eons), the latter does raise some ethical issues. Certainly, it involves treating animals as a means to greater ends – though that, too, is an ancient practice. People have generally been more concerned about the dangers such techniques pose to people and the natural world: will the drugs or materials produced be safe? Will the transgenic animals escape and breed with wild populations? These are reasonable concerns, and they extend well beyond the genetic or materials expertise possessed by the scientists in question.

The potential of such techniques is undeniably considerable. One can only hope that a combination of regulation and good judgment will avoid nightmare scenarios of the kind described in Oryx and Crake. So far, our genetically modified creatures tend to be inferior to their natural competitors: according to Alan Weisman, virtually all of our crops and livestock would be eliminated by predation and competition within a few years, in the absence of human care and protection. It remains to be seen whether the same will be true of plants and animals that currently exist only in the imaginations of geneticists.

Cleaner coal

Coal is a witches’ brew of chemicals including hydrocarbons, sulphur, and other elements and molecules. Burning it is a dirty business, producing toxic and carcinogenic emissions including arsenic, selenium, cyanide, nitrogen oxides, particulate matter, and volatile organic compounds. Coal plants also produce large amounts of carbon dioxide, thus contributing to climate change. That said, some coal plant designs can considerably reduce both toxic and climatically relevant emissions. Given concerns about energy security – coupled with the vast coal reserves in the United States, United Kingdom, China, and elsewhere – giving some serious thought to cleaner coal technology is sensible.

Integrated Gasification Combined Cycle (IGCC) plants are the best existing option for a number of reasons. Rather than burning coal directly, they use heat to convert it into syngas, which is then burned. Such plants can also produce syngas from heavy petroleum residues (think of the oil sands) or biomass. One advantage of this approach is that it simplifies the use of carbon capture and storage (CCS) technologies, which seek to bury carbon emissions in stable geological formations: the carbon can be removed from the syngas prior to combustion, rather than having to be separated from hot flue gases on their way out the smokestack.

The problems with IGCC include a higher cost (perhaps $3,593 per kilowatt, compared with less than $1,290 for conventional coal) and lower reliability than simpler designs (this diagram reveals the complexity of IGCC systems). In the absence of effective carbon sequestration, such plants will also continue to emit very high levels of greenhouse gases. If carbon pricing policies emerge in states that make extensive use of coal for energy, both of these problems may be reduced to some extent. In the first place, having to pay for carbon emissions would reduce the relative cost disadvantage of lower-emissions technologies. In the second place, such pricing would spur the development and deployment of CCS.
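
To illustrate the first point, here is a toy comparison of per-megawatt-hour costs under various carbon prices (the base costs and emission intensities are illustrative assumptions, not engineering data):

```python
# Toy comparison of generation costs under a carbon price.
# Base costs and emission intensities are illustrative assumptions.

def cost_per_mwh(base_cost: float, tonnes_co2_per_mwh: float,
                 carbon_price: float) -> float:
    """Operating cost ($/MWh) once emissions must be paid for."""
    return base_cost + tonnes_co2_per_mwh * carbon_price

conventional = {"base": 40.0, "intensity": 1.0}   # assumed ~1 t CO2/MWh
igcc_ccs = {"base": 65.0, "intensity": 0.15}      # assumed: most CO2 captured

for price in (0, 25, 50, 100):  # $/tonne CO2
    conv = cost_per_mwh(conventional["base"], conventional["intensity"], price)
    igcc = cost_per_mwh(igcc_ccs["base"], igcc_ccs["intensity"], price)
    print(f"${price:>3}/t: conventional ${conv:.0f}/MWh vs IGCC with CCS ${igcc:.0f}/MWh")
```

With these invented numbers, conventional coal wins at a zero carbon price, but the ranking flips somewhere around $30 a tonne: the essence of how a carbon price changes the relative economics.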

One way or another, it will eventually be necessary to leave virtually all of the carbon that is currently trapped in coal in the ground, rather than letting it accumulate in the atmosphere. Whether that is done by leaving the coal itself underground or by returning the carbon once the energy has been extracted is not necessarily a matter of huge environmental importance (though coal mining is a hazardous business that produces lots of contamination). That said, CCS remains a somewhat speculative and unproven technology. ‘Clean coal’ advocates will be on much stronger ground once a single electricity-generating, economically viable, carbon-sequestering power plant has been constructed.

Poison-absorbing plants

A recent article in Scientific American describes the use of transgenic plants to remove toxins from contaminated sites. The plants have genes for toxin and carcinogen metabolism (for instance, using the enzyme cytochrome P450-3A) inserted into their DNA. The technique has been tested with plants intended to address trichloroethylene, chloroform, carbon tetrachloride, vinyl chloride, and benzene contamination. Such plants have also shown promise in removing residual concentrations of the explosive RDX from soil on test ranges. At present, there is sometimes no choice but to scoop up huge amounts of contaminated soil and put it into landfills; plants able to separate the toxins from the soil could facilitate that process, as well as reduce costs.

The article is not entirely clear on whether the plants simply absorb the chemicals, becoming contaminated by them in turn, or whether they actually break them down. In the former case, they might be useful for concentrating air, water, and soil contaminants into plant matter that can then be disposed of as hazardous waste. In the latter case, they could perform remediation without the need for such careful treatment of their remains. Another question is how the plants would deal with combinations of chemicals, such as might be found at actual contaminated sites.

All told, this seems a promising potential use for biotechnology. The world is certainly well supplied with contaminated sites, and having more cost-effective means of reclaiming them could be a boon to both nature and human health. It remains to be seen whether these limited trials can be scaled up and made cost-effective for commercial or governmental use.

Soggy runways

While air travel currently represents only a relatively small fraction of total emissions, greenhouse gases from it are growing rapidly. That said, one largely unanticipated check on their long-term rise may exist, if the sea level rise associated with the disintegration of the Greenland or West Antarctic ice sheets becomes manifest.

This clever Google Maps mashup will show you what I mean:

Adding 7m to global sea levels (consistent with the melting of all of Greenland, all of West Antarctica, or half of each) would definitely drown a lot of runways. The Tokyo and London airports seem likely to be high and dry, though the cities themselves would be far from it.
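
The calculation behind such a mashup is simple: compare each runway’s elevation against the assumed rise. A minimal sketch, using hypothetical placeholder elevations rather than surveyed values:

```python
# Flag runways that would sit below a given sea level rise.
# The elevations here are hypothetical placeholders, not surveyed values.

SEA_LEVEL_RISE_M = 7.0

runway_elevations_m = {
    "Coastal Airport A": 3.0,
    "Coastal Airport B": 5.5,
    "Inland Airport C": 40.0,
}

for airport, elevation in sorted(runway_elevations_m.items()):
    status = "submerged" if elevation < SEA_LEVEL_RISE_M else "above water"
    print(f"{airport}: {elevation} m -> {status}")
```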