Blog Archive

Showing posts with label Andrew J. Weaver. Show all posts

Friday, September 6, 2013

Why trust climate models? It’s a matter of simple science

How climate scientists test, test again, and use their simulation tools.

by , Ars Technica, September 5, 2013

Model simulation showing average ocean current velocities and sea surface temperatures near Japan. IPCC
Talk to someone who rejects the conclusions of climate science and you’ll likely hear some variation of the following: “That’s all based on models, and you can make a model say anything you want.” Often, they'll suggest the models don't even have a solid foundation of data to work with—garbage in, garbage out, as the old programming adage goes. But how many of us (anywhere on the opinion spectrum) really know enough about what goes into a climate model to judge what comes out?

Climate models are used to generate projections showing the consequences of various courses of action, so they are relevant to discussions about public policy. Of course, being relevant to public policy also makes a thing vulnerable to the indiscriminate cannons on the foul battlefield of politics.

Skepticism is certainly not an unreasonable response when first exposed to the concept of a climate model. But skepticism means examining the evidence before making up one’s mind. If anyone has scrutinized the workings of climate models, it’s climate scientists—and they are confident that, just as in other fields, their models are useful scientific tools.

It’s a model, just not the fierce kind

Climate models are, at heart, giant bundles of equations—mathematical representations of everything we’ve learned about the climate system. Equations for the physics of absorbing energy from the Sun’s radiation. Equations for atmospheric and oceanic circulation. Equations for chemical cycles. Equations for the growth of vegetation. Some of these equations are simple physical laws, but some are empirical approximations of processes that occur at a scale too small to be simulated directly.

Cloud droplets, for example, might be a couple hundredths of a millimeter in diameter, while the smallest grid cells that are considered in a model may be more like a couple hundred kilometers across. Instead of trying to model individual droplets, scientists instead approximate their bulk behavior within each grid cell. These approximations are called “parameterizations.”
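To make the idea concrete, here is a deliberately minimal sketch of a humidity-based cloud-cover rule in the spirit of classic Sundqvist-style schemes. Every number and name below is illustrative, not taken from any production model:

```python
import math

def cloud_fraction(rh, rh_crit=0.8):
    # Diagnose grid-cell cloud cover from grid-mean relative humidity:
    # no cloud below a critical humidity, full cover at saturation,
    # and a smooth increase in between. Real schemes are far more elaborate.
    rh = min(max(rh, 0.0), 1.0)
    if rh <= rh_crit:
        return 0.0
    return 1.0 - math.sqrt((1.0 - rh) / (1.0 - rh_crit))

# A grid cell at 95 percent mean humidity comes out roughly half-covered.
print(cloud_fraction(0.95))
```

The point is not the particular formula but the strategy: droplet-scale physics is replaced by a bulk rule expressed in the variables the model actually carries.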

Connect all those equations together and the model operates like a virtual, rudimentary Earth. So long as the models behave realistically, they allow scientists to test hypotheses as well as make predictions testable by new observations.

Some components of the climate system are connected in a fairly direct manner, but some processes are too complicated to think through intuitively, and climate models can help us explore the complexity. So it's possible that shrinking sea ice in the Arctic could increase snowfall over Siberia, pushing the jet stream southward, creating summer high pressures in Europe that allow India’s monsoon rains to linger, and on it goes… It's hard to examine those connections in the real world, but it's much easier to see how things play out in a climate model. Twiddle some knobs, run the model. Twiddle again, see what changes. You get to design your own experiment—a rare luxury in some of the Earth sciences.
Diagram of software architecture for the Community Earth System Model. Coupled models use interacting components simulating different parts of the climate system. Bubble size represents the number of lines of code in each component of this particular model. Kaitlin Alexander, Steve Easterbrook
In order to gain useful insights, we need climate models that behave realistically. Climate modelers are always working to develop an ever more faithful representation of the planet’s climate system. At every step along the way, the models are compared to as much real-world data as possible. They’re never perfect, but these comparisons give us a sense for what the model can do well and where it veers off track. That knowledge guides the use of the model, in that it tells us which results are robust and which are too uncertain to be relied upon.

Andrew Weaver, a researcher at the University of Victoria, uses climate models to study many aspects of the climate system and anthropogenic climate change. Weaver described the model evaluation process as including three general phases. First, you see how the model simulates a stable climate with characteristics like the modern day. “You basically take a very long run, a so-called ‘control run,'” Weaver told Ars. “You just do perpetual present-day type conditions. And you look at the statistics of the system and say, 'Does this model give me a good representation of El Niño? Does it give me a good representation of Arctic Oscillation? Do I see seasonal cycles in here? Do trees grow where they should grow? Is the carbon cycle balanced?' ”
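A cartoonish version of that statistical check might look like the following, where both the "control run" data and the acceptance range are invented purely for illustration:

```python
import math

# Fake ten-year monthly temperature series standing in for a control run,
# built to contain an annual cycle (a real one would come from the model).
control_run = [15.0 + 10.0 * math.sin(2 * math.pi * m / 12) for m in range(120)]

# Average each calendar month across the ten years, then measure the
# seasonal cycle's amplitude and compare it against an observed range.
monthly_means = [sum(control_run[m::12]) / 10 for m in range(12)]
amplitude = max(monthly_means) - min(monthly_means)
assert 15.0 < amplitude < 25.0  # does the model's seasonal cycle look right?
```

Real evaluations apply checks like this to dozens of quantities at once, from El Niño statistics to the carbon budget.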

Next, the model is run in changing conditions, simulating the last couple centuries using our best estimates of the climate “forcings” (or drivers of change) at work over that time period. Those forcings include solar activity, volcanic eruptions, changing greenhouse gas concentrations, and human modifications of the landscape. “What has happened, of course, is that people have cut down trees and created pasture, so you actually have to artificially come in and cut down trees and turn it into pasture, and you have to account for this human effect on the climate system,” Weaver said.

The results are compared to observations of things like changing global temperatures, local temperatures, and precipitation patterns. Did the model capture the big picture? How about the fine details? Which fine details did it simulate poorly—and why might that be?
Comparison of observed (top) and simulated (bottom) average annual precipitation between 1980 and 1999. IPCC
At this point, the model is set loose on interesting climatic periods in the past. Here, the observations are fuzzier. Proxy records of climate, like those derived from ice cores and ocean sediment cores, track the big-picture changes well but can’t provide the same level of local detail we have for the past century. Still, you can see if the model captures the unique characteristics of that period and whatever regional patterns we’ve been able to identify.

This is what models go through before researchers start using them to investigate questions or provide estimates for summary reports like those produced for the Intergovernmental Panel on Climate Change (IPCC).

Coding the climate

Some voices in the public debate over climate science have been critical of the fact that there is no standardized, independent testing protocol for climate models like those used for commercial and engineering applications. Climate scientists have responded that climate models differ enough from that kind of software to make such an “independent verification and validation” process unworkable.

Steve Easterbrook, a professor of computer science at the University of Toronto, has been studying climate models for several years. “I'd done a lot of research in the past studying the development of commercial and open source software systems, including four years with NASA studying the verification and validation processes used on their spacecraft flight control software,” he told Ars.

When Easterbrook started looking into the processes followed by climate modeling groups, he was surprised by what he found. 

“I expected to see a messy process, dominated by quick fixes and muddling through, as that's the typical practice in much small-scale scientific software. What I found instead was a community that takes very seriously the importance of rigorous testing, and which is already using most of the tools a modern software development company would use (version control, automated testing, bug tracking systems, a planned release cycle, etc.).”

“I was blown away by the testing process that every proposed change to the model has to go through,” Easterbrook wrote. 

“Basically, each change is set up like a scientific experiment, with a hypothesis describing the expected improvement in the simulation results. The old and new versions of the code are then treated as the two experimental conditions. They are run on the same simulations, and the results are compared in detail to see if the hypothesis was correct. Only after convincing each other that the change really does offer an improvement is it accepted into the model baseline.”
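As a loose analogy only (all names and numbers below are invented stand-ins, not real model code), that old-versus-new comparison might be sketched as:

```python
def run_simulation(step_fn, forcing, state=0.0):
    # A stand-in "model": apply one process update per time step.
    history = []
    for f in forcing:
        state = step_fn(state, f)
        history.append(state)
    return history

def old_scheme(temp, forcing):   # existing code: slow relaxation to forcing
    return temp + 0.1 * (forcing - temp)

def new_scheme(temp, forcing):   # proposed change: faster relaxation
    return temp + 0.2 * (forcing - temp)

# The two code versions are the two experimental conditions, run on
# identical inputs; the "hypothesis" is that the new version ends up
# closer to the target. Only then would the change be accepted.
forcing = [1.0] * 50
old = run_simulation(old_scheme, forcing)
new = run_simulation(new_scheme, forcing)
assert abs(new[-1] - 1.0) < abs(old[-1] - 1.0)
```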

Easterbrook spent two months at the UK Met Office Hadley Centre, observing and describing the operations of the climate modeling group (which is about 200 scientists strong). He looked at everything from code efficiency to debugging to the development process. He couldn’t find much to critique, concluding that “it is hard to identify potential for radical improvements in the efficiency of what is a ‘grand challenge’ science and software engineering problem.”

Easterbrook has argued against the idea that an independent verification and validation protocol could usefully be applied to climate models. One problem he sees is that climate models are living scientific tools that are constantly evolving rather than pieces of software built to achieve a certain goal. There is, for the most part, no final product to ship out the door. There's no absolute standard to compare it against either.

To give one example, adding more realistic physics or chemistry to some component of a model sometimes makes simulations fit some observations less well. Whether you add it or not then depends on what you're trying to achieve. Is the primary test of the model to match certain observations or to provide the most realistic possible representation of the processes that drive the climate system? And which observations are the most important to match? Patterns of cloud cover? Sea surface temperature?

As more features have been added, current models have become much more sophisticated than models were 20 years ago, so the standards by which they're judged have tightened. It's entirely possible that earlier models would have failed testing that today’s models would pass. But that doesn't mean that the older models were useless; they may have just gotten fewer physical processes right or had a much lower resolution.

If, as Easterbrook argues, the models are essentially manifestations of the scientific community’s best available knowledge, there’s already a process in place to evaluate them—science. Experiments are replicated by other groups using their own models. Individual peer-reviewed studies are considered in the context of the accumulated knowledge of climate science. Climate models are not so different from other methods of inquiry that a new scientific process must be invented especially for them.

Firing up the wayback machine

The individual researchers who are part of these modeling efforts work on very different aspects of the model, and each requires a slightly different way of doing things. Bette Otto-Bliesner works on the Community Earth System Model at the National Center for Atmospheric Research (which recently opened a new supercomputing center). Her research focuses on using climate models to understand past climate, working out the mechanisms that drove the events recorded in things like ocean sediment cores. “My research goal is to understand the uncertainties in the climate and Earth system responses to forcings using past time periods to provide more confidence in our projections of future change,” Otto-Bliesner told Ars.

Proxy records of climate from cores of ice or ocean sediments are limited to providing information about the geographic area from which they were collected, so climate models can help fill in the rest of the global picture. A model simulation of actual events—say, an immense ice-dammed lake draining into the North Atlantic and disrupting ocean circulation—can be compared to a network of proxy records to see if the simulated climate impact is consistent with what the proxies show. If the match is poor, then perhaps the observed change in climate was caused by something else.

Otto-Bliesner’s group is working to take this comparison one step further by having the model simulate the processes that create the proxy records as well. Instead of comparing the model to the interpretation of the proxy record data (such as temperature changes inferred from shifting isotope ratios), that data could be compared directly to a virtual version of the isotopes themselves, one produced by the model.

These paleoclimate simulations can serve to evaluate a model as well. The model can be run for interesting time periods, like the end of the last ice age, to see how well it simulates changes in temperature and ocean circulation. “We want to keep our paleo-simulations [separate] as an independent test of our models to changed forcings, so they are not included in the development process,” Otto-Bliesner told Ars. Since the climate was very different at times in the past, these tests help illuminate a model’s strengths and weaknesses.
Snapshot from an experiment simulating the last 22,000 years. In the graph at the bottom, the dark line represents simulated surface temperature over Greenland and the lighter line shows…

Setting the bar

Gavin Schmidt, a climate researcher at the NASA Goddard Institute for Space Studies, is more involved in the development itself. “I explore issues like how one evaluates [climate] models, how comparisons between models and observations should be done, and how one builds credibility in predictions,” he told Ars.

Improving the model means better simulating physical processes, Schmidt says, which doesn’t necessarily improve the large-scale match with every set of observations. “There are always observational datasets that show a mismatch to the model—either regionally or in time,” Schmidt explained. “Some of these mismatches are persistent (i.e., we haven't found any way to alleviate them); some are related to issues/parameters that we have more of a handle on, and so they can be reduced in the next iteration. One problem is that in fixing one problem one often makes something else worse. Therefore, it is a balancing act that each model center does a little differently.”

One surprisingly common misconception about climate models is that they’re just exercises in curve-fitting: the global average temperature record is fed into the model, which matches that trend and spits out a simulation just like it. In this (mistaken) view, a model that compares well with reality is a foregone conclusion, so the match demonstrates nothing about whether climate models can be trusted to usefully project future trends. But this line of thinking is wrong for several reasons.

There’s obviously more to a climate model than a graph of global average temperature. Some parameterizations—those stand-ins for processes that occur at scales finer than a grid cell—are tuned to match observations. After all, they are attempts to describe a process in terms of its large-scale results. But successful parameterizations aren’t used as a gauge of how well the model is reproducing reality. “Obviously, since these factors are tuned for, they don't count as a model success. However, the model evaluations span a much wider and deeper set of observations, and when you do historical or paleoclimate simulations, none of the data you are interested in has been tuned for,” Schmidt told Ars.

Example output showing average annual surface temperature from the NASA GISS ModelE.

Why so cirrus?

Many of the most important parameterizations involve the complex behavior of clouds. Representing these processes effectively in a climate model is a key challenge, not just because they happen at scales far smaller than grid cells but because clouds play such a big role in the climate system. Storm patterns affect regional climate in many ways, and the way clouds respond to a warming climate could either enhance or partially offset the temperature change.

Tony Del Genio, another researcher at the NASA Goddard Institute for Space Studies, works on improving the way models simulate clouds. “The real world is more complicated than any model of it,” Del Genio told Ars. “Given the limited computing and human resources, we have to prioritize. We try to anticipate which processes that are missing from the model might be most important to include in the next-generation version (not everything that happens in the atmosphere is important to climate).”

“Once we identify a physical process we want to add or improve, we start with whatever fundamental understanding of the process that we have, and then we try to develop a way to approximately represent it in terms of the variables in the model (temperature, humidity, etc.) and write computer code to represent that,” Del Genio said. “We then run the model with the new process in it and we look for two things: whether the process as we have portrayed it behaves the way it does in the real world and whether or not it makes some aspect of the model's climate more realistic. We do this by comparison to observations, either field experiment, satellite, or surface remote sensing observations, or by comparing to fine-scale models that simulate individual cloud systems.”

Del Genio says that while modelers used to focus more on whether the model simulations looked like the average conditions for an area, they’ve learned that other types of behavior—like large-scale weather patterns—are better indicators of the usefulness of a model for projecting into the future. “A good example of that is something called the Madden-Julian Oscillation (MJO for short), which most people in the US have probably never heard of,” Del Genio said. “The MJO causes alternating periods of very rainy and then mostly clear weather over periods of a month or so over the Indian Ocean and in southeast Asia and is very important to people in that part of the world. It also affects winter rainfall in the western US. It turns out that whether a model simulates the MJO or not depends strongly on how one represents the clouds that develop into thunderstorms in the model, so we observe it closely and try hard to get it right.”

Del Genio also gets to apply his knowledge and skills to other planets. Using the extremely limited information we have about the atmospheres of other planets, models can help work out how they behave. “For other planets, we are still asking basic questions about how a given planet's atmosphere works—how fast do its winds blow and why, does it have storms like those on Earth, are those storms made of water clouds like on Earth, and why one planet differs from another,” Del Genio said.

Ice, on the rocks

While Tony Del Genio has his head in the clouds and outward into the Solar System beyond, Penn State glaciologist Richard Alley stands on ice sheets miles thick, thinking about what’s going on beneath his feet. Instead of trying to model the whole climate system, he’s focused on the behavior of valley glaciers and ice sheets. “An ice sheet is a two-mile-thick, one-continent-wide pile of old snow squeezed to ice under the weight of more snow and spreading under its own weight,” Alley told Ars. “The impetus for flow is essentially the excess pressure inside the ice compared to outside, and it's usually quantified as being the product of the ice density, gravitational acceleration, thickness of ice above the point you're talking about, and surface slope.”
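That quantity can be written directly from Alley's description; the snippet below just multiplies the four factors he lists (using a textbook value for the density of glacier ice, with the other numbers arbitrary):

```python
RHO_ICE = 917.0  # density of glacier ice, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def driving_stress(thickness_m, surface_slope):
    # Density x gravity x ice thickness x surface slope, in pascals.
    return RHO_ICE * G * thickness_m * surface_slope

# A 3 km-thick ice sheet with a gentle surface slope of 0.001:
tau = driving_stress(3000.0, 0.001)  # on the order of 10^4 Pa
```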

Ice sheet models use the equations that describe that flow of ice to simulate how the ice sheet changes over time in response to outside factors. The size of an ice sheet, like a bank account, is determined by the balance of gains and losses. Increase the amount of melting going on at the edges of the ice sheet and it will shrink. Increase the amount of snowfall over the cold, central region of the ice sheet and it will grow. Lubricate the base of the ice sheet with liquid water, and it may flow faster to the sea, causing an overall loss of ice.
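The bank-account analogy reduces to a one-line budget. This toy version (invented numbers, yearly steps) shows an ice sheet shrinking when losses outpace gains:

```python
def step_ice_volume(volume, accumulation, melt, discharge):
    # Yearly budget: gains (snowfall) minus losses (melting and ice
    # flowing out to sea); volume can't go below zero.
    return max(volume + accumulation - melt - discharge, 0.0)

volume = 100.0
for _ in range(10):  # losses exceed gains, so the ice sheet shrinks
    volume = step_ice_volume(volume, accumulation=2.0, melt=2.5, discharge=0.5)
assert volume < 100.0
```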

These models are complex and detailed enough that they’re usually run on their own rather than within a climate model that is already busy trying to handle the rest of the planet. Depending on the experiment being run with the model, climate conditions simulated by another model might be imported or a simpler, pre-determined scenario might suffice.

Like global climate models, ice sheet models can also be evaluated against what we know about the past. “Does the model put ice in places that ice was known to have been and not in places where ice was absent?” Alley said. “Are the fluctuations of ice in response to orbital forcing in the past configuration consistent with the reconstructed changes in sea level based on coastal indicators or isotopic composition of the ocean as inferred from ratios in particular shells in sediment cores?”

All this work eventually contributes to our understanding of how the ice sheet is likely to behave in the future. “For these projections to be reliable, we want to see similar behavior in a range of models, from simple to complex, run by different groups, and to understand physically why the models are producing the results they do; we're especially confident if the paleoclimatic record shows a similar response to similar forcings in the past, and if we see the projected behavior emerging now in response to the recent human and natural forcings,” Alley said. “With all four—physical understanding, agreement in a range of models, observed in paleo and emerging now—we're pretty confident; with fewer, less so.”

Along with providing better estimates of how ice sheets will contribute to sea level rise, ice sheet models also help generate research questions. By revealing the biggest sources of uncertainty, models can point to the types of measurements and research that will yield the greatest bang for the buck.

Simulation of ice sheet elevation at the peak of the last ice age using the Parallel Ice Sheet Model and the ECHAM5 climate model. Florian Ziemen, Christian Rodehacke, Uwe Mikolajewicz (Max Planck Institute for Meteorology)

Community service

There’s another way in which these climate models are probed—by comparing them with each other. Since there are so many groups of researchers independently building their own models to approximate the climate system, the similarities and differences of their simulations can be illuminating.

Observational data is necessarily limited, but every single thing in a model can be examined. That makes model-to-model comparison more of an apples-to-apples affair when they’re run using the same inputs (like greenhouse gas emissions scenarios). The cause of a poor match between some portion of a model and reality isn’t always obvious, whereas it could jump out when the results are compared to those produced by another model.

There are many such “model intercomparison projects,” including ones focused on atmospheric models, paleoclimate simulations, or geoengineering research. The largest is the Coupled Model Intercomparison Project (CMIP), which has become an important resource for the Intergovernmental Panel on Climate Change reports. What started in 1995 as a simple project blossomed into an enormously useful organizing force for an abundance of research.

Each phase of the project includes a set of experiments chosen by the modeling community. In the latest round, for example, the models have been investigating short-term, decadal predictions, the way clouds change in a warming climate, and a new technique for making comparisons between model results and atmospheric data from satellites.

Apart from helping research groups improve their models, CMIP also makes climate simulations from all the models involved accessible to other researchers. Interested in the future behavior of Himalayan glaciers? Or the economic impact of changes in precipitation over the US? Simulations from a variety of models for a range of emissions scenarios are conveniently available in one place and in standardized formats. In a way, that coordination also increases the value of the studies that use this data. If three different studies on species migration caused by climate change each used arbitrarily different scenarios for the future, comparing their results could be more difficult.

The most visible product of CMIP has probably been its contribution to the IPCC reports. When the reports show model ensembles (many simulations averaged together), they’re pulling from the CMIP collection. Rather than choosing a preferred model, the IPCC essentially works from the average of all of them, while the range of their results is used as an indicator of uncertainty. In this way, the work of independent modeling groups around the world is aggregated to help inform policy makers.
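In miniature, the ensemble summary works like this (the "model results" are made-up numbers, not CMIP output):

```python
import statistics

# Hypothetical warming-by-2100 projections (deg C) from several models,
# all run under the same emissions scenario:
model_runs = [2.1, 2.6, 3.0, 2.4, 2.8]

ensemble_mean = statistics.mean(model_runs)   # the central estimate
spread = (min(model_runs), max(model_runs))   # a rough uncertainty range
print(f"ensemble mean {ensemble_mean:.2f} deg C, range {spread}")
```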

Average (red line) of 58 model simulations (yellow lines) of global average temperature compared to observations (black line). IPCC

No crystal ball—but no magic 8 ball, either

If you only tune in to public arguments about climate change or read about the latest study that uses climate models, it’s easy to lose sight of the truly extraordinary achievement those models represent. As Andrew Weaver told Ars, “What is so remarkable about these climate models is that it really shows how much we know about the physics and chemistry of the atmosphere, because they’re ultimately driven by one thing—that is, the Sun. So you start with these equations, and you start these equations with a world that has no moisture in the atmosphere that just has seeds on land but has no trees anywhere, that has an ocean that has a constant temperature and a constant amount of salt in it, and it has no sea ice, and all you do is turn it on. [Flick on] the Sun, and you see this model predict a system that looks so much like the real world. It predicts storm tracks where they should be, it predicts ocean circulation where it should be, it grows trees where it should, it grows a carbon cycle—it really is remarkable.”

But climate scientists know models are just scientific tools—nothing more. In studying the practices of climate modeling groups, Steve Easterbrook saw this firsthand. “One of the most common uses of the models is to look for surprises—places where the model does something unexpected, primarily as a way of probing the boundaries of what we know and what we can simulate," he said. "The models are perfectly suited for this. They get the basic physical processes right but often throw up surprises in the complex interactions between different parts of the Earth system. It is in these areas where the scientific knowledge is weakest. So the models help guide the scientific process."

“So I have tremendous respect for what the models are able to do (actually, I'd say it's mind-blowing), but that's a long way from saying that any one model can give accurate forecasts of climate change in the future on any timescale," Easterbrook continued. “I'm particularly impressed by how much this problem is actively acknowledged and discussed in the climate modeling community and how cautious the modelers are in working to avoid any possible over-interpretation of model results.”

“One of the biggest sources of confidence in the models is that they give results that are broadly consistent with one another (despite some very different scientific choices in different models), and they give results that are consistent with the available data and current theory,” Easterbrook said. And while they're being developed, the rest of the broad field of climate science is hard at work gathering more data and developing our theoretical understanding of the climate system—information that will inform the next generation of models.

The guiding principle in modeling of any kind was summarized by George E.P. Box when he wrote that “all models are wrong, but some are useful.” Climate scientists work hard to ensure that their models are useful, whether to understand what happened in the past or what could happen in the future.

Every projection showing multiple scenarios for future greenhouse gas emissions illustrates the present moment as a constantly shifting crossroads—the point where all future paths diverge, with their course determined using climate models. Armed with that map, we get to decide which of the possible paths we are going to make reality. The more we understand about the climate system and the more realistically climate models behave, the more detailed that map becomes. There’s always more to work out, but we’ve already advanced well past the stage where we need to ask for directions.

http://arstechnica.com/science/2013/09/why-trust-climate-models-its-a-matter-of-simple-science/

Thursday, January 10, 2013

"Significant contribution to climate warming from the permafrost carbon feedback," by Andrew H. MacDougall, Christopher A. Avis and Andrew J. Weaver, Nature Geoscience, 5 (2012); doi:10.1038/ngeo1573

Nature Geoscience, 5(10) (2012) 719–721; doi:10.1038/ngeo1573, published online

Significant contribution to climate warming from the permafrost carbon feedback

Abstract

Permafrost soils contain an estimated 1,700 Pg of carbon, almost twice the present atmospheric carbon pool (ref. 1). As permafrost soils thaw owing to climate warming, respiration of organic matter within these soils will transfer carbon to the atmosphere, potentially leading to a positive feedback (ref. 2). Models in which the carbon cycle is uncoupled from the atmosphere, together with one-dimensional models, suggest that permafrost soils could release 7–138 Pg carbon by 2100 (refs 3, 4). Here, we use a coupled global climate model to quantify the magnitude of the warming generated by the feedback between permafrost carbon release and climate. According to our simulations, permafrost soils will release between 68 and 508 Pg carbon by 2100. We show that the additional surface warming generated by the feedback between permafrost carbon and climate is independent of the pathway of anthropogenic emissions followed in the twenty-first century. We estimate that this feedback could result in an additional warming of 0.13–1.69 °C by 2300. We further show that the upper bound for the strength of the feedback is reached under the less intensive emissions pathways. We suggest that permafrost carbon release could lead to significant warming, even under less intensive emissions trajectories.

http://www.nature.com/ngeo/journal/v5/n10/abs/ngeo1573.html

Sunday, November 11, 2012

"Stability of the Atlantic meridional overturning circulation: A model intercomparison," by Andrew Weaver et al., GRL 39 (2012); doi:10.1029/2012GL053763


Geophysical Research Letters, 39 (2012) L20709; doi:10.1029/2012GL053763
Stability of the Atlantic meridional overturning circulation: A model intercomparison
Key Points
  • All climate models project very similar behavior during the 21st century
  • No model exhibits an abrupt change of the MOC
  • More than half of the models are in the bistable regime (i.e., not overly stable)
Andrew J. Weaver (School of Earth and Ocean Sciences, University of Victoria, Victoria, BC, Canada), Jan Sedláček (Institute for Atmospheric and Climate Science, ETH, Zurich, Switzerland), Michael Eby (School of Earth and Ocean Sciences, University of Victoria, Victoria, BC, Canada), Kaitlin Alexander (School of Earth and Ocean Sciences, University of Victoria, Victoria, BC, Canada), Elisabeth Crespin (Georges Lemaître Centre for Earth and Climate Research, Earth and Life Institute, Université Catholique de Louvain, Louvain-La-Neuve, Belgium), Thierry Fichefet (Georges Lemaître Centre for Earth and Climate Research, Earth and Life Institute, Université Catholique de Louvain, Louvain-La-Neuve, Belgium), Gwenaëlle Philippon-Berthier (Georges Lemaître Centre for Earth and Climate Research, Earth and Life Institute, Université Catholique de Louvain, Louvain-La-Neuve, Belgium), Fortunat Joos (Climate and Environmental Physics, Physics Institute, University of Bern, Bern, Switzerland, Oeschger Centre for Climate Change Research, University of Bern, Bern, Switzerland), Michio Kawamiya (Research Institute for Global Change, JAMSTEC, Yokohama, Japan), Katsumi Matsumoto (Department of Earth Sciences, University of Minnesota, Twin Cities, Minneapolis, MN, USA), Marco Steinacher (Climate and Environmental Physics, Physics Institute, and the Oeschger Centre for Climate Change Research, University of Bern, Bern, Switzerland), Kaoru Tachiiri (Research Institute for Global Change, JAMSTEC, Yokohama, Japan), Kathy Tokos (Department of Earth Sciences, University of Minnesota, Twin Cities, Minneapolis, MN, USA), Masakazu Yoshimori (Atmosphere and Ocean Research Institute, University of Tokyo, Tokyo, Japan), Kirsten Zickfeld (Department of Geography, Simon Fraser University, Vancouver, BC, Canada)

Abstract 
The evolution of the Atlantic Meridional Overturning Circulation (MOC) in 30 models of varying complexity is examined under four distinct Representative Concentration Pathways. The models include 25 Atmosphere-Ocean General Circulation Models (AOGCMs) or Earth System Models (ESMs) that submitted simulations in support of the 5th phase of the Coupled Model Intercomparison Project (CMIP5) and 5 Earth System Models of Intermediate Complexity (EMICs). While none of the models incorporated the additional effects of ice sheet melting, they all projected very similar behaviour during the 21st century. Over this period the strength of the MOC reduced by a best estimate of 22% (18%–25%; 5%–95% confidence limits) for RCP2.6, 26% (23%–30%) for RCP4.5, 29% (23%–35%) for RCP6.0 and 40% (36%–44%) for RCP8.5. Two of the models eventually realized a slow shutdown of the MOC under RCP8.5, although no model exhibited an abrupt change of the MOC. Through analysis of the freshwater flux across 30°–32°S into the Atlantic, it was found that 40% of the CMIP5 models were in a bistable regime of the MOC for the duration of their RCP integrations. The results support previous assessments that it is very unlikely that the MOC will undergo an abrupt change to an off state as a consequence of global warming.
Received 4 September 2012; accepted 24 September 2012; published 24 October 2012.
Weaver, A. J., et al. (2012), Stability of the Atlantic meridional overturning circulation: A model intercomparison, Geophys. Res. Lett., 39, L20709, doi:10.1029/2012GL053763.

Thursday, September 20, 2012

Prof. Andrew Weaver to run for Greens in Oak Bay-Gordon Head, British Columbia


Climate scientist Andrew Weaver to run for B.C. Greens in Oak Bay-Gordon Head

Climate scientist Andrew Weaver has announced he'll run for the B.C. Green Party in Oak Bay-Gordon Head. Photograph by: Bruce Stotesbury, Times Colonist


The Green Party of B.C. has landed what it expects to be the first of several high-profile candidates in the upcoming provincial election, and he’s a local.

Andrew Weaver is most famous as a climate scientist. In 2007, he shared the Nobel Peace Prize alongside Al Gore and fellow members of the Intergovernmental Panel on Climate Change. The University of Victoria professor is considered one of the leading, and most outspoken, experts on global warming.

He was also born and raised in Victoria, went to high school in Oak Bay and graduated from UVic. He lives in Gordon Head with his family. This morning, Weaver announced he’s jumping into the political ring and will run for the Green Party in Oak Bay-Gordon Head in 2013.

“I’ve been following what’s happening in B.C. and I’m concerned about an NDP landslide -- that the environment will fall to the wayside,” Weaver said Thursday morning. “But it’s not just about the environment -- it’s about evidence-based decision-making. I’ve spent 25 years informing policy as a scientist and now I’m hoping to help shape it.”

Weaver said he was inspired to join politics in part by Elizabeth May, Saanich-Gulf Islands member of Parliament and the leader of the Green Party of Canada.

“I could see how one person could make a difference,” he said. “I’ve never been a member of a political party in my life until last week when I joined the Green Party of B.C. My strongest asset is I’m not here to make a career out of politics, I’m here to make a difference.”

In a press release, Green Party of B.C. Leader Jane Sterk said: “I expect Andrew to be the first of a number of high-profile B.C. Green candidates... Greens want intelligent, thoughtful, passionate people who will stand up for their constituents and for change. Quite frankly, I think his candidacy is a game-changer for the 2013 election.”

The Oak Bay-Gordon Head riding is held by B.C. Liberal Ida Chong, who was named aboriginal relations minister in a recent cabinet shuffle. Education advocate Jessica Van der Veen will run for the B.C. New Democratic Party.

http://www.timescolonist.com/technology/Climate+scientist+Andrew+Weaver+Greens+Gordon+Head/7272850/story.html

Monday, September 17, 2012

Thawing permafrost speeding up global warming


Permafrost thaw will speed up global warming, study says

September 11, 2012





A polar bear wanders along the Hudson Bay. New research suggests that permafrost soils in Canada's Arctic are melting at a rate that will significantly speed up global warming. (iStock)


Permafrost soils in Canada's Arctic are melting at a rate that will significantly speed up global warming, according to new research from the University of Victoria.
The study, published this week in Nature Geoscience, predicts that the thawing permafrost will release between 68 billion and 508 billion tonnes of carbon into the atmosphere by the year 2100.
As a result of those carbon emissions, researchers say the Earth's temperature will rise by more than 0.5 C by the end of the century.
Though it may seem small, that increase comes on top of the two degrees the Earth's temperature is expected to rise because of global warming from industrial sources.
Andrew Weaver, a climate scientist at the University of Victoria and one of the study's authors, warns that once the planet warms by more than two degrees, the impact could be dire.
"Warming much beyond that puts an unacceptably high probability that we're committed to Greenland melting," Weaver said in an interview. "Rather large percentages of existing species become committed to extinction."
Experts estimate that permafrost soils contain 1,700 billion tonnes of carbon, about four times the amount emitted since the industrial revolution.
As the permafrost thaws because of warmer climates, trapped carbon is released into the atmosphere. This interplay between the increasing carbon emissions and increasing temperatures is a process called feedback.
Now that this process has started, not much can be done to stop it, said Weaver.
"It's like a slow, creeping cancer," he said. "Once you've set it in motion, it continues on and on and on and gains momentum."
The Arctic is undergoing a number of changes because of global warming. Last month, the amount of sea ice cover in the region melted to a record low, with experts predicting that the Arctic Ocean could become ice-free by 2030.

Tuesday, July 10, 2012

Guardian: Scientists march in white coats through Ottawa to protest Harper govt. cuts to vital science research and laws


Canada's PM Stephen Harper faces revolt by scientists

Scientists to march through Ottawa in white lab coats in protest at cuts to research and environmental damage
Canada's prime minister Stephen Harper: his government is accused of jeopardising Canada’s scientific reputation. Photograph: Todd Korol/Reuters
Canada's prime minister, Stephen Harper, faces a widening revolt by the country's leading scientists against sweeping cuts to government research labs and broadly pro-industry policies.
The scientists plan to march through Ottawa in white lab coats on Tuesday in the second big protest in a month against the Harper government's science and environmental agenda.
Harper is accused of pushing through a slew of policies weakening or abolishing environmental protections – with an aim of expanding development of natural resources such as the Alberta tar sands.
His government is also accused of jeopardising Canada's scientific reputation by shutting down the Experimental Lakes Area (ELA), a research station that produced critical evidence to help stop acid rain.
"In my view there are a lot of attempts in this country, and other countries too, to push through resource-based economies," said Prof John Smol, a freshwater lake biologist at Queen's University in Kingston, Ontario. "People working at ELA are constantly finding reasons why you can't just put a pipeline here, or an industry there, because there are going to be environmental costs."
Andrew Weaver, a climate scientist at the University of Victoria in British Columbia, was even more pointed. "It's not about saving money. It's about imposing ideology," he said. "What's happening here is that the government has an ideological agenda to develop the Canadian economy based on the extraction of oil out of the Alberta tar sands as quickly as possible and sell it as fast as it can, come hell and high water, and eliminate any barriers that stand in their way."
However, a spokeswoman for Gary Goodyear, the minister of state for science and technology, said the government remained committed to funding science. "Our government has made historic investments in science, technology and research to create jobs, grow our economy, and improve the quality of life for Canadians," she said.
But Canadian government officials also indirectly confirmed scientists' charges that Harper was far more interested in funding research with direct industry applications, than in funding pure science or environmental research.
"As a country we have been lagging behind our peer nations on applied research and commercialisation and our government is taking steps to correct that," the official said.
The official provided a list of new projects supported by the government. Among the largest was $105m for marketing forest products.
The showdown between the government and scientists was set off late last month by the passage of a budget bill that weakened or abolished scores of environmental laws.
The government claims the cuts are intended to shift more resources towards monitoring development of the Alberta tar sands, the core of Harper's economic strategy.
Critics say the changes gut the country's strongest environmental law, the Canadian Fisheries Act, by easing earlier requirements on mining and other industries to protect fish habitat.
In addition, the C-38 budget bill cut dozens of jobs for government scientists, scrapped research projects, and pollution control programmes. It abolished the unit in charge of monitoring emissions from power plants, furnaces, boilers and other sources, for a net saving of about $600,000.
It cut funding entirely for two well-established bodies: the National Round Table on the Environment and the Economy, an advisory panel, and the Canadian Foundation for Climate and Atmospheric Science, which awards research grants. It also cut other research grant programmes.
The Harper government has clashed regularly with environmental groups over its strategy of developing the tar sands and shipping the oil to America and China.
Earlier this year, the natural resources minister, Joe Oliver, accused foreign radicals and "jet-setting celebrities" of trying to hijack the country, by opposing development of the tar sands.
The government has also directed the tax authorities to investigate the funding of environmental groups.
There were protests, too, when government scientists were banned from speaking to media without an official "minder," and when news of the cost-cutting proposals first trickled out.
More than 500 groups took their websites down for 24 hours last month in protest at the budget cuts, which they claim were an excuse to weaken environmental protections.
But the cut that seems to have galvanised Tuesday's protests was the government's decision to shut down the Experimental Lakes Area in March 2013.
"It's a culmination of all of the cuts to government science and environment," said Diane Orihel, a PhD candidate at the University of Alberta, leading the campaign to save the labs. "The ELA is one small little morsel in a much broader problem." But she added: "We are starting to see momentum."
Since the decision first trickled out – as a government leak – the Harper government has faced widening criticism in Canadian media.
The closure, due in March 2013, would rob researchers of a rare chance to conduct science on a real-life scale, not just in a laboratory flask, said Smol.
Over the years, it has provided critical evidence on the causes of acid rain, and the effects on fish and their habitats of dumping fertilisers, detergents, or mercury.
"Any water quality problem we have on the planet, the research started out there," Smol said. "I think we need that information to get solid policy to deal with our environmental problems."
The government argues it can no longer afford the research station, which costs about $2m a year to run.
Critics dismiss that argument, pointing to the Harper government's promotion of the Alberta tar sands and its opposition to the Kyoto protocol agreements on climate change.
"The Harper government is the most environmentally hostile one we have ever had in Canada. Harper pulled Canada out of the Kyoto protocol, gutted the Fisheries Act (our strongest freshwater protection law), and hollowed out our environmental assessment legislation, making it easier for extractive industries to get licences to exploit," said Maude Barlow, a former UN advisor on water and chair of the Council of Canadians. "It is heartlessly shutting down a programme that costs very little to run given the incredible benefits it brings, in order to silence the voices who speak for water."