Playing fast and loose with both climate science and logic in a Supreme Court brief is a good way to destroy your own credibility.
In June 2012, the DC Circuit Court of Appeals ruled that the Clean Air Act permitted the EPA to regulate greenhouse gases such as carbon dioxide (CO2) as pollutants. Multiple industry groups and states appealed the ruling to the Supreme Court in March 2013, and the Court agreed to hear part of the appeal in October 2013. Specifically, the Supreme Court agreed to hear arguments on whether the Clean Air Act automatically required the EPA to regulate stationary sources like power plants as a result of the EPA’s finding that motor vehicles were a source of pollution. The Court refused to hear arguments on whether the EPA had the authority to find greenhouse gases a pollutant, or to regulate them as pollutants.
Even though the Supreme Court refused to hear arguments related to the EPA’s science-based endangerment finding, a group of 12 self-described “experts” submitted a Brief of Amici Curiae, also known as a “Friend of the Court” brief [hereafter “the brief”], to the Court on December 16, 2013. The brief asks the Supreme Court to overturn the DC Appeals Court ruling and the entire endangerment finding even though the Court refused to hear arguments on those issues.
While there are arguments within the brief against the specific question being heard (regulating stationary sources of carbon dioxide like power plants and cement kilns), the brief is primarily focused on manufacturing doubt about the overwhelming scientific evidence supporting the scientific theory of global warming, and on the EPA’s use of that science in its decision-making.
Over the next several weeks, S&R will present several investigations into the various claims made by the authors, starting with the scientific arguments presented in the brief. Following that will be an analysis of the expertise and credibility of the authors themselves, and the series will conclude with an investigation of the authors’ arguments with respect to the EPA’s stationary source regulations.
The scientific arguments presented in the brief are, almost without exception, examples of bad science and poor logic. First, the authors claim that CO2 will be good for plants in general and human food crops based on an incomplete and one-sided presentation of the plant science involved. Second, the authors create an elaborate straw man argument and use cherry picked data in an attempt to disprove the fact that greenhouse gases are the cause of global warming. Third, they argue against global warming by suggesting that cooling in one or two small parts of the Earth somehow offset the average trend. Fourth, the authors demonstrate a significant misunderstanding of the Intergovernmental Panel on Climate Change (IPCC) and in the process distort both climate model predictions and recent measured surface temperatures. And finally, the authors imply that carbon dioxide may not be a greenhouse gas, in opposition to over a century of observation and theoretical explanation.
Carbon dioxide and plants – a complicated relationship
Plants are directly affected by higher CO2 concentrations in four main ways – they use less water, they grow faster and larger, their tissues are less nutritious, and they are more resistant to stresses like drought and pests. However, higher CO2 also creates a number of indirect effects on plants – droughts are more common and more severe, flooding is more common, there are more plant-eating pests, plant diseases spread into new areas, and global temperatures are generally higher. While CO2’s direct effects tend to benefit plants, the indirect effects tend to be harmful. This is why the rosy picture painted by the brief’s authors – that CO2 will stimulate plant growth and thus mean more food for animals and people – is not necessarily accurate.
For example, plants will generally grow faster and use less water with more CO2 in the air. But because plants need less water from their roots, plants don’t absorb as much nitrogen from the soil. As a result, leaves, stems, wood, roots, and fruit have less protein in them. Less protein means that herbivores and pests (such as the pine beetle) have to eat more plants in order to get the same amount of nutrition for their growth. It’s not a foregone conclusion that plants will always grow faster than the animals that eat them. Yet the authors fail to mention this.
According to Daniel Taub, an expert on the effects of CO2 on plants, plants are unable to efficiently use the extra CO2 to grow if they don’t have enough nitrogen. A 2009 study of nitrogen fixation (the process by which nitrogen in the air is converted into a form that plants can use) found that global warming would likely result in a net decrease in the amount of nitrogen fixation globally. The study found that while nitrogen fixation would increase outside the tropics due to warmer temperatures, those same warmer temperatures would kill nitrogen-fixing bacteria in the tropics, more than offsetting the gains outside the tropics.
Essentially, less nitrogen available to plants and less nitrogen used by plants will limit how much plants can grow in response to higher CO2 concentrations, possibly significantly.
The authors’ claim that more CO2 means more food is also flawed. While many crops are likely to see higher yields, research has shown that many will also be less nutritious, partially offsetting those gains. More pests will also reduce yields, and crop diseases spreading from the tropics toward the poles will reduce yields even further, especially in the monocultures common in industrial agriculture. In addition, some crops like corn and millet won’t see any improvement from higher CO2 because they already concentrate CO2 in their cells.
Recent research has found that we are already likely seeing lower crop yields for some crops in some areas due to increased global temperatures. As temperatures continue to rise globally, meeting the global demand for food will become more difficult.
Finally, just because plants need less water doesn’t mean they are immune to the drought and flooding that will result from higher CO2 concentrations. If plants draw less water from their roots, more water stays in the soil, so when it rains hard, less of the rainfall soaks into the ground and the risk of flooding goes up. And even though plants may use water 5-20% more efficiently, a drought that reduces available water by more than 5-20% will still kill them.
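The arithmetic behind that last point can be sketched with hypothetical numbers – the function and the specific percentages below are illustrative, not taken from the brief or any study:

```python
# Illustrative only: hypothetical numbers showing why a modest gain in
# water-use efficiency does not protect a plant from a larger water deficit.

def water_demand_met(baseline_need, efficiency_gain, rainfall_reduction):
    """Return True if reduced water supply still covers the plant's reduced need.

    baseline_need      -- water the plant needs under normal CO2 (arbitrary units)
    efficiency_gain    -- fractional reduction in need from higher CO2 (0.10 = 10%)
    rainfall_reduction -- fractional loss of available water during a drought
    """
    need = baseline_need * (1 - efficiency_gain)
    available = baseline_need * (1 - rainfall_reduction)
    return available >= need

# A 10% efficiency gain survives a 10% drought...
print(water_demand_met(100, 0.10, 0.10))  # True
# ...but not a 30% drought.
print(water_demand_met(100, 0.10, 0.30))  # False
```

The efficiency gain only buys a fixed margin; any drought deeper than that margin still leaves the plant short of water.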
Given these details, it’s clear that the authors’ claims about CO2 being “plant food” are one-sided. Had the authors included all these details, however, they would have had a harder time making their emotional appeal (a type of logical fallacy) on behalf of the poor who, the authors claim, will reap the benefits of hypothetically greater crop yields due to CO2.
The tropospheric hotspot is an elaborate straw man
The authors devote a significant amount of their brief to attacking the supposed “fingerprint” of global warming, namely that the troposphere is predicted to warm over the tropics. However, the troposphere is predicted to warm no matter what the cause of the warming – solar, geologic, or greenhouse gases. The actual fingerprint of greenhouse gas-driven global warming (aka industrial climate disruption) is the presence of a hot spot and a cooling stratosphere, as shown in the lower panel of the image at right (from GISS, via Skeptical Science). The reason that so-called skeptics bring up the hot spot, however, is because it represents one of the few areas where observations appear to contradict model predictions. The brief’s authors use the apparent contradiction to create an elaborate straw man argument about how the lack of a hot spot supposedly destroys the entire concept of industrial climate disruption.
The authors’ straw man starts with the following:
[The EPA’s theory is that] in the tropics the upper troposphere is warming faster than the lower troposphere, and the lower troposphere is warming faster than the surface, all due to rising global atmospheric CO2 concentrations. (emphasis added)
The first half of the quote above accurately describes how the “hot spot” is supposed to come into being, and it’s called “tropical tropospheric amplification.” In essence, hot air rises and warms the upper parts of the troposphere in the process. The emphasized half of the quote is not accurate because any source of warming should create a “hot spot.” In reality, greenhouse gases like CO2 produce not just a hot spot, but they also cool the stratosphere. In the dense troposphere, heat moves most efficiently by way of wind and convection and can’t be radiated directly into space efficiently. But in the stratosphere, heat is lost directly to space more efficiently, and so adding more CO2 increases its cooling.
This is what makes the authors’ argument a straw man – they focus exclusively on a hot spot that should happen any time extra energy is pumped into the atmosphere, but they ignore the cooling stratosphere that only exists in the presence of greenhouse gases.
Not only do the authors focus on a straw man, but they attempt to disprove the straw man with cherry picked data. For example, there were four data sets to choose from for Figure 1A in the brief, but they chose the data set with the lowest trend (HadAT2 vs. RATPAC, RAOBCORE/RICH, and IUK). In addition, they cherry picked the trend from the upper troposphere and ignored the HadAT2 measurements for lower altitudes – measurements that actually show tropical tropospheric amplification.
And the brief’s authors ignored a “cautionary note” on the HadAT2 website about using the HadAT2 data by itself:
It is important to note that significant uncertainty exists in radiosonde datasets reflecting the large number of choices available to researchers in their construction and the many heterogeneities in the data. To this end we strongly recommend that users consider, in addition to HadAT, the use of one or more of the [other radiosonde datasets] to ensure their research results are robust.
Figure 1B in the brief also illustrates significant cherry picking. The authors again chose to use the one dataset that shows the lowest trend in the middle troposphere (UAH vs. RSS and STAR). In addition, the authors chose to use a measurement band that is known to be corrupted by cooling from the lower stratosphere. This is tacitly acknowledged in the title of Figure 1B, which describes the measurements as going from the surface to 18 km altitude – well into the lower stratosphere.
There are datasets available where the stratospheric cooling signal has been removed from the measurements, including one generated by UAH. Each shows significantly more warming than the uncorrected band that the authors presented in Figure 1B.
While Figures 1A and 1B are heavily cherry picked, Figure 2 is not. But Figure 2 doesn’t represent what the authors claim it does. The text of the brief claims that Figure 2 represents the “Pacific Ocean Temperature,” but it’s actually a small region of the Pacific Ocean that is monitored for El Nino known as “Nino 3.4.” The authors claim that they can apply the temperatures from this small region (about 2.5% of the Pacific and about 3.5% of the Earth’s tropics) to the entire troposphere. For comparison, if the authors were making claims about the surface temperature of the United States, they’d be making the absurd argument that we can substitute the temperature of Michigan for the temperature of the US, including Alaska and Hawaii.
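The Michigan analogy holds up to a back-of-the-envelope check. The area figures below are approximate round numbers, not authoritative values:

```python
# Rough check: Michigan's share of total US area is about the same as the
# ~2.5% share the Nino 3.4 region occupies in the Pacific. Areas in km^2,
# rounded; these are approximations for illustration, not official figures.

michigan_km2 = 250_000     # Michigan, land plus Great Lakes waters (approx.)
us_km2 = 9_834_000         # United States, total area (approx.)

fraction = michigan_km2 / us_km2
print(f"Michigan is about {fraction:.1%} of the United States")
```

A temperature trend computed over that small a fraction of a region says very little about the region as a whole, which is the point of the comparison.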
In addition, NOAA itself disputes the authors’ claim that there is no statistically significant trend in the Nino 3.4 region. In fact, NOAA created a new version of the Nino 3.4 temperature series specifically to address the warming trend in that region of the Pacific (at right).
Scientists are well aware that the data for tropospheric amplification is plagued with error – the US Climate Change Science Program (now called the United States Global Change Research Program) devoted a 180-page assessment report to the issue in 2006, and there has been significant new research into reducing the errors. Unlike the errors that plague tropical tropospheric amplification, the cooling trends in the stratosphere due to greenhouse gases have been unambiguously observed and have been found to match predictions. As such, the actual data conflicts with the authors’ claims about the hotspot “fingerprint” – tropospheric warming and stratospheric cooling have been observed, and so the actual fingerprint of GHG warming has been verified.
The authors built their straw man by omitting critical facts and using logical fallacies (one-sidedness, equivocation, and unrepresentative samples) masquerading as scientific arguments. But in the process of tearing apart an elaborate straw man, they undermined their brief and their own credibility as experts.
Global warming can’t be disproven with local changes
Surface temperature records show that the global average surface temperature has been rising at an unprecedented rate over the last 50 years and that the recent decade is the hottest decade on record. The authors claim these well-established facts are wrong, but instead of attacking the quality of the surface temperature record or challenging whether the rate is or is not unprecedented, the authors tried to disprove the global average with arguments that are regional or that are straw men themselves.
For example, the authors write that “‘global warming’ has not been global.” This is true, but it’s irrelevant because climate experts don’t claim that the Earth will heat up the same amount everywhere. This is why many scientists prefer the terms “climate change” or “climate disruption” to the term “global warming.” Some parts of the globe will warm a lot, some will warm a little, and a few will cool. Pointing to those few areas that have remained cool or only warmed a little is irrelevant.
The authors also claim that most of the warming has been in the Northern Hemisphere, rather than globally, and as evidence they graph satellite measurements of the lower troposphere since 1979 in the brief’s Figure 3. Beyond the fact that this is again irrelevant, the authors make at least five significant mistakes related to Figure 3:
- One can’t disprove 50 years of global warming using only 34 years of data (1979 to 2012).
- One should not use satellite measurements of the lower troposphere in place of direct surface temperature measurements given that four different surface temperature datasets are available (GISS, NCDC, JMA, and HadCRUT4).
- Choosing the satellite data series with the lowest temperature trend (UAH instead of RSS) is cherry picking.
- One may not arbitrarily divide data into two segments to create the false impression that the surface temperature is leveling off. At best it’s a misrepresentation of the data, and in this case it’s essentially what the bloggers at Skeptical Science call the Escalator (see above).
- One cannot disprove global average surface temperature warming using just data from the Northern Hemisphere and ignoring data from both the tropics and the Southern Hemisphere.
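The fourth point – why splitting a noisy warming record into short segments is misleading – can be demonstrated with synthetic data. Everything below is made up for illustration: a steady warming trend plus random year-to-year noise, nothing from any real temperature dataset:

```python
# A minimal sketch of the "Escalator" effect: a steadily warming series with
# realistic year-to-year noise still contains short windows whose fitted
# trend is flat or even negative. All numbers here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1963, 2013)               # 50 years of synthetic data
temps = 0.017 * (years - years[0]) + rng.normal(0, 0.12, years.size)

def trend(y, t):
    """Least-squares slope, in degrees per year."""
    return np.polyfit(t, y, 1)[0]

full = trend(temps, years)                  # recovers roughly the true slope

# Scan every 10-year window for the weakest short-term trend.
window_trends = [trend(temps[i:i + 10], years[i:i + 10])
                 for i in range(years.size - 10)]
weakest = min(window_trends)

print(f"50-year trend:          {full:+.3f} deg/yr")
print(f"weakest 10-year window: {weakest:+.3f} deg/yr")
```

The full record recovers the underlying warming, while a cherry-picked decade can be made to show almost anything – which is exactly why short segments cannot disprove a multi-decade trend.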
The authors then argue that hot and cold temperature records in the contiguous United States (exclusive of Alaska and Hawaii) also disprove increasing global average surface temperatures. If it’s not possible to use just the Northern Hemisphere to disprove global warming, then it’s absurd to try to rely on the 2% of the Earth’s surface that is the United States. In addition, it’s well known that the warmest decade on record for the US was the 1930s but that the warmest decade on record globally is the 2000s, and every surface temperature dataset is in agreement on that point.
The authors could have presented an argument that the global average surface temperature wasn’t increasing. They could have presented an argument that the observed increase hasn’t been unprecedented in the last several thousand years (at least). They would have been wrong in both respects, but they could have at least tried. Instead they cherry picked and equivocated on the science, tried to redefine the word “global” to mean “regional,” and created an incoherent argument in the process.
Recent global warming is within projected climate model variation
In their final attempt to discredit the EPA and the scientific foundation of global warming, the authors attack climate modeling. As with their prior allegations, however, the attacks show a remarkable lack of scientific basis and a significant amount of data manipulation.
First, the authors make repeated reference to their straw man argument regarding the hot spot. However, since the climate models’ predictions about the actual fingerprint (warming in the troposphere, cooling in the stratosphere, arctic amplification) have been observed and the observations are statistically robust, the authors are clearly wrong.
Second, Figure 5 in the brief is obviously flawed to anyone with a detailed knowledge of climate science. The surface temperature dataset shown (HadCRUT4) has the lowest trend of the four major datasets. The legend identifies the trend line as “HadCRUT4 Trend/Forecast,” but using HadCRUT4 for forecasting is a misuse of the dataset. Such simple forecasts are known to be completely unreliable when used this way.
The biggest problem with Figure 5 is that the figure shows only the statistical mean for the various model scenarios shown (B1, A1B, A2, and “Stop CO2”). The actual model scenarios each have 95% confidence intervals above and below the mean values, but those intervals aren’t shown in Figure 5. When they are shown, as they are in the image at right from the website RealClimate, it becomes obvious that recent temperature trends are well within the model projections.
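The difference between comparing observations to the ensemble mean alone versus the full ensemble spread can be sketched with synthetic numbers. The model runs and the “observed” series below are entirely hypothetical, constructed only to show the statistical point:

```python
# Sketch: an observed series that runs cooler than a model ensemble's MEAN
# can still sit comfortably inside the ensemble's 95% envelope.
# All numbers are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(42)

# 30 hypothetical model runs over 35 years: per-run forced trends near
# 0.02 deg/yr, plus year-to-year internal variability.
n_runs, years = 30, np.arange(35)
trends = rng.normal(0.02, 0.006, n_runs)             # per-run trend (deg/yr)
noise = rng.normal(0, 0.1, (n_runs, years.size))     # year-to-year noise
runs = trends[:, None] * years + noise

mean = runs.mean(axis=0)
lo, hi = np.percentile(runs, [2.5, 97.5], axis=0)    # 95% envelope

# A hypothetical "observed" series warming more slowly than the ensemble mean
obs = 0.012 * years + rng.normal(0, 0.1, years.size)

below_mean = (obs < mean).mean()              # fraction of years below the mean
inside = ((obs >= lo) & (obs <= hi)).mean()   # fraction inside the envelope

print(f"years below ensemble mean:  {below_mean:.0%}")
print(f"years inside 95% envelope:  {inside:.0%}")
```

Plotted against the mean alone, such an observed series looks like a persistent miss; plotted against the envelope, it is an unremarkable member of the expected range – which is the error the brief’s Figure 5 invites.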
The most recent IPCC report has a graph that also shows how recent temperatures compare with prior projections, and again, the measured surface temperatures are well within the models’ projections.
The first climate models were created over 30 years ago. Since then they have become much more complex, and in the process they have come to project climate far better than ever before. Even so, there are aspects of climate models that the authors could have chosen to attack with some success. Instead they chose to distort both climate data and the results of climate modeling in a failed attempt to manufacture doubt about climate science.
The amici imply that carbon dioxide may not be a greenhouse gas
As bad as the previous allegations are, perhaps the most glaring issue is peripheral to the authors’ allegations. The authors write
Whatever so-called “greenhouse effect” CO2 may cause….
The greenhouse effect of GHGs, if any….
The substances at issue in this rulemaking are sometimes referred to as “greenhouse gases,” that is, gases that are posited to warm the surface temperature of the earth by emitting back to the surface some radiation that would otherwise escape into space. (emphasis added)
In each of the quotes above, the authors imply that greenhouse gases like CO2 may not actually create the greenhouse effect.
As a matter of scientific fact, the optical properties of CO2 are undeniable. CO2 is transparent to visual wavelengths of light and absorbs/re-emits infrared. These properties have been known since 1859, when John Tyndall measured the absorption, and it’s these properties that are responsible for CO2 being a greenhouse gas.
The explanation for why CO2 absorbs infrared light wasn’t understood until the development of quantum mechanics in the early 1900s, but scientists have understood it for at least 87 years (since Schrödinger published his equation in 1926). Essentially, the structure of the CO2 molecule is such that the atoms in the molecule vibrate in specific ways that make CO2 absorb infrared light.
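The connection between those vibrations and the greenhouse effect can be made concrete with a short worked calculation using standard physics (these specific numbers are added here for illustration; they are not from the brief). CO2’s bending vibration absorbs infrared light near a wavenumber of 667 cm⁻¹, and Wien’s displacement law gives the peak of Earth’s thermal emission at a surface temperature of about 288 K:

```latex
% Wavelength of CO2's bending-mode absorption band:
\lambda = \frac{1}{\tilde{\nu}} = \frac{1}{667\ \mathrm{cm}^{-1}}
        \approx 15\ \mu\mathrm{m}

% Wien's displacement law for Earth's surface (T \approx 288 K):
\lambda_{\mathrm{max}} = \frac{b}{T}
        = \frac{2898\ \mu\mathrm{m\,K}}{288\ \mathrm{K}}
        \approx 10\ \mu\mathrm{m}
```

The 15 μm absorption band sits squarely within the band of thermal infrared that Earth’s surface radiates, which is precisely why adding CO2 to the atmosphere intercepts outgoing heat.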
For the authors to suggest that CO2 might not be a greenhouse gas is to suggest that all of the facts above are completely misunderstood and have been misunderstood by thousands of independent scientists and engineers for at least 90 years. This is not a tenable position for the authors to take.
In their brief to the US Supreme Court, the 12 authors resorted to logical flaws, manipulated datasets, misunderstandings, and factual errors in order to make their case. They presented a one-sided and oversimplified argument about how the direct effects of higher CO2 would benefit plants but neglected to mention the direct and indirect effects that would harm plants. They created an elaborate straw man about the tropospheric hot spot and then resorted to cherry picking and distortion in a failed attempt to defeat their own straw man. Rather than attack the overwhelming scientific evidence of recent, rapid increases in global average surface temperature, the authors tried to redefine it to mean something that could be disproved using only regional data. They misused data and omitted necessary context that would have countered their own argument about climate model projections. And there is evidence that at least some of the authors deny over a century of observational evidence and established physics regarding the infrared properties of CO2 and why those properties make CO2 a greenhouse gas.
Given the authors claim to be “highly regarded scientists and economists” with “relevant expertise to support every statement,” how did so many obvious mistakes make it through their collective review and into the submitted brief? There are only four possibilities: the authors may be biased, lazy, not actually experts, or unethical.
No matter which of these explanations holds for the authors as a group or as individuals, any one of the explanations should disqualify the brief from serious consideration by the Justices of the Supreme Court.
In the second part of this series S&R looks at the authors’ credibility and expertise both individually and collectively to determine if their claim to be experts is justified.