Saturday, December 19, 2015

Watts et al Show Some Warming is Man-Made

Subtitle: Man-made Warming by Selecting Bad Temperature Sites

In this post, a new study (2015) by Anthony Watts, Evan Jones, John Nielsen-Gammon, and Dr. John Christy is discussed.  The Watts 2015 study showed that the USA's temperature trend over 30 years (1979 - 2008), as measured by the US Historical Climatology Network stations, was too high due to the inclusion of temperature-measuring sites subject to artificial heating.  Watts 2015 showed that when only properly sited measurement stations are included in the data, the warming trend decreased substantially (2.0 degrees C per century without the "bad" stations, versus 3.2 degrees C per century with them).  See link to Watts' article announcing the study, which was presented this week at the 2015 Fall Meeting of the American Geophysical Union in San Francisco, California. 

Other blogs have articles that discuss the Watts 2015 paper, some with thoughtful comments and of course the usual jibber-jabber.  Dr. Judith Curry's blog article is here, see link.  JoAnne Nova's blog article is here, see link.  Bishop Hill's blog article is here, see link.   The Chiefio (E.M. Smith) is away from keyboard (AFK) performing new grandpa duties, but his take is sure to be interesting, perhaps fascinating.  See link to Chiefio's blog.   This post will be updated with a link to his article if and when it is published.    OK.  So, that is what some of the others are writing or have written.   Why should I write anything on this?

This is a good time and place to set out why I write on climate, and my qualifications.  I have written much of this before, and said this in various public speeches.   I am a chemical engineer and an attorney practicing in Science and Technology Law.  As a chemical engineer, especially one who deals with petroleum refineries, petrochemical plants, toxic chemical plants such as chlorine-caustic, and chemicals such as hydrofluoric acid, hydrochloric acid, sulfuric acid, liquid anhydrous ammonia, and highly explosive trade-secret reaction initiators (to name a few), I am acutely aware of the need to use only good, valid data and to screen out and exclude invalid data.   In short, if chemical engineers fail to find and exclude invalid data, our chemical processes will leak, spew toxic chemicals, catch fire, explode, and cause serious harm and death.   We take the data analysis aspect of engineering very seriously, because we must.  

As an attorney, I watch the climate science, and some scientists in that arena, with great dismay.  There have been many regulations established already (e.g. California AB 32, the "Global Warming Solutions Act of 2006"), multi-lateral treaties (Kyoto Protocol 1997), and non-binding climate agreements (Paris 2015).  This is merely a partial list of government acts concerned with climate science.   It is instructive that governments require science-based regulations to be based on good science, or best available science.  What, exactly, qualifies as best available science is a substantial part of the problem.  

The pedigree of the scientists and the scientific organization matters.  In the USA, NOAA (the National Oceanic and Atmospheric Administration) is the federal agency that is supposed not only to know what it is doing, but to perform its function at the highest level of expertise and accuracy.  In pertinent part, NOAA's mission statement reads: "(NOAA's Mission is) To understand and predict changes in climate, weather, oceans, and coasts, To share that knowledge and information with others, . . ."   NOAA's website goes on to state: 

 "NOAA’s dedicated scientists use cutting-edge research and high-tech instrumentation to provide citizens, planners, emergency managers and other decision makers with reliable information they need when they need it.

NOAA's roots date back to 1807, when the Nation’s first scientific agency, the Survey of the Coast, was established. Since then, NOAA has evolved to meet the needs of a changing country. NOAA maintains a presence in every state and has emerged as an international leader on scientific and environmental matters."   (bold, underline added - RES)

NOAA, then, is the expert, the professional, the go-to agency that not only monitors the climate, but makes sense out of the data and presents trends and conclusions for decision-makers.   One would expect, then, that their results can be trusted.   Yet, they cannot. 

In a nutshell, what Watts 2015 did was attempt to analyze 30 years of temperature data from NOAA's measuring stations located across the USA, with the express purpose of identifying bad stations and excluding them from the data.   The measuring stations comprise 1218 weather stations in the USHCN, the US Historical Climatology Network, which were critically examined by Watts and others for allocation into 5 categories of Excellent to Incredibly Bad (my terminology, not theirs).  Actually, Category 1 is best, and 5 is worst.   Watts 2015 focused their attention on the two top categories, 1 and 2.     An example of the difference between a 1 and a 5: a 1 is in an open grass field a safe distance from any artificial heat source, while a 5 can be on an asphalt parking lot adjacent to a dark brick building and in the path of an air conditioner condenser exhaust. 

Watts 2015 apparently included both Category 1 and 2 as producing acceptable temperatures, excluding all Category 3, 4, and 5.  Note, however, that NOAA includes all the stations, with various corrections applied as it sees appropriate.  But Watts 2015 went further: they excluded many stations in Category 1 and 2 due to issues such as non-continuous location (somebody physically moved the station over the years, as happened here in Los Angeles a few years ago).  Watts 2015 also excluded stations that had corrections for Time of Observation, and any stations that had an equipment change over the years.   Watts 2015 was left with 410 stations that met their criteria, or approximately one-third of the 1,218 total stations. 

(Note:  a concern arises at this point.  Referring to the Fall et al 2011 paper that was also co-authored by Watts, Jones, Nielsen-Gammon, and Christy, the USHCN is purported to have 1221 stations, not 1218.  Also, from Figure 1 of Fall 2011, the Category 1 and 2 stations combined are 7.9 percent of the 1007 stations that were surveyed for critical assessment and placement into the 5 categories.  Some simple math shows that something does not add up.    It appears that 80 stations out of the 1007 represent 7.9 percent.   Even allowing for all of the remaining, un-surveyed stations to wind up in Category 1 or 2 (highly unlikely), that yields 1221 minus 1007, or 214 stations that can be added to the 80 from above.   That provides only 294 stations at most.  Where, then, did Watts 2015 obtain 410 stations in Category 1 and 2?   I hope to find an answer to this simple math question.   It appears that Watts 2015 excluded some of the Category 1 and 2 stations based on this statement from Watts on WUWT: "It should be noted that many of the USHCN stations we excluded that had station moves, equipment changes, TOBs changes, etc that were not suitable  had lower trends that would have bolstered our conclusions."  Therefore, if 294 were available at most, but some were excluded, how did Watts 2015 end up with 410 valid, Category 1 or 2 stations? )
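The station-count arithmetic in the note above can be checked in a few lines.  This is only a sketch of the math as I have laid it out, using the counts cited from Fall 2011; it is not anyone's official tally:

```python
# Station-count check, using figures cited above from Fall et al. 2011.
surveyed = 1007          # stations surveyed and rated into the 5 categories
total = 1221             # total USHCN stations per Fall 2011
frac_cat_1_2 = 0.079     # Category 1 and 2 share of surveyed stations (Fig. 1)

cat_1_2 = round(surveyed * frac_cat_1_2)   # roughly 80 top-rated stations
unsurveyed = total - surveyed              # 214 stations never rated
upper_bound = cat_1_2 + unsurveyed         # 294, even if every unrated station qualified

print(cat_1_2, unsurveyed, upper_bound)    # 80 214 294
print(upper_bound < 410)                   # True: the claimed 410 exceeds the bound
```

Even under the most generous assumption, the bound of 294 falls well short of the 410 stations reported, which is the discrepancy the note raises.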

Now to the second point: end-point influences.    Watts 2015 chose the time period for analysis to be 1979 through 2008.  The reason given for this time period is that the paper is designed to challenge and rebut the conclusions produced in two other papers, Menne et al 2009 and 2010.  The Menne papers were written purportedly to defend NOAA's methodology for including badly-sited stations, correcting the measured temperatures, and including those temperatures in the database and analysis.   As Watts wrote on his blog (see link above):  

"Some might wonder why we have a 1979-2008 comparison when this is 2015. The reason is so that this speaks to Menne et al. 2009 and 2010, papers launched by NOAA/NCDC to defend their adjustment methods for the USCHN (should be USHCN - RES) from criticisms I had launched about the quality of the surface temperature record, such as this book in 2009: Is the U.S. Surface Temperature Record Reliable? This sent NOAA/NCDC into a tizzy, and they responded with a hasty and ghost written flyer they circulated. In our paper, we extend the comparisons to the current USHCN dataset as well as the 1979-2008 comparison."

As written several times on SLB, there is a problem with any study that uses the late 1970s as a starting point for a time-series trend of air temperatures, and then extends that trend to claim the climate is warming.   I wrote on this back in February 2010, and spoke on this in a public speech to chemical engineers in 2012.   The USA had severe winters in 1977, 78, and 79, as documented in many articles at the time (see link for one such article), and shown in temperature graphs from many US cities (see link).    One such temperature graph is shown below to illustrate; it is from Abilene, Texas, using the Hadley Climate Research Center HadCRUT3 dataset:

[Figure: Abilene, Texas temperature record, HadCRUT3 dataset]
The significant portion of this Abilene graph is the cluster of low temperatures around 1977, 78, and 79.   When a data set starts with very low values, and the remaining data merely oscillates with very little underlying trend, the fitted linear trend will be upward.
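This end-point effect is easy to demonstrate with a deliberately idealized series: a perfectly flat temperature record except for three anomalously cold starting years.  The numbers below are synthetic, chosen only to illustrate the mechanism, not taken from any real station:

```python
import numpy as np

def trend_per_century(years, temps):
    """Ordinary least-squares slope of temperature vs. year, in degrees per century."""
    slope = np.polyfit(years, temps, 1)[0]
    return slope * 100.0

years = np.arange(1977, 2009)        # 1977-2008, matching the 30-year window
temps = np.full(years.size, 15.0)    # idealized record: no underlying trend at all
temps[:3] -= 1.5                     # three anomalously cold winters, 1977-79

full = trend_per_century(years, temps)
dropped = trend_per_century(years[3:], temps[3:])
print(round(full, 2))                # 2.39 C/century, from the cold start alone
print(abs(round(dropped, 2)))        # 0.0 once the cold endpoints are removed
```

A series with zero real trend acquires a fitted warming of about 2.4 degrees C per century purely because the three cold years sit at one end of the fit; drop those endpoints and the trend vanishes.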

(A side note on severity of the winter of 1978-1979 in Illinois, excerpted from the linked article above:

"For the first time since modern weather records began in the 1880s, a third consecutive severe winter occurred in Illinois in 1978-1979. Seventeen major winter storms, the state's record coldest January-February, and record snow depths on the ground gave the winter of 1978-1979 a rank as the second worst statewide for Illinois, exceeded only by the prior winter of 1977-1978 (18 storms, coldest statewide December-March, record longest lasting snow cover). In the northern fourth of Illinois, 1978-1979 was the worst winter on record.

Severe storms began in late November and extended into March; the seven major storms in January set a new record high for the month, the four in February tied the previous record, and the four in December fell one short of the record. Fourteen storms also had freezing rain, but ice was moderately severe in only two cases. High wind and blizzard conditions occurred in only three storms (compared with eight in prior winter), suggesting a lack of extremely deep low pressure centers. Most storms occurred with Texas lows, Colorado (north track) lows, and miscellaneous synoptic conditions. The super storm of 11-14 January set a point snow record of 24 inches, left snow cover of more than 3 inches over 77% of the state, and lasted 56 hours.

Snowfall for the 1978-1979 winter averaged 68 inches (38 inches above normal) in northern Illinois, 40 inches (20 above) in the north central part, 32 inches (12 above) in south central Illinois, and 31 inches (22 above) in southern Illinois. Record totals of 60 to 100 inches occurred in northern Illinois. The winter temperatures averaged 7.8 F below normal in northern Illinois and about 7 below in the rest of the state. January-February temperatures averaged a record low of 15.9 F, 14 degrees below normal, and prevented melting between storms so that record snow depths of more than 40 inches occurred in northern Illinois." )


The Watts 2015 study should not, in my opinion, be judged as conclusive on the issue of whether warming occurred at a rate of 2 degrees C per century.   The starting point of 1979 is artificially low due to the severe winters of that period. 

Therefore, it can be concluded that Watts 2015 can be commended for showing that the NOAA methodology overstates the warming by 60 percent (3.2 degrees versus 2.0 degrees per century).   A necessary next step is to do what I have recommended for years (not only I, as others have also noted this and stated the obvious): find temperature records from the pristine areas across the USA, in national parks and other undisturbed areas, and use those records.   It may be that such records do not exist, and that may be why so much effort is expended as in Watts 2015, and Fall 2011 before it.    However, it would seem to be a relatively simple task to obtain small-town newspaper archives and collect the published temperatures from across America going back 100 years.   
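The 60 percent figure follows directly from the two trends reported by Watts 2015, as a quick check shows:

```python
# Overstatement of the warming trend implied by the two Watts 2015 numbers.
all_stations = 3.2      # C per century, all stations (NOAA methodology)
good_stations = 2.0     # C per century, well-sited stations only

overstatement = (all_stations - good_stations) / good_stations * 100.0
print(round(overstatement, 1))    # 60.0 percent
```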

Finally, it is certainly misleading, and quite possibly fraudulent, to make claims of global warming by analyzing data that begins in the late 1970s.   One could do worse, however, by starting the data in 1977 (that gives 3 years of low temperatures to start) and ending the data on a high-temperature year such as 2000 (that gives 2 years of high temperatures to end on).    Note that fraud in the legal context has many elements that must be proven, one of which is intent to obtain property from another.  No assertion of fraud is made, nor should any be implied, against any of the authors or organizations mentioned in this article.   Instead, Watts and co-authors, and Menne and co-authors, were probably doing the best they knew how, given the motivations and constraints at the time.    It is also noteworthy that the entire Watts 2015 paper has not been published yet, and all comments above are based on my best understanding of what is published on WattsUpWithThat.  

We must, however, do better.  There must be no data adjustments, no room for bias, and no end-point issues as described above.   Watts 2015 apparently tried to find unadjusted data, even though it is unclear how 410 stations exist in 2015 while only 80 or so existed in 2011. 

Next, we must have a study that shows what warming, or cooling, if any, occurs in pristine locations, without any starting- and ending-point data issues.   Such data is slowly being produced via the USCRN (US Climate Reference Network) stations; see link. 

Roger E. Sowell, Esq.
Marina del Rey, California

copyright (c) 2015 by Roger Sowell all rights reserved
