Wednesday, October 5, 2011

On Nuclear Power Plants

I was part of a discussion over the past few days on WattsUpWithThat that started out discussing, as its primary topic, Obama's statement that within 5 years the US would have a car battery that would achieve 130 miles per gallon.  While that is a nonsensical statement from an engineering standpoint, it was made by a career politician who is also an attorney - that is, he has zero background in this.  He did refer to Energy Secretary Chu, a Nobel-prize-winning physicist who should have known better than to make such a statement.  However, it is possible that Obama got it wrong and misquoted Secretary Chu.

Battery capacity is rated in amp-hours, not gallons, and those amp-hours are delivered at a certain voltage (within a relatively narrow tolerance).  That is, the battery's voltage will decline somewhat as the battery is discharged.  Standard car batteries in the US operate at 12 volts, nominally.  Electric car batteries operate at far higher voltages, which vary depending on the manufacturer.
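
To see how a battery rating relates to anything like "miles per gallon," consider a rough sketch in Python; the pack voltage, capacity, and vehicle efficiency below are purely illustrative assumptions, and the 33.7 kWh-per-gallon figure is the EPA's gasoline-energy equivalence used for MPGe ratings.

```python
# Illustrative battery-pack arithmetic; all vehicle numbers are assumptions.
pack_voltage_v = 360.0   # nominal pack voltage (assumed)
capacity_ah = 66.0       # rated capacity in amp-hours (assumed)

energy_kwh = pack_voltage_v * capacity_ah / 1000.0   # Wh -> kWh
print(f"Pack energy: {energy_kwh:.1f} kWh")

# EPA's MPGe rating converts electrical energy at 33.7 kWh per gallon of
# gasoline; that is the only sense in which a battery has a "miles per gallon."
miles_per_kwh = 3.5      # assumed vehicle efficiency
mpge = miles_per_kwh * 33.7
print(f"About {mpge:.0f} MPGe at {miles_per_kwh} miles per kWh")
```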

At some point in the commentary on WUWT, nuclear power plants were brought up and their virtues were extolled.  I, of course, stand ready to refute any and all such claims, because nuclear power is about the worst way to generate electricity known to man.

As support for my proposition, I cited two studies, one being the excellent analysis by Craig A. Severance, CPA, where he produced results that show a new  US-built nuclear power plant must charge 25 to 30 cents per kWh in order to pay for the investment plus operating costs.    Unfortunately, that paper does not appear to be available on the internet at this time.  Even then, Severance's number is too low because it did not reflect the subsequent US requirement that all new nuclear power plants be designed and constructed to withstand a direct hit from a large commercial aircraft.  Furthermore, the new plants must be designed so that not only the containment building is intact, but also the cooling system and the spent fuel storage area.   I have stated that this requirement should add another 5 cents per kWh to that calculated by Severance, thus bringing the cost to 35 cents per kWh.

The second study I cited, since the doubters in the comment thread demanded "proof," was from the California Energy Commission, a state agency, and their published comparison of multiple generating technologies for both 2009 and 2018.  The report is "Comparative Costs of California Central Stations Electricity Generation," dated January 2010.  One of the technologies evaluated is a single-reactor Westinghouse AP-1000 design that produces 960 MW.  Their assessment concluded that a merchant nuclear plant's levelized cost of electricity is 34 cents per kWh.  They also assessed Investor Owned Utilities and Publicly Owned Utilities, with costs at 27 and 17 cents, respectively.

The WUWT commenters of course disagreed, and cited some other studies giving the cost of power from new nuclear plants at 3 to 6 cents per kWh.  This is, of course, ludicrous.  Anyone with the slightest background in cost estimating and financial analysis will conclude that no project that costs $8 to $10 billion, requires 4 to 8 years of construction, and produces only 1000 MW of electricity at maximum output can be built without massive subsidies while relying on the sale of electricity at 6 cents per kWh.  One must bear in mind that the plant also must shut down periodically for refueling and will incur other operating problems that curtail generation.  Therefore, over the long term, the 1000 MW will be derated to approximately 850 to 900 MW.
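
A rough back-of-envelope sketch, under assumed financing terms (8 percent cost of capital, 30-year recovery), shows why 6 cents per kWh cannot come close to paying for such a project; only the capital charge is counted here, with fuel and operating costs ignored.

```python
# Back-of-envelope check (all financing terms are illustrative assumptions).
capital_cost = 10e9      # total project cost, dollars (assumed)
rate = 0.08              # cost of capital per year (assumed)
years = 30               # capital recovery period (assumed)
avg_output_mw = 875      # long-term average output after derating (from text)

# Annual capital charge from a standard capital-recovery factor
crf = rate / (1 - (1 + rate) ** -years)
annual_capital_charge = capital_cost * crf

annual_kwh = avg_output_mw * 1000 * 8760
capital_only_cents = 100 * annual_capital_charge / annual_kwh

print(f"Capital charge alone: {capital_only_cents:.1f} cents per kWh")
print(f"Revenue at 6 cents/kWh: ${0.06 * annual_kwh / 1e6:.0f} million per year")
print(f"Capital charge:         ${annual_capital_charge / 1e6:.0f} million per year")
```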

There are several key points to keep in mind about the true costs of nuclear power.  First is the cost of the design, materials, services, equipment, and labor to construct the plant.  This would be the "instant" cost, that is, if it could all be built in a single month, what would it cost?  Typically, the instant cost is approximately $4 billion for a 1000 MW plant.  The California Energy Commission (CEC) used $3.95 billion for 960 MW.  But of course a nuclear power plant cannot be constructed in a single month; it will be built over a period of several years.  The longer the construction schedule, the more important two other items become: materials and labor inflation, and financing costs.

The great debatable item is the time to construct.  Nuclear power proponents insist that new plants will be built in only 4 years, or 48 months from groundbreaking to first generation.  This has never been the case in the US, nor indeed in most of the world.  Typical for the US is 7 to 10 years, and some projects took much, much longer.  Even in Europe, the plant being built in Finland is years behind schedule, and no expected completion date has been issued (see the second half of my article at this link).

As the construction period increases, so too do the costs of materials and labor, due to inflation.  Nuclear power plants require great quantities of concrete and steel, which are subject to cost inflation.  Also, each year of continued construction adds more and more interest to the financing costs.  For a large nuclear project, it is common for the financing interest alone to reach $1 billion per year in the latter years of construction, especially for a two-reactor plant with both reactors proceeding at the same time.
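
A minimal sketch of how schedule length drives up the final bill, assuming a $4 billion overnight cost spent evenly over 8 years with 5 percent annual escalation and 8 percent interest on the accumulating balance (all illustrative figures):

```python
# Minimal sketch: escalation plus interest during construction (IDC).
# All figures are illustrative assumptions, not project data.
overnight_cost_bn = 4.0   # "instant" cost, $ billions
years = 8                 # construction duration
escalation = 0.05         # materials/labor inflation per year
interest = 0.08           # financing rate per year

balance = 0.0
for yr in range(1, years + 1):
    spend = (overnight_cost_bn / years) * (1 + escalation) ** (yr - 1)
    idc = balance * interest          # interest on money already spent
    balance += spend + idc
    print(f"Year {yr}: spend ${spend:.2f}B, interest ${idc:.2f}B, cumulative ${balance:.2f}B")

print(f"All-in cost at startup: ${balance:.1f}B versus ${overnight_cost_bn:.1f}B overnight")
```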

Another key aspect of a nuclear power plant being constructed is the need to reassess the design and incorporate any lessons learned from recent disasters or mishaps at the approximately 400 operating reactors around the world.  This is frequently cited by nuclear advocates as the key reason plants' costs spiraled out of control in the 70s and 80s, and they insist that such days are behind us and nuclear power plant design is now mature.  This is not the case, as the recent disaster at Japan's Fukushima nuclear complex clearly demonstrated.  That disaster was the result of two almost-simultaneous events, a large earthquake and a large tsunami a few minutes later.  Many nuclear advocates point to the land-locked sites of nuclear power plants in the US and conclude that they are perfectly safe because a tsunami cannot possibly reach them.  However, a recent earthquake on the East Coast shook at least one nuclear power plant, and the structural damage is not yet clear.  The simple fact is that we do not know how to predict the largest earthquake that could strike.  We could design for an earthquake of magnitude 7.0 and then experience an earthquake of 8.0 or even 8.5.

Also, earthquakes are not all the same.  Some shake the ground from side to side, others have more vertical shaking.  Some are a combination.  Designs for such earthquakes are very complex.

Yet another key aspect of new nuclear power plants is the intense opposition by well-funded groups that bring lawsuits to halt construction.  The anti-nuclear sentiment is very strong around the world, and in the US.  The memories of the faulty construction, gross abuses during construction, and sheer incompetence of some project management from the 70s and 80s are still very fresh.  If the next round of nuclear power plants also has the same shoddy workmanship, and the same intimidation and threats toward inspectors and auditors, the advent of the internet will ensure rapid whistleblowing.  Delays will inevitably result.

Furthermore, it can now be shown via various studies that new nuclear power is not a cost-effective means of generating power  (see Figure 1, below).  It can be argued that the state governing boards must agree to power projects that provide safe, reliable, and low-cost power to the public.   Nuclear power does not fit those criteria.
Figure 1
Relative Costs of Power Generation in 2018
Source: California Energy Commission study from January 2010
Note that the Nuclear Power Plant is the most expensive, except for the
three simple-cycle natural gas plants that are used for peak power only

Finally, nuclear power plants consume far more water per unit of electricity produced than almost any other technology.  The reactor must be kept cool, and the steam from the turbines must be condensed.  A nuclear plant will deliver approximately two to three times as much heat into its cooling water as it delivers as electricity.  In contrast, a natural gas-fired combined cycle gas turbine plant will use roughly one-fourth to one-fifth of that cooling water.  Stated another way, the nuclear plant will require 4 to 5 times as much cooling water.  By cooling water, the meaning here is water that is evaporated in the heat removal process.
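
A simple heat-balance sketch makes the comparison concrete; the thermal efficiencies and the fraction of CCGT heat rejected through the steam condenser are assumptions for illustration, not measured plant data.

```python
# Heat rejected to cooling water per kWh of electricity, under assumed
# efficiencies (illustrative only).
def heat_to_cooling_water(thermal_eff, fraction_rejected_to_water):
    """kWh of heat sent to the cooling-water system per kWh(e) generated."""
    total_rejected = (1.0 - thermal_eff) / thermal_eff
    return total_rejected * fraction_rejected_to_water

# Nuclear steam plant: ~33% efficient; essentially all rejection via the condenser.
nuclear = heat_to_cooling_water(0.33, 1.0)
# CCGT: ~55% efficient; only the steam bottoming cycle uses the condenser,
# so roughly half of the rejected heat leaves with the gas-turbine exhaust.
ccgt = heat_to_cooling_water(0.55, 0.5)

print(f"Nuclear: {nuclear:.1f} kWh of heat to cooling water per kWh(e)")
print(f"CCGT:    {ccgt:.2f} kWh of heat to cooling water per kWh(e)")
print(f"Ratio:   {nuclear / ccgt:.1f}x")
```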

For all these reasons, nuclear power plants should never be built.  There are far safer, more cost effective, and less water intensive means of producing electricity for the future.

Roger E. Sowell, Esq.
Marina del Rey, California

Friday, September 30, 2011

Saudis to Build Nuclear Plants at $7 Billion Each

"[T]he kingdom [of Saudi Arabia] will build 16 nuclear reactors by 2030 at a cost of around $7 billion each." - source.


In an ever-growing list of countries that either are building, or plan to build, nuclear-powered electric power plants, none are building at an affordable cost.  The USA, Finland, China, and now Saudi Arabia all publish numbers that indicate a new, 1,000 MW reactor costs anywhere from $7 to $11 billion.  China is building a six-reactor plant for $66 billion, or $11 billion apiece.  The recently-cancelled South Texas Nuclear Project Expansion in the USA was to cost $17 billion, but that was just a dream; no shovel had been turned, and no delays had yet started, with the inevitable increase in financing costs.  Fully costed, the STNP expansion would be at least $22 billion, more likely $25 billion.


At these price levels, electricity must be sold for at least 35 cents per kWh, just to pay for the investment and provide a reasonable return.  


The Saudis indicated that their growing economy requires a 7 percent per year increase in electric power production.  They don't want to burn oil for making power; they would rather sell the oil.  Thus, the need for nuclear power plants.  The Saudis are smart, as I've written before, but they are mistaken on this one.  No economy grows, nor can it grow, at much above 3 percent per year for very long.  A temporary growth spurt of 7 or 8 percent might occur for a year or two, but this is not sustainable.


Thus, there is no need for the nuclear power plants.  The Saudis should, instead, do what the rest of the world does where economics are important: build combined-cycle gas turbine power plants (CCGT).  The Saudis have access to natural gas in the Middle East, and could easily purchase what they don't produce themselves.  These CCGT power plants are much more efficient than conventional steam-based power plants, at 59 percent compared to approximately 35 percent.  They also do not use nearly as much water, which is a huge consideration for nuclear power plants.  Where, and how, will the Saudis obtain sufficient cooling water for 16 nuclear power plants?  Nuclear plants require at least twice as much water for cooling, compared to CCGT plants.  Of course, the nuclear power plants could be built on the coast and use seawater.  This greatly increases the cost of the plant because seawater is more corrosive than fresh water.
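
For the efficiency comparison, a one-line heat-rate calculation shows how much fuel energy each type of plant burns per kWh delivered, using the efficiencies quoted above.

```python
# Fuel energy required per kWh delivered, at the quoted efficiencies.
BTU_PER_KWH = 3412.14

for label, efficiency in [("CCGT", 0.59), ("Conventional steam plant", 0.35)]:
    heat_rate = BTU_PER_KWH / efficiency
    print(f"{label}: {heat_rate:,.0f} BTU of fuel per kWh ({efficiency:.0%} efficient)")
```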


Perhaps the Saudis have another motive, from watching what the Iranians have done in the past several years with their nuclear "power" program.  Perhaps, just perhaps, the Saudis are in a race for parity and do not want the Iranians to have the upper hand, even in nuclear power plants. 


Roger E. Sowell, Esq. 
Marina del Rey, California

Wednesday, September 28, 2011

EPA CO2 Endangerment Finding Review by OIG


The US EPA’s Office of Inspector General (OIG) released today its report titled “Procedural Review of EPA’s Greenhouse Gases Endangerment Finding Data Quality Processes.”

It is important to note that this was a Procedural review and not a Substantive review of the underlying science.   Procedural review merely means comparing the procedures that EPA used to what is required under the various laws and regulations.   Substantive review means evaluating the data and science that EPA relied on in forming their Endangerment Finding.   The Procedural rules that EPA must follow depend on whether the Technical Support Document (TSD) is a “highly influential scientific assessment” or not.  OIG considers the TSD to be a highly influential scientific assessment, but EPA did not.   There is a higher standard of care, or procedures that must be followed, for a highly influential scientific assessment.  It is these additional procedures that OIG found lacking in EPA’s work.

Background

For some background, and a description of a TSD: as the primary scientific basis for its finding, EPA relied upon assessments conducted by other organizations [the IPCC, National Research Council, and US Global Change Research Program].  EPA summarized the results of these and other scientific assessments in a technical support document (TSD).  There are specified criteria by which a document is to be judged to determine if it is a highly influential scientific assessment.  OIG presents these criteria in its report as:

“A highly influential scientific assessment is a scientific assessment that:

A) Could have a potential impact of more than $500 million in any year on either the public or private sector, or

B) Is novel, controversial, or precedent setting, or has significant interagency interest.”


OIG stated the level of peer review for the highly influential scientific assessments, and goes on to say that:

“For highly influential scientific assessments, OMB guidance requires more attention to peer review consideration such as individual versus panel review, timing, scope of the review, selection of reviewers, disclosure and attribution, public participation, and disposition of reviewer comments. If the material to be disseminated falls within OMB’s definition of highly influential scientific assessment, OMB requires the agency to adhere to the peer review procedures identified in Section III of its bulletin.
OMB guidance also requires that agencies certify compliance with the requirements of the bulletin and information quality guidelines when using influential scientific information or highly influential scientific assessments to support a regulatory action. This certification and other relevant materials should be included in the administrative record for the action.”

Next, OIG discussed what the EPA did procedurally.  “EPA had the TSD reviewed by a panel of 12 federal climate change scientists. This review did not meet all [Office of Management and Budget] OMB requirements for peer review of a highly influential scientific assessment primarily because the review results and EPA’s response were not publicly reported, and because 1 of the 12 reviewers was an EPA employee.”

No public reporting of the 12 scientists’ review, no public reporting of EPA’s response to that review, and having an EPA staff member as one of the 12 scientists were cited as procedural errors. This is essentially, for the first two errors, a lack of transparency.  The public does not know what the reviewers found and reported, nor the EPA’s response, if any.  Were the findings unanimous?  Or, was there a split of opinion?  Did the EPA ignore the review panel’s findings?  At this point, we don’t know.   The obvious conflict of interest from the reviewer who is an EPA staff member should have made his or her opinion or vote irrelevant.   OMB requires an external peer review.

Reasons Given by EPA why TSD was not Considered a Highly Influential Scientific Assessment

“They [EPA] noted that the TSD consisted only of science that was previously peer reviewed and that these reviews were deemed adequate under the Agency’s policy. They also stated that, as described in the final Federal Register notice, the Administrator primarily relied upon assessments conducted by other organizations rather than the TSD, which summarizes the conclusions and findings of these other assessments.”

End Results

It appears that the OIG will allow the Endangerment Finding to stand, and is recommending only that EPA revise its procedures for the future.  This could be a wrong interpretation; however, nowhere in the OIG report is the EPA required to revise or re-issue the missing transparency documents, nor to hold a second and independent review by qualified scientists.

The fact that only procedures were evaluated means that the clearly false statements and conclusions of many of the peer-reviewed papers and documents were considered acceptable by EPA.  As reported earlier on SLB, the EPA accepted such wildly inaccurate statements as glaciers disappearing in the Himalayas.  Also, as the State of Texas wrote in their recent petition, regarding the Climategate emails,  

"[t]he emails do not reflect the work of objective
scientists dispassionately conducting their work and zealously pursuing the truth. Rather
they reveal a cadre of activist scientists colluding and scheming to advance what they
want the science to be—even where the empirical data suggest a different outcome." Also, "to the extent their [these scientists'] objectivity, impartiality, truthfulness, and scientific
integrity are compromised or in doubt, so too is the objectivity, impartiality, truthfulness,
and scientific integrity of the IPCC report, the CRU temperature data, the NOAA
temperature data, and other scientific research that is shown to have relied on their
compromised research."


Texas' petition also shows how the IPCC authors manipulated the climate temperature data, citing the by-now infamous email of using a "trick" to "hide the decline." Also, especially egregious data manipulation is discussed with Russian and New Zealand temperature data. Such manipulation showed undue warming. Also, the IPCC admitted they have lost critical climate data.



Then the real fun begins, with several major discredited claims that relied on non-peer-reviewed sources. These include Himalayan glaciers receding faster than anyone thought (they aren't). Also, the Chinese temperature data was seriously flawed and had no source documents; they made up the data. Next, the claim that 55 percent of the Netherlands is below sea level, and subject to inundation from sea level rise. This is erroneous, as only 26 percent is below sea level. The fourth and final example included in the Petition for Reconsideration is the wild claim that "up to 40 percent of the Amazonian rain forest could react drastically to even a slight reduction in precipitation." This was from the non-scientific, but wildly agenda-driven, World Wildlife Fund, the WWF.

Apparently, these types of "peer-reviewed" scientific conclusions on the impact of man-made CO2 on the planet's climate are acceptable to the US EPA.  


Roger E. Sowell, Esq. 
Marina del Rey, California



Wednesday, September 21, 2011

US Long-Term Temperature Trend from NCDC

There is a problem in the NCDC data (National Climatic Data Center, US Department of Commerce, NOAA Satellite and Information Service) for the United States.    The problem is that the reported average temperature trend for the US does not agree with the mean, nor the area-weighted average, of the 48 contiguous states.   NCDC reports the temperature trend for the 48 contiguous states is 1.2 degrees F per century.  However, the mean of the individual states is 0.78 degrees F per century, and the area-weighted average for the 48 states is 0.74 degrees F per century.

This is a problem.  If the NCDC cannot get it right, how much of their data is wrong, and how many other statements issuing from there are also wrong?

Below (Figure 1) is a simple table, listing each of the contiguous 48 states in the US, alphabetically, with the temperature trend next to each state, in degrees F per century.

Figure 1
US 48 Contiguous States and Long-term Temperature Trend, Deg F/Century
Data from US NCDC

The area-weighted average was computed by weighting each temperature trend by the relative geographical area of each state.  This does not change the average much, but gives a better number because small states (Rhode Island, Connecticut, Delaware, etc.) do not carry the same weight as large states (Texas, Montana, California, etc.).
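
For readers who want to check the arithmetic, here is a sketch of the area-weighting calculation with just three states; the Texas and Oklahoma trends are from the discussion in the following paragraphs, the Rhode Island trend is an assumed value for illustration, and the state areas are approximate.

```python
# Sketch of an area-weighted average with three states; Rhode Island's
# trend is an assumed value, and areas are approximate square miles.
trends_f_per_century = {"Texas": 0.0, "Oklahoma": 0.7, "Rhode Island": 1.5}
areas_sq_mi = {"Texas": 268_600, "Oklahoma": 69_900, "Rhode Island": 1_545}

simple_mean = sum(trends_f_per_century.values()) / len(trends_f_per_century)

total_area = sum(areas_sq_mi.values())
area_weighted = sum(trends_f_per_century[s] * areas_sq_mi[s] for s in areas_sq_mi) / total_area

print(f"Simple mean:           {simple_mean:.2f} deg F per century")
print(f"Area-weighted average: {area_weighted:.2f} deg F per century")
```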

Other things pop out upon closer inspection of this table.  

There is a problem of uneven heating in adjacent states.  As an example, Texas shows a trend of zero degrees F per century, yet its neighboring state to the north, Oklahoma, is warming at 0.7 degrees F per century.  This is not likely, nor is it due to CO2 or any other so-called "greenhouse" gases in the atmosphere.  As I have stated before, how does CO2 know to ignore the entire state of Texas, yet concentrate its radiant heat beams on Oklahoma?   Note that, in earlier posts on SLB, I pointed out that adjacent cities have grossly different warming trends, again showing that CO2 cannot do what climate scientists claim it does.  

This gets even worse when one examines Texas' westerly neighbor, New Mexico.  New Mexico is warming at the rate of 0.9 degrees F per century.  How does CO2 know to focus its beams on New Mexico, yet ignore Texas?

Another example is the pair of states, Oklahoma and Arkansas.  Oklahoma, as stated just above, is warming at just under the national average at 0.7 degrees F per century.  Meanwhile, its neighbor to the east, Arkansas, is cooling at minus 0.3 degrees F per century.   Again, one must question how is this possible, if CO2 is responsible for the warming?  How can Arkansas be cooling?   I've been to Arkansas and can attest to the great lush green growth in that state, as CO2 abounds.  

Yet another example is the adjacent states of North Dakota and South Dakota.  North Dakota is warming at the alarming rate of 2.5 degrees F per century.  Its neighbor to the south, South Dakota, however is warming at half that rate, 1.2 degrees F per century.    How does CO2 know to focus so much energy from its heat rays on North Dakota? 

Yet another example is the adjacent states of Pennsylvania, and New York.  Both are of comparable size and located in the Northeast.  Pennsylvania is warming very slightly at 0.1 degrees F per century.  However, New York to its immediate north is warming at a much higher rate of 1.2 degrees F per century.  Again, how does CO2 know to ignore Pennsylvania and concentrate its heat rays on New York?

Then, there is the entire band of states along the edge of the Gulf of Mexico: Louisiana, Mississippi, Alabama, and Georgia.  Their reported temperature trends are zero for Louisiana, but negative for the others: Mississippi (negative 0.7 degrees F per century), Alabama (negative 0.8 degrees F per century), and Georgia (negative 0.6 degrees F per century).   Contrast those to Florida, immediately south of Georgia, which has a warming of 0.3 degrees F per century.  How could CO2 ignore the southern states but heat up other states? 

There are other curious adjacent states with wide disparities:

California: 0.7 degrees F per century, and Nevada to the east at 2.3 degrees F per century.  

Michigan: 0.1 degrees F per century, and Ohio to the south at 0.7 degrees F per century.  

Finally, a nearly identical overall trend for the US contiguous 48 states is repeated on the US EPA's website, with the following text (note that the EPA website uses 1.1 degrees, while NCDC reports the trend is 1.2 degrees; perhaps that is acceptable for government work and is lost in rounding):


United States Surface Temperature Trends
"Observations compiled by NOAA’s National Climatic Data Center indicate that over the past century, temperatures rose across the contiguous United States at an average rate of 0.11°F per decade (1.1°F per century). Average temperatures rose at an increased rate of 0.56°F per decade from 1979 to 2005. The most recent eight-, nine-, and ten-year periods were the warmest on record.

Warming occurred throughout most of the U.S., with all but three of the eleven climate regions showing an increase of more than 1°F since 1901. The greatest temperature increase occurred in Alaska (3.3°F per century). The Southeast experienced a very slight cooling trend over the entire period (-0.04°F per century), but shows warming since 1979."   (bold emphasis added)

Meanwhile, the coastal regions of the west coast (Washington, Oregon, and California) all show a very sudden and steep temperature decline since 2002.    The average for the coastal areas is negative 21 degrees F per century.    One can only wonder why CO2 has abandoned the warming task set for it by climate scientists.   Perhaps the coastal cooling has more to do with the rapidly cooling Pacific Ocean along the west coast of the US.  

In summary, one can only wonder at what other examples of gross exaggeration are to be found upon close inspection of the data, and the conclusions arrived therefrom by the alarmist climate science community.  Also, the individual states show gross disparities in warming rates, from a high of 2.5 for North Dakota to a low of negative 0.8 for Alabama.  Adjacent states show gross disparities that indicate that CO2 cannot be causing any warming at all.  CO2 cannot act capriciously, but must act uniformly if it is indeed a physical phenomenon and not a figment of imagination. 

Roger E. Sowell, Esq. 
Marina del Rey, California.  Where it is indeed growing colder year by year. 


Sunday, September 11, 2011

From Man-made Global Warmist to Skeptic, My Journey



By Roger E. Sowell, Esq. 
Marina del Rey, California September 11, 2011


Several of my friends have asked me lately how I can be so positive that CO2 is not the evil, planet-killing pollutant that the science community insists that it is.  This missive is a partial response to those friends.  I will have more to add, likely some figures, charts, graphs, links to other sites and such.  But, here is the first effort.  Fair warning:  this is a long, long piece.  It covers a lot of ground.  It is as accurate as I can make it.  I haven't delved into the "why", but concentrated on the "what" and some of the "how."  There are a few "whos" in here, also.   Most of this has been covered by me in one or several earlier posts on SLB.  

Scientists said CO2 emissions are heating the Earth, with dire, even catastrophic consequences about to happen: ice caps melting, sea levels rising, shores inundated, more and stronger hurricanes, heat waves, deadly tropical diseases moving into temperate zones, crop failures due to heat and desertification, human health pandemics from heat-aggravated illnesses, snow disappearing from the California Sierra Nevada range, fresh water shortages; the list went on and on.

The solution, the scientists said, was to stop using fossil fuels, e.g. natural gas, coal, and oil.  Instead, we were to conserve and learn to use less electric power, drive electric cars, replace gasoline with corn-based ethanol, make diesel from recycled animal fats and seed oils, recycle all our garbage into trash-burning power plants, build wind turbines and solar-panel farms to generate electricity, collect methane from dairy farms’ manure pits and landfills, then burn the methane for fuel in power plants.  But, until those technologies could carry the load, we had to capture CO2 from power plants and big furnaces, and hide it away forever.  This hiding was named “sequestration.”

My interest was piqued, to say the least.  The Earth is becoming un-inhabitable?  Millions of climate refugees would be on the move, seeking places to live?  Wars would be fought over food, and fresh water?  Coastlines would flood and be gone forever?  And this is all due to our fossil fuel use?  My industry?  The oil and gas industry?  

I knew that chemical engineers would be involved, and in the thick of it, too.  Chemical engineers are the ones who know how to provide substitutes for oil and diesel, and how to make ethanol from corn or cellulose.  Chemical engineers are also the ones who know how to design, build, and operate a CO2-capture plant, and who can find ways either to chemically bind the dangerous CO2 or to permanently store it underground as part of that sequestration.

So, I began to look into what chemical engineers could do to solve the problem, and seek ways to benefit from my chemical engineering background, and legal expertise as an attorney.  Surely, there would be some opportunities in all this for a guy with my skills.  I had to do my due diligence, and verify the scientific claims.   First, just how does CO2 cause all this warming?   I had worked with CO2 for decades, in many forms and many places.  CO2 is a combustion product (along with water vapor) from burning natural gas, other light hydrocarbons, gasoline, diesel, fuel oil, petroleum coke, wood, coal, even peat and dried animal dung.  I knew more than most, I supposed.  I had designed and installed and operated liquid CO2 storage tanks, equipment to gasify liquid CO2, to compress it, to re-liquefy the compressed gas.  I had also designed and operated process equipment to scrub CO2 out of a furnace’s flue gases, and designed and operated other process equipment that made solid particles out of the CO2.  I figured there would be lots of opportunities for me.

Finding out how much CO2 needed to be removed seemed like a good place to start.  I began by reading blog posts on a website called RealClimate.org, where they claimed “real climate science written by real climate scientists.”   That seemed like a good thing, to get the information right from the experts.   I saw there some charts and graphs,  and I understand charts and graphs.  Chemical engineers know all about such things.   One of those graphs showed the earth’s global average temperature since about 1880 up until 2005.   There was a dramatic and noticeable upward trend from around 1975 until the present.  That trend, if it continued, would certainly appear to make the world hotter, and indeed, perhaps the ice caps would all melt. 



So, being a good engineer, and a lawyer trained to look at all sides of the issue, I looked at the rest of the chart.  It looked a bit odd, to me.  You see, there was a rather flat area from around 1940 to 1975, or perhaps even a slight downward trend in those 35 years.  Hmmm…I wonder what caused that?  Perhaps CO2 was going down in that period?  Made a note to check that out.

Then, the period before 1940 really caught my eye.  From about 1900 to 1940, the graph showed a remarkably similar upward trend, just like the one from 1975 to 2005.  Hmmmm, again…how did that warming trend happen?  Was CO2 rising in those days?  And if it was, why did it stop around 1940?  The world was in a global war in the late 30s and first half of the 40s….did we not use any coal, or oil, or natural gas in those days?   Something seemed not quite right about that, as I distinctly remember from my reading about the oil industry that oil production rose dramatically during World War II, due to all the military machines that needed gasoline and diesel fuel, and the ships that needed fuel oil to run.  I knew that atomic power was not around until well after the war, so all those ships were running on heavy fuel oil, what we referred to as Bunker C oil.   Also, factories across the world were humming at full capacity before and during World War II, turning out munitions, steel, aluminum, war machines, tanks, jeeps, ships, and all the other trappings of war.  Not much conservation going on there, I thought.  Lots of CO2 being emitted, too.  Nobody cared about efficiencies or conservation, or even pollution, the only thought was for more production as fast as possible.  There was a war on, after all. 

I then looked into the amount of CO2 in the atmosphere, thinking it was some great percent, probably 3 or 4 or maybe 5 percent.  The graphs I found looked wrong, at first.  The CO2 was nowhere near 5 percent.  Not even 1 percent.  It was so low it was measured in parts per million!  About 365 ppm (parts per million), I found, and increasing by about 2 ppm per year.  The measurements went back to 1959, and even more amazing to me, this 365 ppm was on a bone-dry basis.  That means the air sample was desiccated, or dried very thoroughly to remove any water vapor, before quantifying the CO2 amount.  That is actually a good practice, because it eliminates any variations due to changes in air humidity.  But on a practical basis, if the atmosphere contains very much water vapor, then the actual CO2 concentration will be somewhat less.  I worked it out, and for air in the tropics at 80 degrees F and 90 percent humidity, air contains about 0.02 pounds of water per pound of dry air.  On a molar basis, roughly 3 percent of the air we breathe is water vapor.  That, then, would reduce the CO2 concentration by about 3 percent, so that 365 ppm on a dry basis was actually about 354 ppm in humid air.  Wow.  Could CO2 at roughly 350 ppm be causing all that trouble?  I had to see how this worked.
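
Here is the psychrometric arithmetic as a short sketch, assuming 80 degrees F, 90 percent relative humidity, and sea-level pressure; the saturation pressure is taken from the steam tables.

```python
# Dry-basis vs wet-basis CO2, at assumed conditions (80 F, 90% RH, 14.696 psia).
P_atm_psia = 14.696
p_sat_80f_psia = 0.507          # water saturation pressure at 80 F (steam tables)
relative_humidity = 0.90

p_water = relative_humidity * p_sat_80f_psia
y_water = p_water / P_atm_psia                               # mole fraction water vapor
humidity_ratio = 0.622 * p_water / (P_atm_psia - p_water)    # lb water / lb dry air

co2_dry_ppm = 365.0
co2_wet_ppm = co2_dry_ppm * (1.0 - y_water)

print(f"Humidity ratio: {humidity_ratio:.3f} lb water per lb dry air")
print(f"Water vapor:    {100 * y_water:.1f} mol percent of the moist air")
print(f"CO2, wet basis: {co2_wet_ppm:.0f} ppm (vs {co2_dry_ppm:.0f} ppm dry)")
```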



It didn’t take me long, looking around on the internet to find that scientists were claiming that CO2 absorbs heat from the Earth’s surface, and re-radiates about half of that heat back down to Earth.   The effect was termed the “greenhouse” effect, and CO2 was labeled a “greenhouse” gas.  Hmmm…that’s not how greenhouses work, I knew, because we had designed and built greenhouses, too.  Greenhouses stay warm just like a car stays warm when the windows are rolled up.  Heat from the sun passes through the glass, is absorbed by the dark green plants, and heats up the air inside the greenhouse.  Winds cannot blow the warm air away because the glass walls and roof are present.  In engineering terms, there is no convection heat transfer.   Well, this “greenhouse” gas may be a term of art, and I have certainly encountered many such terms of art in engineering, and in the law.  More investigation was clearly needed.

So, I found the Kyoto Protocol, which had a lot to say about greenhouse gases.  Turns out, there are six of them, not just CO2.  The Kyoto Six included CO2 of course, plus methane, nitrous oxide, sulfur hexafluoride, hydrofluorocarbons, and perfluorocarbons.  

Then, how did the CO2 and others absorb heat and re-radiate it back to Earth?  And, how did that create a warming?  Some things already did not add up, such as increasing CO2 since 1959 but the temperature cooling from 1940 to 1975.  More on that, later.

More investigation led me to Anthony Watts’ blog, Watts Up With That.  It appeared to be a place where articles were posted that questioned the orthodoxy of man-made, CO2-caused global warming.  So, I read and read about Al Gore and his movie An Inconvenient Truth, Dr. James Hansen of NASA who creates the world’s temperature chart, and several other figures.  There was something called “The Team” and I did not know who was on the team, and what game they played.  From the context, though, being a member of The Team seemed like not a good thing, as the term was used somewhat disparagingly. 

But, back to CO2 and heating the Earth.  I read about CO2 absorbing heat.  Some small bells went off way deep in my memory.  I had heard about this somewhere, a long time ago.  I pulled out my ancient handbook of Chemical Engineering, known as Perry’s Handbook.  That book is full of rock-solid, never-wrong science and engineering facts.  If it was in Perry’s, it was a fact.  It could be relied upon.  Chemical engineers use the principles and knowledge found in Perry’s every day, around the world.  So, I looked in Perry’s.  And I found it.  Under Heat Transfer, sub-heading Radiative Heat Transfer, furnace design.  Furnaces usually burn some form of fossil fuel, perhaps coal, or oil, or natural gas, or a mixture of light hydrocarbons if the furnace is in an oil refinery.  Home heating furnaces burn a medium oil similar to diesel fuel, and there are millions of these around the world.  There are a similar number of large, industrial furnaces and boilers in power plants, factories, steel mills, refineries, and chemical plants.  Furnace design is a very mature art, having been practiced and perfected over not just decades, but centuries.  Even railroad locomotives burned wood or coal, and have done so for centuries.  And, sure enough, one of the correction factors that must be included in the furnace design is the effect of CO2 in the combustion gases.  Water vapor also must be accounted for, and its effect is even greater than that of CO2. 

A bit of an aside is in order, here.  When a fossil fuel is burned, a chemical reaction occurs that gives off a great quantity of heat.  A fossil fuel is a hydrocarbon, meaning most of the atoms are either hydrogen or carbon.  Hence, Hydro-Carbon.  Chemists are not very inventive when naming things, sometimes.  Air is added, and heat, and the oxygen in the air reacts chemically with the carbon, and with the hydrogen.  One carbon atom combines with two oxygen atoms from the air to form CO2, Carbon Dioxide – meaning one Carbon, and two Oxygens.  Again, not very creative naming.  Similarly, two Hydrogen atoms react with one oxygen atom to form H2O, Di-hydrogen Monoxide, more commonly known as Water.  The water is in the gaseous state, so it is water vapor.  This is important in furnace design, because what flows out of a furnace’s exhaust stack is mostly nitrogen from the air fed into the furnace, very little oxygen because most of it is reacted, and the rest is water vapor and CO2.  How much CO2, and how much water vapor?  Is the CO2 a very low concentration, like that in the atmosphere?  Turns out the answer is no: for natural gas firing, CO2 is on the order of 10 percent and water vapor is on the order of 19 percent, with heavier fuels such as coal and fuel oil giving even more CO2.  This is very, very different from the concentration in the atmosphere.  Stated another way, 10 percent is the same as 100,000 ppm.  So, the concentration of CO2 is much, much higher in a furnace.  What else is different?  For one thing, the temperatures are much different.  CO2 in a furnace is glowing hot.  Its temperature is on the order of 1800 degrees F.  Yet, in the atmosphere, CO2 is on the order of 90 degrees F down to -40 degrees F.  I wondered if that made a difference, and if it did, how much?  The basic answer was that yes, CO2 and water vapor each absorb radiant heat and re-emit that radiant heat.  So, there appeared to be some valid basis for the scientists’ claims that CO2 absorbs heat.  But still, I wondered.
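
The flue-gas figures above can be checked with a simple combustion balance; this sketch assumes stoichiometric burning of methane (natural gas) in air with no excess air, so a real furnace running excess air will show somewhat lower CO2 and water vapor.

```python
# Stoichiometric combustion of methane in air (assumed: no excess air):
# CH4 + 2 O2 + 2*(79/21) N2  ->  CO2 + 2 H2O + 2*(79/21) N2
n_co2 = 1.0
n_h2o = 2.0
n_n2 = 2.0 * (79.0 / 21.0)   # nitrogen carried in with the combustion air
total = n_co2 + n_h2o + n_n2

for name, n in [("CO2", n_co2), ("H2O", n_h2o), ("N2", n_n2)]:
    print(f"{name}: {100 * n / total:.1f} mol % ({1e6 * n / total:,.0f} ppm)")
```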

I then read some more in the scientific literature and found that CO2 only absorbs a small fraction of the radiant heat leaving the Earth’s surface.  A very, very small portion.  Not only a very small portion, but the effect of adding more CO2 to the air has a diminishing effect on how much radiant energy is absorbed.    In effect, the atmosphere’s radiant absorption is the same whether CO2 is present, or not.  The effect is further diluted because water vapor also absorbs radiant energy at the same wavelengths as does CO2.   Water vapor also absorbs radiant energy at other wavelengths, but there is an overlap with CO2.

So, I sat back and pondered over all this, gave it a good thinking through.  The Earth, as I knew, cooled off rapidly and substantially in the clear desert nights, even in the heat of summer.  The cold desert nights are attributed to the very dry air, that is, almost zero water vapor.  This effect is pretty amazing, and one can actually see ice form in a shallow pan at night in the desert when the surrounding air is above freezing.   This is a favorite event for Scouts who go desert camping.  One takes a shallow pan, such as a brownie pan, puts about a quarter inch of fresh water in the pan, and sets the pan down on some insulating material such as Styrofoam or cardboard.  We wrapped a dry towel around the sides, too, to keep the air from warming the pan.  Sure enough, just before sunrise, we checked and there was a layer of ice on the water surface.  Enough heat had escaped from the water via radiation into the black sky above, unimpeded by CO2 or water vapor, to allow the water to chill and create ice.  The ice water was great for filling canteens. 

Yet, one cannot do this at night in a humid climate, such as Houston, Texas where I grew up and also did some camping.  The water vapor in the air, even on a clear night, prevents this. 

So, I wondered and pondered the entire question of CO2 absorbing heat in the atmosphere.  First, Perry's handbook made mention of a most important parameter, the "mean beam length," or MBL.  This refers to the distance from the hot CO2 and water vapor gases to the furnace tubes containing the liquid that must be heated.  The greater the distance, the less impact the radiant energy has.  This is rather obvious from everyday experience, also, if anyone has ever built a campfire or lit a candle.  Closer to the flame is much hotter, and far away from the flame is much cooler.  This is common knowledge, except among very young children.  This is also well-known from the planets in the solar system, with planets closest to the sun, such as Mercury, being very hot, and those farther away growing colder and colder.  Yet the Sun's surface temperature is the same regardless of which planet receives its rays.  Clearly, distance has something to do with the amount of radiant energy absorbed.  I wondered just how much energy CO2 could absorb at altitudes of 10,000 feet, 20,000 feet, 30,000 feet, and higher.  Also, as the atmosphere grows thinner and thinner with altitude, I wondered how many CO2 molecules are present at each altitude to absorb whatever heat energy happens to be passing through.

It was rather obvious that even ancient man knew some of these basic facts, as references to "the cold stars" are common in literature.  Yet, we now know that stars are in fact suns, and some of them are much bigger and far hotter than our sun.  We cannot feel the heat from them, due to the very great distances, measured in trillions of miles.  Far away means very cold.  Up close means very hot.

I had not yet formed a conclusion, a firm opinion, on all the scientific claims of CO2 causing the earth to warm, but it was looking pretty shaky to me.  Then I considered my engineering background in process control, and kept thinking about campfires.   The Law of the Campfire is simple, and was stated briefly above:  if you are too hot, move back.  If you are too cold, move closer.   Closer is hotter, every time, for a campfire that has constant heat output. 

And yet, I had seen the chart that showed CO2 was slowly rising, a nice smooth curve.  At the same time, the average temperature for the entire earth had peaked in about 1940 and decreased for 35 years.  Then, the trend reversed, and the earth started warming again.  That is impossible, if CO2 is what is causing the warming.  For CO2 to cause the earth to cool for 35 years, then warm again for the next 35 violates the fundamentals of process control.  A noted PhD chemical engineer, Dr. Pierre Latour, wrote on this same subject in a familiar magazine, Hydrocarbon Processing.  I had my own blog by then, and wrote an article discussing Dr. Latour’s writings.  For CO2 to allow cooling then warming, would be like moving your chair closer to the campfire to cool down on some occasions, but moving away to cool down on other occasions.  I knew right then that CO2 could not do what the scientists say it does.  Not at those low concentrations in the atmosphere, and not at those low temperatures.  But, I wanted to look further, so I kept reading and questioning.

About that time, November of 2009, the Climategate scandal broke when thousands of emails and computer files were released into the internet.  The files were incredibly damning, and damaging to the climate warmists’ cause because they revealed improper actions by some of the scientists at the heart of the climate debate.   In damage-control mode, the scientists at the Hadley Center’s Climate Research Unit of the University of East Anglia, UK, chose to release some of their files on temperature records for about 1,000 locations around the world.  The intent was to show that there was nothing to hide, and in a good faith effort, here was the raw data for all the world to see.   I copied the files onto my computer and had a look. 

First, the so-called raw data was anything but raw.  I know raw data, having acquired reams and reams of raw data as a practicing chemical engineer, in refineries and chemical plants all  over the world over more than 20 years.  What HadCRU (Hadley Climate Research Unit) had released was processed data.  Their release showed the average monthly temperature over a period of several decades for the chosen cities.  The monthly average was created from the daily average temperature.  The daily average was created by averaging the high and low temperature for the day.  The high and low temperature were each sometimes adjusted, or fudged, by accounting for the time of day for that temperature reading.   Also, there was no indication of how missing data was replaced or created.  Instruments are not 100 percent reliable, and sometimes require attention.  They may require cleaning, calibration, parts replacement, or other servicing.  They may be out of service for a period while someone notices the data is missing and fixes the instrument. 

Yet, here was a data set of monthly averages for about a thousand cities.  I decided to look at what was there, for the USA.  There were 87 records, all in the lower 48 states.  The data were for cities all across the USA, not in every state, but in most states, and were fairly evenly distributed.  Some were in great cities like New York, Los Angeles, San Francisco, Miami, some in mid-sized cities like St. Louis, Spokane, and Fresno.  Others were in small cities or large towns, like Abilene, Texas, and Meridian, Mississippi.    I loaded the data for each city into a popular spreadsheet and made graphs of the monthly temperature versus time.  I included a moving average to see what trends were apparent, if any, then added a linear best-fit trend line.  The results were so fascinating that I uploaded all the graphs onto my blog, with some commentary.  What I found confirmed what I had suspected all along. CO2 cannot do what the scientists claim it does.
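
For anyone wanting to reproduce that kind of analysis, here is a sketch of the same steps in Python rather than a spreadsheet, run on synthetic monthly data (not the HadCRU release): a 12-month moving average plus a linear best-fit trend expressed in degrees per century.

```python
# Sketch with synthetic data: monthly temperatures, a 12-month moving
# average, and a linear best-fit trend in degrees per century.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(12 * 100)                         # 100 years of monthly data
seasonal = 10.0 * np.sin(2 * np.pi * months / 12.0)  # annual cycle
drift = 0.005 * (months / 12.0)                      # assumed 0.5 deg/century trend
temps = 15.0 + seasonal + drift + rng.normal(0.0, 1.0, months.size)

moving_avg = np.convolve(temps, np.ones(12) / 12.0, mode="valid")
slope_per_month, intercept = np.polyfit(months, temps, 1)

print(f"Latest 12-month average: {moving_avg[-1]:.1f} deg")
print(f"Fitted trend: {slope_per_month * 12 * 100:.2f} deg per century")
```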

What the graphs showed was a terrible inconsistency in the warming of cities in the USA.  Some cities did, indeed, show a pronounced warming trend over a roughly 100 year period from 1900 to 2009.   Not all cities had data that extended that far back, though, as only 62 had long-term records.   Yet, other cities among those 62 had cooling trends, or neutral trends.  That seemed odd, because if CO2 was truly warming the earth, then it must be warming every part and not being selective about what to warm and what to ignore.  This is especially true for adjacent cities, or those not separated by great distances north to south.  There would be some difference, supposedly, for cities in the far north compared to those near the equator.  But, the USA is only about 1500 miles from north to south in the lower 48 states.     For the earth’s total distance from pole to pole of approximately 12,000 miles, that is barely more than 10 percent.   For cities that are only one or two hundred miles apart, it seemed very odd to me that CO2 would ignore one and focus its heating rays on the other.  Physics does not work that way.  If a phenomenon is truly a physical effect, it works consistently and equally at all times and places.  Gravity, for instance, has the same downward force in Houston, Texas as it does in Mumbai, India, or Bora Bora.     One can imagine the confusion if travelers had to adjust to different gravity effects depending on what city their plane had landed in.   No, physics does not play whimsical games like that.

Or, one could imagine how chaos would reign if the properties of steel were capricious, like CO2.  Engineers might be building a bridge in Cairo, Egypt, and require hefty steel beams 12 inches wide and 24 inches deep.  But, an identical bridge across the Mediterranean in Rome would require lighter beams of only 4 inches width and 12 inches deep.  Engineers will laugh at this, because that simply does not happen.  A given grade and quality of steel will hold up the identical weight, no matter where in the world it is used.  Purists will note that this is not strictly true, as steel is somewhat affected by temperature.  But, for most purposes, bridges do not get hot enough to weaken the steel noticeably. 

At this point, I looked at adjacent cities and noted that some cities, as I wrote above, showed a cooling or neutral trend.  Abilene, Texas, and Shreveport, Louisiana are two of those.  Abilene shows a slight cooling of 0.19 degrees C per century, while Shreveport shows a very slight cooling of 0.01 degrees C per century; essentially no change at all.  These cities are only approximately 250 miles apart, east to west.  They are at essentially the same latitude.  At the same time, St. Louis, Missouri, shows a warming of 1 degree C per century.  St. Louis is only approximately 300 miles north and a bit east of Shreveport, and approximately 400 miles from Abilene.  Clearly, something is amiss in the CO2-causes-global-warming science.  How could CO2 know to ignore Shreveport, but focus its beams on St. Louis?





Another example came to my attention: San Francisco, California, and its neighboring city, Sacramento.  These cities are separated by only about 50 miles, and nearly at the same latitude.  Yet, San Francisco had a warming of 1.5 degrees C per century, while Sacramento cooled by 0.3 degrees C per century. 

One possibility that explains the heating versus cooling or no trend is what I learned was called the Urban Heat Island effect, or UHI.   At first I thought this referred to the University of Hawaii until I finally found what the acronym spelled out.   UHI is a phenomenon that causes cities, or large urban areas, to be hotter during the day, and warmer during the night, compared to more rural areas nearby.  The UHI effect is small for small cities, but grows larger for large cities.  The UHI is due to several factors, including expanses of asphalt and concrete paving, stone or brick or glass-and-steel buildings, great consumption of electricity to heat or cool the buildings, industrial heat from factories and other heavy industries, and large numbers of cars, trucks, buses, and airplanes that consume great quantities of fossil fuel. 

But even UHI has problems.  For example, Meridian, Mississippi is a small town, and it is warming at the same rate as the large Texas city of San Antonio.  Both show a modest warming of 0.26 degrees C per century.  How can that be, if UHI is important?

Other small cities show substantial warming, such as Helena, Montana and Duluth, Minnesota, at 2 degrees C per century.  Duluth's population, around 80,000 people today, hasn't changed much since 1930; it reached 107,000 in 1960 but has been decreasing since then.  Helena has grown from about 12,000 in 1910 to 28,000 in 2010.

I want to turn briefly to the amazing small town of Eureka, California.  I have never been there, but it is on the coast in northern California between San Francisco and the Oregon border.   When the winter Olympics are held near 2075, Eureka should put in a bid as the host city.  It will soon be covered in snow year-round, and may have a localized ice age if the present cooling trend does not reverse.   Eureka has, starting in about 1990, had a cooling trend of approximately 15 degrees C per century.   Its average temperature currently is about 10 degrees C, so in 65 years the average temperature will be zero C.   One can only wonder why CO2 has ignored the small town of Eureka.  If any town needed some global warming, it would be Eureka. 



Finally, my attention was turned to a published study by Dr. James Goodridge, the former state climatologist for California, now retired.  His work showed that California’s counties could be grouped in three groups according to population, and the average temperature trend for each group computed.  He found that counties with large populations showed a distinct increasing temperature over 80 years, while those with small populations showed essentially no warming at all.  The mid-sized counties showed an intermediate amount of warming.   One must seriously question how CO2 did that, in a state as geographically large and diverse as California.  The large population counties are typically on the coast, with the cities of San Diego, Los Angeles, and San Francisco.  Small population counties are all across the state, including on the remaining coastal areas.    It is highly unlikely that CO2 is smart enough to pick and choose which counties in California will receive its warming beams, and which counties will be ignored by CO2.

  

To summarize the journey to this point, then, I found that scientists claim the earth is warming at an alarming rate, but there was a previous warming of equal magnitude and duration (1910-1940) during a period when atmospheric CO2 was at low concentration.  Also, the earth stopped warming and cooled a bit from 1940 to 1975, then started the warming again.  For CO2 to cause a warming, then a cooling, then a warming again is impossible and violates the fundamentals not only of physics but of process control.  Finally, CO2 ignores completely some cities in the USA, indeed entire counties in California, while warming adjacent cities and counties with large populations at an alarming rate.   CO2 is a simple molecule with one carbon atom and two oxygen atoms, and cannot possibly be that smart.

All the above was more than enough to convince me that the threat of man-made global warming is false; it is a hollow threat, and has zero substance.  Yet, if one reads the policy summaries and scientific studies, the premise is that CO2 causes warming and more CO2 causes more and faster warming.  All else follows from that failed premise.

Still, there is more to the story.  I want to describe what I found when I looked at the temperature record itself, the one that shows a warming from 1910 to 1940, a cooling from then until 1975, then a warming again until about 2000.   To preface this, it is important to know that in engineering data collection and analysis, indeed in any scientific data collection and analysis, it is only rarely appropriate to go back and change one’s data.  It is extremely inappropriate to change one’s data over and over again.   That requires some explanation.

The question is, how accurate is any data?  The data here, for climate purposes, and for whether the Earth is warming or cooling, is temperature data.  Temperature data can be obtained very accurately and very precisely with modern technology.  Not to get too technical with this, but accuracy and precision are not the same thing.  Accuracy can be thought of as how close the measurement is to the truth, while precision is a measure of how many decimal places in the measurement are believable.  In the early days of thermometers, it was difficult to calibrate them and also difficult to obtain a reading within half a degree.  Thus, the thermometer may have read 80 degrees on a fine sunny day but, due to mis-calibration, the actual temperature was only 78 degrees.  What is not known in many cases is when a thermometer broke and was replaced.  It is also not known if the replacement thermometer was calibrated to read the same as the earlier one.  Finally, if different observers read the thermometer, one may have judged the reading to be 70.5 degrees, while another would read it as 70 degrees.  These seem like small differences, and they are.  However, the entire warming over the past century was said to be only 0.7 degrees C, or roughly 1.2 degrees F.

My attention was called to some amazing work by E. M. (Michael) Smith, who runs a blog titled chiefio.wordpress.com.   The pertinent portions of his blog entries are known as The March of the Thermometers.  Michael is rather a whizz at computer programming and data analysis.  He accessed the publicly-available massive data and computer code used at NASA by Dr. James Hansen, known as the GISS code.  I believe GISS stands for Goddard Institute for Space Studies.    Michael unraveled the code, and wrote of his findings in several postings.  The key findings were that the code re-writes the past data each time it is run.  Also, the code makes questionable choices in how missing data is treated, and how discontinuous data is spliced together.   Another and, to me, most important finding was that Hansen deleted major portions of the temperature measurement stations in recent years.  That does not appear to be random but perhaps (likely?) was chosen in a way to show much more warming in recent years.   In effect, the temperature trend that results from NASA GISS is false.    I highly recommend that anyone who is curious about the temperature history of the last 120 years or so read what E.M. Smith wrote about it.   The past data is not only changed, it is changed frequently.   If missing data is discovered, the computer code simply reaches out to an adjacent station that can be 1200 kilometers distant (about 700 miles!!!) and uses that data. 

Then, the entire system spits out a global average temperature based on anomalies from thousands of measuring stations around the world.   Anomalies are another area for creating great mischief.   The problem lies in having some cities in cold locations and others in warmer locations.   What climate scientists do is assign each month an average temperature, based on some pre-determined base period of about 30 years.  Some use 50 years, though, for reasons not clear to me.  Further, the base period is not fixed; it is updated every ten years or so.  Again, moving targets with constantly-adjusted data.   It reminds me of the ancient three-shell game with the pea, where the mark tries to keep his eye on the shell that has the pea under it while the con-man shuffles them all around.
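
For readers unfamiliar with the anomaly method, here is a bare-bones sketch in Python, with hypothetical numbers: each reading is expressed as a departure from that month's average over a chosen base period, and changing the base period shifts every anomaly up or down.

# Hypothetical July average temperatures (degrees C) for one station, 1951-2010,
# with a small built-in warming trend.  In the real data sets, every month of
# every station gets this treatment.
july_temps = {year: 22.0 + 0.01 * (year - 1951) for year in range(1951, 2011)}

def july_anomalies(base_start, base_end):
    """Anomaly = reading minus the July mean over the chosen base period."""
    base_mean = sum(july_temps[y] for y in range(base_start, base_end + 1)) / (base_end - base_start + 1)
    return {y: t - base_mean for y, t in july_temps.items()}

a_6190 = july_anomalies(1961, 1990)   # one common 30-year base period
a_8110 = july_anomalies(1981, 2010)   # a later base period

print(f"2005 anomaly against the 1961-1990 base: {a_6190[2005]:+.2f} C")
print(f"2005 anomaly against the 1981-2010 base: {a_8110[2005]:+.2f} C")

The underlying trend does not change with the base period, but the anomaly numbers everyone quotes move every time the base period is updated.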

The use of anomalies supposedly allows one to merge or blend temperature trends together without concern over whether the average temperature is 5 degrees (Alaska) or 25 degrees (Bora Bora).   Still, it is quite disconcerting to see yet another opportunity for data manipulation.    A better method, in my view, is to obtain the temperature trend, per decade, for each station.  If a station showed a warming trend of 0.1 degree per decade, then that 0.1 goes into the averaging pot.  There would be no anomalies, no base periods, no changing base periods every 10 years, just a simple, one-time calculation of the decadal trend.   That decadal trend would then be golden and not subject to change.
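
A minimal sketch of what I have in mind, in Python with hypothetical station records: fit a least-squares line to each station's record once, keep only the slope expressed in degrees per decade, and average the slopes.

def decadal_trend(years, temps):
    """Ordinary least-squares slope of temperature vs. year, in degrees per decade."""
    n = len(years)
    mean_y = sum(years) / n
    mean_t = sum(temps) / n
    num = sum((y - mean_y) * (t - mean_t) for y, t in zip(years, temps))
    den = sum((y - mean_y) ** 2 for y in years)
    return 10.0 * num / den   # slope per year, times 10 for per decade

# Hypothetical station records: annual mean temperatures in degrees C, 1981-2010
years = list(range(1981, 2011))
station_1 = [10.0 + 0.010 * (y - 1981) for y in years]   # a cold station, warming ~0.10 C/decade
station_2 = [25.0 - 0.005 * (y - 1981) for y in years]   # a warm station, cooling ~0.05 C/decade

trends = [decadal_trend(years, station_1), decadal_trend(years, station_2)]
print("station trends (C/decade):", [round(t, 2) for t in trends])
print(f"average trend (C/decade): {sum(trends) / len(trends):+.3f}")

The absolute temperature levels (10 degrees versus 25 degrees) drop out of the slope entirely, so there is nothing to re-baseline and nothing to adjust later.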

Another very disconcerting revelation was the excellent work by Anthony Watts, who was mentioned earlier in connection with his blog, Watts Up With That.   Anthony also performed a heroic task in assessing the vast majority of the USA’s climate measuring stations.  His assessment focused on how well or how poorly each station was situated, or sited, according to the existing guidelines.  For example, a well-sited station must be a certain distance from trees, buildings, and other structures, must be placed at the correct height over a grass area, and must not be subjected to artificial heating or cooling such as an air conditioning exhaust.   What Anthony found was appalling.  He wrote up his findings along with several co-authors and had a paper published in 2011 (summary here).  Many stations received the poorest rating, and only a few received the best.  Some were indeed mounted next to brick walls, on asphalt parking lots, next to air conditioner condensers, next to barbecue pits, on asphalt rooftops, or at airports where they are heated by massive runways and jet exhaust, among other unacceptable locations.  These are the sources of the temperature records for the USA, which are supposed to be the best and most accurate of any country in the world.   Where siting becomes important is in how the readings are affected over time, over a period of years.   A rural setting would likely show lower readings in the early years, but warmer and warmer readings as buildings go up, roads and parking lots are installed, and so on.  Thus, part of what Anthony did was determine how much of any demonstrated warming trend was due to siting changes.

There are a few more points, and I will finish.

First, sea levels are not rising and oceans are not getting hotter.   This alone disproves the entire CO2-induced global warming nonsense.  By the warmists’ belief, the oceans must grow warmer, and the sea levels must increase as CO2 increases.  Neither is happening. In fact, the opposite is happening.  NASA and NOAA’s own data show this quite clearly.  The chart below is from U. Colorado, and is based on NASA's satellites that measure sea level.   Note on the chart the dramatic decrease in trend starting in about 2005, and the sudden decrease in sea level in early 2010.





First (part B): Sierra Nevada snowpack and snow-water-equivalent (SWE) have not changed significantly in almost 100 years.  Dr. John Christy of the University of Alabama in Huntsville published a paper on this in 2010.  His data ended in 2009.  Since then, there have been near-record snowfalls in the Sierras.  His key graph is shown below, normalized to show deviation from the average.  In his paper, HL refers to a key snow measuring station, Huntington Lake.   The paper is at this link.




Second, a recent peer-reviewed paper from CERN shows that clouds and sunspots are related: the 20th century had high sunspot activity, while the Little Ice Age had few to zero sunspots.  Recently, in the last 4 years or so, our sun has again gone very, very quiet, and it is getting cooler.  The lack of sunspots came as a complete surprise to scientists all around the globe.  The relationship is that more sunspots correspond to a hotter climate.  The mechanism is that the sun’s magnetic field is immense, and grows larger and more intense as sunspot activity increases.  The magnetic field shields the Earth from Galactic Cosmic Rays, GCRs.  When sunspots are few, the shielding weakens and more GCRs hit the atmosphere, where they create cloud nucleation particles and more clouds form.  More clouds reflect more sunlight away from the Earth, and cooling occurs.   Once again, more evidence that the science is not settled.  Heck, they cannot even predict how many sunspots there will be, nor when they will occur.   The CERN paper was published only a few weeks ago.

Third, climate models cannot agree, and their projections do not match the satellite measurements.   This shows that the science is far from settled; when a model fails to match the measured data, the model is wrong and must be scrapped or improved if possible.  A very recent paper by Spencer and Braswell has caused an uproar in the climate community because it shows very clearly that the climate models are far off in their predictions.  The satellite data does not match the models.

Fourth, hurricanes are not growing more intense or more numerous.   In fact, hurricane energy is at a historic low for the entire satellite era.  Meanwhile, CO2 continues to increase.  Another busted prediction, showing their ideas are total nonsense.  The chart below, from Dr. Ryan Maue of Florida State University, shows the current status of the world's tropical cyclones measured as Accumulated Cyclone Energy from 1972 until today. The top line is the total for the world; the bottom line is for the Northern Hemisphere.   The total cyclone energy is back to what it was in the middle 1970s, and meanwhile CO2 continues to rise.
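
For reference, Accumulated Cyclone Energy (ACE) is a simple sum: take the maximum sustained wind, in knots, at each 6-hourly report while a storm is at tropical-storm strength or better, square it, add up the squares, and scale by 10^-4.  A short Python sketch with made-up wind reports:

def ace(six_hourly_winds_kt):
    """Accumulated Cyclone Energy: 1e-4 times the sum of squared maximum
    sustained winds (in knots), counting only the 6-hourly reports at
    tropical-storm strength or better (35 knots and up)."""
    return 1e-4 * sum(v ** 2 for v in six_hourly_winds_kt if v >= 35)

# Hypothetical storm: maximum sustained winds, in knots, every 6 hours
storm = [30, 35, 45, 60, 80, 95, 90, 70, 50, 40, 30]
print(f"ACE for this storm: {ace(storm):.1f} (units of 10^4 kt^2)")

A basin's, hemisphere's, or the globe's ACE is just this sum taken over every storm; Dr. Maue's familiar chart, if I recall correctly, plots it as a running 24-month total.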



Fifth, and finally, the only prediction the warmists have correctly made is the continual reduction in the Arctic ice cap.  However, they have the cause and effect completely wrong.  The warmists maintain that a shrinking ice cap is strong evidence that the Arctic area is warming, and that the warming is due to the heat rays beamed down by CO2 in the atmosphere.   In reality, ice acts as an insulator and prevents heat from being released from the ocean into the night sky via radiation.  Ice acts in a similar way on lakes: it keeps a lake from freezing solid unless the lake is very shallow.  The growing and retreating Arctic ice acts as a negative feedback on the ocean’s heat content.  When the oceans are warm, the ice begins to melt at the edges.  There is then more open water that loses heat via radiation.  The ice extent is usually at a minimum around mid-September, which allows great amounts of heat loss during the long polar nights.  The oceans then cool, which eventually cools the air and allows more ice to form in future years.  The system then oscillates between more ice and less ice, with the ocean temperature and heat content also oscillating, but slightly out of phase.

In conclusion, if anyone still believes that CO2 does what scientists claim it does, I suggest you think about that the next time you are at a campfire, or near a candle, or any other fairly stable heat source.  Move toward it, then move back.  Also, find a nice masonry wall that has ample sunshine on it.  Just after dusk, when the sun is no longer shining on the wall, place your hand on the wall and feel its warmth.  Then, move slowly away from the wall and see how long you can continue to feel the warmth.  Think about that little CO2 molecule, having to also feel that warmth, get all excited, absorb the heat, and then re-radiate the heat back out again.    Remember that scientists insist that the Earth warmed from 1910 to 1940 - yet CO2 was very, very low.  

Also, have a look at Anthony's blog and E.M. Smith's blog.   Think about this: if the science is settled and we must act now or lose the Earth's future to a hell of warming, rising oceans, monster hurricanes and all the rest, why did the CERN experiment show that clouds are far more important than CO2?  Why do the satellite temperature measurements show the models' predictions are all wrong?  Why has nothing ever panned out for the climate warmists?  The only thing they can point to is the declining ice in the Arctic, but as I discussed above, they have that completely wrong. 

Finally, have a look at the temperature graphs of the USA's cities on SLB.  See for yourself how many, many cities show zero warming or a slight cooling.  Then ask yourself: how can that be?  How did CO2 get so smart that it can selectively ignore some cities?

CO2 is innocent.  It always has been, and always will be. 

Roger E. Sowell, Esq.  

Monday, July 25, 2011

CARB Cuts AB 32 by Half

It's a momentous time in California, with the Air Resources Board (ARB or CARB) just announcing reduced targets for CO2 emissions under the Global Warming Solutions Act of 2006, aka AB 32 (see this link). The short version is that ARB has cut the required reductions approximately in half. The reasons cited are 1) reduced economic activity in California, and 2) federal laws that were not in place in 2008 now require similar reductions, so counting those under AB 32 would be double-dipping.

Some background, and the numbers: California's AB 32 requires the state to reduce "greenhouse gas" emissions to 427 million tonnes per year of CO2-equivalent by the year 2020. The CO2-equivalent (CO2-e) allows non-CO2 gases to be converted and counted as if they were CO2. The 427 million tonnes is what ARB calculated was emitted during 1990 - and it's really just an educated guess. No one really knows how much was emitted in 1990. Before the recent announcement, ARB had estimated that in 2020, California would emit approximately 600 million tonnes CO2-e without AB 32. This is the BAU (business as usual) case. The difference, 600 minus 427, is 173 million tonnes, which must be eliminated through a long list of items that make up the AB 32 Scoping Plan.

I am not trying to take any credit for the reduction that ARB has just announced; however, in December 2008 I did write a letter to ARB's chairperson stating that it was not accurate for AB 32 to claim credit for federal laws already on the books (see this link for the letter). Those reductions would occur even without AB 32. Of course, I received no reply to my letter. One particular item I wrote about was reduced emissions due to more-efficient cars, the requirements known in California as the Pavley standards. Federal law recently adopted most of the Pavley standards.

ARB's new target for reductions by 2020 is about 80 million tonnes CO2-e, roughly half of the previous target. ARB states that the deep and prolonged recession has already delivered some of the CO2-e reductions.

We can all stay tuned, as California's economy worsens still more. At the current rate of collapse, the target 427 million tonnes CO2-e will be met entirely by economic recession in about, let's see, four more years. Call it 2016.

Roger E. Sowell, Esq.

Chinese Nuclear Power Plant Costs

This post was prompted by something I've seen written many times on various blogs and news reports for the past couple of years: that new nuclear power plants are NOT expensive. In fact, they say, China is building dozens of them for about $2 billion (US dollars) per reactor, where the reactor produces 1000 MW. I have my doubts about the $2 billion per reactor (which is $2,000 per kW), as those who read and follow SLB are probably aware. In the USA, some recently-published numbers for proposed new nuclear power plant projects are more like $8,000 per kW. As an example, the now-defunct South Texas Nuclear Power Plant Expansion was to have two reactors at 1100 MW each, with a published cost estimate of $17 billion. That works out to $7,730 per kW, but it also ignores the inevitable cost over-runs and the extra interest costs for long delays. I would be surprised if that STNP Expansion could have been built for less than $25 billion, or more than $11,000 per kW.

Therefore, I was quite interested to read a news item today regarding a large new nuclear power plant under construction in southern China. The plant will have six reactors, at 1000 MW each. Total cost should be $12 billion, using the $2,000 per kW figure I've seen bandied about. Yet CLP Holdings, Ltd., purchased a 17 percent interest in the plant for $11 billion. CLP is a utility company in Hong Kong. CLP's 17 percent represents roughly the output of one-sixth of the entire plant, or one reactor. If 17 percent of the plant is worth $11 billion, then the entire plant is worth approximately $64 billion. That works out to a bit more than $10,000 per kW. That is much more in line with what new nuclear plants are projected to cost in the USA.
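
The arithmetic behind those per-kW figures is simple enough to lay out explicitly in Python; the inputs below are the published figures quoted above, and the $25 billion over-run case is my own guess, as stated.

# South Texas expansion: two reactors at 1,100 MW each, published estimate $17 billion
stnp_per_kw = 17e9 / (2 * 1_100_000)          # about $7,700 per kW
stnp_overrun_per_kw = 25e9 / (2 * 1_100_000)  # my $25 billion guess: about $11,400 per kW

# Chinese plant: six reactors at 1,000 MW each; CLP paid $11 billion for 17 percent
implied_plant_value = 11e9 / 0.17             # about $64.7 billion for the whole plant
china_per_kw = implied_plant_value / (6 * 1_000_000)

print(f"South Texas, published estimate:  ${stnp_per_kw:,.0f} per kW")
print(f"South Texas, with over-runs:      ${stnp_overrun_per_kw:,.0f} per kW")
print(f"China, implied by the CLP stake:  ${china_per_kw:,.0f} per kW")

Either way, the implied Chinese cost per kW lands around the same figure as the pessimistic US estimates, not the $2,000 per kW so often claimed.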

Roger E. Sowell, Esq.

Saturday, July 23, 2011

Nuclear Plants Delayed Again

More news this week from the dismal world of building a new nuclear power plant. As if the AREVA-designed project in Finland did not have enough troubles, the same design is now having serious problems and delays in France, at Flamanville (on the Normandy coast, near the English Channel). See this link for one of several stories.

New nuclear power plants are routinely plagued by costly delays, and cost over-runs. The recent news states a two-year delay, from 2014 to 2016, and a cost over-run of 1 billion Euros (from 5 billion up to 6 billion). As always with these monstrosities, it is very likely that neither target will be met. Startup will likely be later than 2016, and the final cost much more. How much more, it is difficult to say.

In a perfect world, governments would require each nuclear power plant to be a self-contained business entity, responsible for its own profits and losses. If this were the case, the true costs of nuclear power would be transparent and available for all to see. Would the new reactor in Finland sell power for 3 cents per kWh, as so many pro-nuclear advocates insist is the true cost of nuclear power? That is very unlikely, since approximately 25 to 30 cents per kWh is required just to pay off the capital costs and cover the operating costs. How about the new reactor at Flamanville? The same thing holds true.

In the USA, the South Texas Nuclear Project Expansion has been scrapped, which is a shame, actually. It would have been very instructive to watch that project proceed, with its massive cost over-runs and lengthy schedule delays, so that the true cost of its power, at least 30 cents per kWh, would be plain for all to see. In a world awash in natural gas at $4 per million Btu, with technology readily available to build efficient Combined Cycle Gas Turbine power plants that convert nearly 60 percent of the natural gas energy into electricity, 30 cents per kWh puts nuclear power plants out of the running.
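
To put rough numbers on the gas-fired side, here is a back-of-the-envelope sketch in Python, assuming a combined-cycle plant at roughly 58 percent efficiency and gas at $4 per million Btu; it covers fuel only, and capital plus operating costs for a CCGT add a few cents more, nowhere near 30 cents.

BTU_PER_KWH = 3412            # energy content of one kWh, in Btu
efficiency = 0.58             # modern combined-cycle plant, roughly 58 to 60 percent
gas_price = 4.00              # dollars per million Btu

heat_rate = BTU_PER_KWH / efficiency              # Btu of gas burned per kWh generated
fuel_cost_per_kwh = heat_rate / 1e6 * gas_price   # dollars per kWh

print(f"heat rate: {heat_rate:,.0f} Btu per kWh")
print(f"fuel cost: {fuel_cost_per_kwh * 100:.1f} cents per kWh")

Roughly 2 to 3 cents per kWh for fuel, against 25 to 30 cents just to carry a new nuclear plant.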

Still, there are a couple of other candidates for demonstrating the nuttiness of new nuclear power plants in the USA, in particular the proposed Vogtle plant. Perhaps it will become the new poster boy for why the USA cannot afford any more nuclear power plants, as it inflicts high utility bills on the good customers in the South.

Roger E. Sowell, Esq.

California's AB 32 Jobs Still Absent

From my earlier posts on SLB regarding AB 32, California's Global Warming Solutions Act of 2006 (Nunez), it is clear that I hold a dim view of the law, the necessity for the law, the so-called scientific basis for the law, and its effect on the state's economy. This post is an update on the last aspect, the effect on the state's economy. For some perspective, AB 32 has a multitude of components, with more than 70 separate line items in the Scoping Plan. It is generally stated by the media, and even by some within the Air Resources Board (ARB or CARB), that AB 32 will not be implemented until January 2012. That is misleading at best, and an outright false statement at worst. AB 32 has a number of line items already in place; in fact, there are "Early Action Items" listed prominently on ARB's website. Other line items will be in force next January, while one rather large piece has been delayed until at least 2013. The big delay is for Cap and Trade. More on that a bit later.

AB 32 was (or is) supposed to change California's emissions of CO2 and a few other so-called "greenhouse gases" by reducing those emissions according to a timetable. The initial reduction and time-target was down to 1990 levels by 2020. This means that, on an absolute tons emitted per year basis, by 2020 California would emit the same amount as was emitted in 1990. In practice, that requires approximately a 30 percent reduction compared to the "business-as-usual" case. CARB uses the abbreviation BAU for business-as-usual. An additional target was then set by the Governor to 80 percent below the 1990 level by the year 2050. Stated another way, California in 2050 can only emit 20 percent of what it emitted in 1990. After allowing for economic growth and population growth, the "80 by 50" requirement actually requires more than a 90 percent reduction in CO2 emissions compared to the BAU for 2050.
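
The percentages follow directly from the tonnage figures. Here is the arithmetic in Python; the 2020 figures are CARB's own, while the 2050 business-as-usual number is purely an illustrative assumption of mine, since the exact BAU for 2050 depends on growth projections.

emissions_1990 = 427.0   # million tonnes CO2-e, CARB's estimate for 1990
bau_2020 = 600.0         # million tonnes CO2-e, CARB's business-as-usual estimate for 2020

cut_2020 = 1 - emissions_1990 / bau_2020
print(f"2020: returning to 1990 levels means {cut_2020:.0%} below business-as-usual")

target_2050 = 0.20 * emissions_1990   # the "80 by 50" target: 20 percent of the 1990 level
bau_2050 = 900.0                      # illustrative assumption only, not a CARB figure
cut_2050 = 1 - target_2050 / bau_2050
print(f"2050: a {target_2050:.0f} million tonne cap is {cut_2050:.0%} below an assumed BAU of {bau_2050:.0f}")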

As I've written elsewhere on SLB, expecting to achieve this is quite absurd. The "80 by 50" requirement is absolutely a death-knell for California's economy. No economy in modern times (or ancient times, for that matter) has ever demonstrated an ability to conduct commerce, transportation, supply reliable and affordable energy (i.e. electricity), produce agricultural crops, produce and deliver clean water, collect and dispose of waste, and all the other aspects of a large and diverse economy with such a low CO2 output. None. But, CARB and the California government have the utmost faith that it will be done. They have some vague notions that fossil fuel-fired power plants will have the CO2 captured and sequestered, that cars, trucks, and buses will run just fine on bio-fuels or hydrogen or electricity, and a great portion of electricity supply will be from renewable sources such as wind and solar. They have grand plans for each citizen to conserve and reduce electric power consumption by some vague means, and by a "smart grid" that will reduce power consumption even more.

So much for the basics.

All of AB 32's requirements are supposed to be technically feasible, and are touted as creating jobs for California's economy. With January 2012 less than six months away, it is time to look for those jobs. Supposedly, California companies are producing bio-fuels, for example. Solar panels are another big requirement, along with the jobs to manufacture and install them. Smart grid components, and the installers for them, are another item. The list of AB 32 items and the jobs they are supposed to create is long. Yet the most ridiculous of the jobs-related aspects is what ARB stated in the beginning: each Californian will have approximately $250 per year of extra, disposable income as a result of AB 32. That works out to approximately $5 per week, which is enough to buy a cup of premium coffee each week. The additional sales of coffee will create great numbers of jobs in the retail sector.

The reality is that California leads the entire nation in unemployment rate, with the sole exception of Nevada. The recent figures for June 2011 are now public and show California at 11.8 percent (Nevada is at 12.4 percent). So the question remains unanswered: where are the AB 32-related jobs in California? Only six months from now, nearly all of the 70-plus line items are to be in place. Will millions of new jobs magically appear on January 1, 2012? Will coffee baristas be in short supply, so that most of the just-graduated teens can find employment? Somehow, that seems rather unlikely.

Roger E. Sowell, Esq.