Subtitle: Renewable Wind and Solar Reach 31 Percent - Grid Is Fine
From time to time, and actually quite commonly, certain anti-renewable activists write that the electrical power grid will cease to function, or will incur outrageous costs to operate, if "unreliable" renewable energy sources exceed a given threshold. Typically, they agonize over the 30 percent level of combined wind and solar power. Yet, as the chart at right shows, the California grid manages quite well with wind and solar at more than 30 percent. A recent day (Christmas Day, 12/25/2015) was sunny and breezy, with the result that for several hours that day, wind and solar combined supplied 30 to 31 percent of the total power to the state's grid. The data is from CAISO's website, see link. ("Other Ren," or other renewables, is the category that includes small hydroelectric, geothermal, bio-gas, and bio-mass sources of electrical production.)
CAISO is the California Independent System Operator, which operates the main transmission system for electricity in California. It is a big grid, reaching from the Oregon border all the way to Mexico, and it powers the country's most populous state, with a bit more than 38 million people. Grid demand today reached a low of 21,000 MW at about 3 a.m. and peaked at just more than 30,000 MW at 6 p.m.; the summer maximum is approximately 50,000 MW. The grid is currently supplied by a mix of generation technologies: nuclear (two reactors that provide 2,100 MW, when they are running), natural gas (a combination of steam plants, combined-cycle plants, and a few peaking gas turbines), hydroelectric (from in-state dams), imported power from other states (a mix of coal-based power from Utah, nuclear from Arizona, hydroelectric from Hoover Dam on the Nevada/Arizona border, and wind energy from northern states), and finally the renewable energy sources. These include wind turbines, solar (both PV and thermal), geothermal, bio-gas, bio-mass, and small hydroelectric. It is notable that, for California, only the wind and solar are intermittent; the other renewables are remarkably steady in output.
Therefore, it is quite obvious that California, despite being completely backwards in many ways, has managed to integrate wind and solar power into the grid at the 30 percent penetration level with few, if any, adverse effects.
It is notable that other grids, particularly the German grid, seem to have trouble integrating their renewables. Perhaps that is a function of the wind turbines, or the grid design, or other issues. In any case, blanket statements that renewables ruin a grid are simply not true.
In fact, the California grid will soon have even more wind and solar energy as inputs, as new projects continue to be built and placed in operation. I suspect that part of the California success is not having too much nuclear power on the grid, with its unyielding requirement to run at baseload (flat out at all times); instead, California has far more flexible gas-fired power plants that can slowly increase and decrease their output as demand requires.
This is, indeed, the model for future electrical grids (see link). As coal-fired baseload plants are retired (due to environmental costs and old age) and coal availability wanes, and as nuclear plants are closed due to old age and bad economics, future grids will be supplied by both natural gas and the economic forms of renewables. In the US, the renewables will be wind for the most part, with solar only in the sunniest parts of the far West and Southwest (California, Arizona, New Mexico, and parts of Nevada). However, on-shore wind turbines are being built rapidly through the country's center section from Texas to North Dakota (the great wind corridor), and the first off-shore wind turbines are now under construction.
The evidence is clear: wind and solar do not crash the grid. Not at 30 percent, and not in California. As the wise-cracking pundits might say, Your Mileage May Vary. While not everything that starts in California is worth exporting to the world, this case is likely an exception. The lesson is pretty clear: get rid of the coal and nuclear plants, install natural gas power plants, and install wind and solar. Grid-scale storage is in the works, too.
Update 1: Note that California policy makers, in their vast "wisdom," have established a renewables target of 33 percent, averaged over a one-year period, to be accomplished by 2020. The California Public Utilities Commission has this to say about it (the RPS, or Renewables Portfolio Standard) on its website:
"The RPS program requires investor-owned utilities (IOUs), electric service providers, and community choice aggregators to increase procurement from eligible renewable energy resources to 33% of total procurement by 2020."
This has several implications. One is that renewable resources are greater during some parts of the year and lesser in others (the sun is stronger and shines longer in the summer, and the wind typically blows strongest in April-May). Therefore, to achieve a 33 percent overall target, more than 33 percent renewables must be achieved on many days of the year.
Note, though, that the total renewables in the RPS program include the smaller renewables in the chart above, the 8 percent "Other Ren." That means that, for the day shown, total renewables reached 39 percent during the few mid-day hours from about 11 a.m. to 4 p.m. Yet even 39 percent is not enough, for that is only an hourly share, not the daily average. At mid-day, renewables will need to reach approximately 50 percent or even more for the state's annual average to reach 33 percent.
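As a rough illustration of that arithmetic, the short sketch below (plain Python, using hypothetical hourly shares rather than actual CAISO data) shows how a mid-day renewable share near 50 percent, combined with a lower share the rest of the day, works out to roughly a 33 percent average.

# Hypothetical daily profile (not CAISO data): renewables near 50% for the
# sunny, breezy mid-day hours and a lower share the rest of the day.
peak_hours, peak_share = 8, 0.50          # assumed mid-day hours and renewable share
offpeak_hours, offpeak_share = 16, 0.245  # assumed share for the remaining hours

# Simple time-weighted average; demand weighting is ignored for clarity.
daily_average = (peak_hours * peak_share + offpeak_hours * offpeak_share) / 24.0
print(f"Approximate daily (and annual) renewable share: {daily_average:.1%}")  # ~33%

The exact mid-day share required depends on how many hours the sun and wind actually deliver, and on the demand profile, which this simple time-weighted average ignores.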
Let's hope Germany, and the other countries with grids that are struggling, are paying close attention. -- end update 1.
Roger E. Sowell, Esq.
Marina del Rey, California
copyright (c) 2015 all rights reserved
Wednesday, December 30, 2015
Saturday, December 26, 2015
Not Rearden Metal - But Very Close To It
Subtitle: Magnesium with Distributed Silicon Carbide Nanoparticles
The headline from the UCLA press release reads: (see link)
"UCLA researchers create exceptionally strong and lightweight new metal"
"Magnesium infused with dense silicon carbide nanoparticles could be used for airplanes, cars, mobile electronics and more" -- end headline
Flashback to the 1957 novel by Ayn Rand, Atlas Shrugged, in which one of the main characters, Hank Rearden, invents a miracle metal that is stronger than steel and weighs much less. Here, UCLA has done something very close to that.
The magnesium metal nanocomposite can be used to good advantage where strength is required but weight is a drawback. Likely applications include replacing steel in cars and other vehicles, aircraft frames, spacecraft, military uses such as missiles that fly farther, mobile electronics, and medical devices. Bridges and rails for railroads, though, are not mentioned in the press release, although those are primary uses for Rearden Metal in Rand's novel.
Magnesium is a very common metal on Earth, so running out is not a problem. Magnesium is a significant component of ordinary seawater, present as dissolved magnesium chloride, MgCl2. Roughly, one ton (2,000 pounds) of typical seawater contains 2.4 pounds of magnesium.
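As a quick unit check on that figure, the sketch below uses an approximate magnesium mass fraction for seawater of about 0.12 percent; that concentration is a rough literature value I am assuming, not a number from the Dow material quoted next.

# Rough unit check: typical seawater is on the order of 0.12% magnesium by mass.
seawater_lb = 2000.0        # one short ton of seawater, in pounds
mg_mass_fraction = 0.0012   # approximate Mg mass fraction (rough literature value)
mg_lb = seawater_lb * mg_mass_fraction
print(f"Magnesium per ton of seawater: about {mg_lb:.1f} lb")  # about 2.4 lb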
Dow Chemical Company manufactures magnesium from seawater (Gulf of Mexico), as stated on their website:
"Dow first came to Texas in 1940, building a plant in Freeport to extract magnesium from seawater."
The Dow process precipitates magnesium from the seawater by addition of sodium hydroxide, NaOH, to produce Mg(OH)2. The magnesium is then converted to metallic form. Dow also has substantial chlorine-caustic production in Freeport, where the caustic is NaOH.
The breakthrough was by Lian-Yi Chen, PhD, who conducted the research as a postdoctoral scholar in Professor Xiaochun Li’s Scifacturing Laboratory at UCLA. The process is a method to adequately disperse the nanoparticles in the magnesium. Previous attempts resulted in the nanoparticles clumping together.
This is one to watch.
Roger E. Sowell, Esq.
Marina del Rey, California
copyright (c) 2015 by Roger Sowell all rights reserved
Labels:
magnesium,
nanoparticles,
Rearden Metal,
UCLA
Tuesday, December 22, 2015
Energy Supply in Post-Coal America
Subtitle: What Will Replace Coal in 20 Years
(Note, see Update below)
One of the several themes on SLB is energy supply; from time to time, articles on the Grand Game appear in which various aspects of US national and international energy are discussed. As time permits, I conduct personal research into those various aspects. In general, energy supply is categorized as coal, natural gas, petroleum, hydroelectric, nuclear, wind, solar, geothermal, tidal, wave, river and ocean currents, and bio-fuels such as ethanol, bio-gas, and bio-diesel. There are a few others, too, such as municipal solid waste (MSW), waste fuel as cogeneration feed, waste treatment plant sludge conversion to methane, ocean thermal energy conversion (OTEC), and direct osmosis using fresh river water and the salinity gradient into ocean water. (Update: and algae-to-oil as another bio-fuel.)
Many of these have several variations, so that the 20 categories listed above easily have 50 or more distinct types. Each has advantages, disadvantages, environmental impacts, economics, resource and land-use requirements, grid impacts, and other aspects. As an example of different grid-scale electric generating power plants, a recent study (cited in several SLB articles) by the California Energy Commission in 2009 lists 21 different technologies including baseload, peaking, and intermittent sources. (see link)
An earlier article on SLB (May, 2014) had the following, with respect to the world running out of coal in the 50 to 60 year time-frame: (see link to "Coal Exhaustion Looms - Renewable Energy to the Rescue")
". . . coal, that mainstay of electric power generation world-wide, is in shorter supply than I had remembered. In fact, several reputable sources now state that world reserves of coal will be exhausted in roughly 60 to 70 years - and that is if no increase in current consumption occurs. Yet, growing economies in several countries are increasing their coal consumption year-over-year. China and India are on that list. It is entirely conceivable that coal will run out in less than 50 to 60 years."
That statement is a bit vague on which reserves of coal are included; it would be better stated as the world's economically recoverable reserves of coal being exhausted in roughly 60 to 70 years.
However, the US domestic coal supply and demand picture is quite a bit gloomier: the coal will run out in approximately 20 years (see link to the USGS's 2009 National Coal Resource Assessment Overview). That is, by 2035, every coal-fired power plant in the US will be out of fuel. With coal-fired power plants providing approximately 40 percent of US electricity today (see pie chart at right), and only 20 years in which to identify and build replacement power supplies, perhaps it is no wonder that the current federal administration is pushing coal to the sidelines and assisting renewables. The USGS showed economically recoverable reserves of a bit more than 28 billion tons in 2009, with 1.1 billion tons of annual production. Today, six years later, the reserves are at approximately 21 billion tons and production has declined to just under 1 billion tons per year, leaving 21 divided by 1, or approximately 20 years of coal remaining.
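The reserves-to-production arithmetic behind that estimate is simple enough to sketch out; the figures below are the ones quoted in the paragraph above, not new data.

# Reserves-to-production arithmetic using the figures quoted above (USGS 2009).
reserves_2009 = 28.0        # billion tons, economically recoverable in 2009
production_2009 = 1.1       # billion tons per year in 2009
years_elapsed = 6           # 2009 to 2015

reserves_now = reserves_2009 - production_2009 * years_elapsed  # ~21 billion tons
production_now = 1.0        # just under 1 billion tons per year today
years_remaining = reserves_now / production_now
print(f"Remaining reserves: about {reserves_now:.0f} billion tons")
print(f"Years of coal left at the current rate: about {years_remaining:.0f}")  # roughly 20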
Domestic coal power could be replaced via several alternatives: importing coal from overseas, increasing construction of natural gas-fired plants, building 200 nuclear plants, or increasing renewable production. Of course, a crash program to reduce electricity use would also play a role, but not a very large one; any increased efficiencies would be offset by increased economic growth. Another possibility is in-situ coal gasification, with gas collection, cleanup, and distribution to power plants.
Importing Coal
Other countries are also running out of coal and are importing coal to run their power plants. India, China, Korea, and Japan are a few examples. Importing coal requires port and rail infrastructure to unload the ships, store the coal on shore, then load the coal into rail cars for delivery to the power plants. A major concern is security of energy supply with coal ships shuttling over the oceans.
Build Natural Gas Power Plants
The US has abundant natural gas due to advances in precision directional drilling and hydraulic fracturing in gas-bearing rock formations. The gas price is low, at approximately $4 per million Btu. Combined-cycle gas turbine (CCGT) power plants are very efficient, at approximately 60 percent, and use very little cooling water compared to coal plants and especially compared to nuclear plants. CCGT plants can also be built rapidly; they are a mature technology with predictable startup dates and finished costs, and they have the desirable operating characteristic of either load-following or baseload operation.
Build 200 Nuclear Plants
Another option to replace coal power is to build approximately 200 nuclear power plants using the Pressurized Water Reactor (PWR) design at 1,000 MW each. However, with the plants running at less than 100 percent capacity factor, it is likely that at least 220 nuclear plants would be required. But getting 220 nuclear plants through the regulatory approval process, obtaining licenses to construct, and building the plants so that all start up within the 20-year deadline is essentially impossible. Recent experience in the US with the Vogtle and V.C. Summer nuclear plant expansions indicates that a new reactor requires 8 to 10 years to construct.
Finding locations for the plants, and finding adequate cooling water for that many plants, would also be essentially impossible. Nuclear plants consume approximately four times as much water per kWh generated as the CCGT plants described above (see link to "Nuclear plants use far more fresh water than other power plants").
In addition, if the country were to "go nuclear" to replace coal, it would be necessary to replace the existing fleet of approximately 100 aging, operating nuclear plants, as almost all of them will be beyond their service lives of 40 to 60 years with the passage of another 20 years' time. The build requirement is then 320 new PWR nuclear power plants.
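For readers who want to see where the 200, 220, and 320 plant counts come from, here is a rough sketch of the arithmetic; the 90 percent capacity factor is my illustrative assumption, so the result lands near, but not exactly on, the round numbers above.

# Sketch of the 200 / 220 / 320 plant counts (capacity factor is an assumption).
plants_at_full_output = 200          # 1,000 MW PWRs running flat out (from the text)
coal_generation_twh = plants_at_full_output * 1.0 * 8760 / 1000.0  # ~1,752 TWh/yr implied

capacity_factor = 0.90               # assumed realistic PWR capacity factor
plants_needed = coal_generation_twh * 1000.0 / (1.0 * 8760 * capacity_factor)
print(f"Plants needed at {capacity_factor:.0%} capacity factor: about {plants_needed:.0f}")  # ~220

existing_fleet_to_replace = 100      # aging plants retiring over the same 20 years (from the text)
print(f"Total build requirement: about {plants_needed + existing_fleet_to_replace:.0f}")      # ~320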
Finally, the price impact on consumers, whether residential, commercial, or industrial would be catastrophic from building that many nuclear power plants, as described in some detail (see link) in "Preposterous Power Pricing." Replacing coal power with nuclear power is simply not an option.
In-Situ Coal Gasification
A potential option, but one that has not shown any hope of economic practicality, is to convert the residual coal left in the existing mines into a viable form of synthesis-gas that can be brought to the surface and burned in power plants. The basis for this is that approximately one-half of a coal deposit remains in the ground after all the economically mine-able coal is produced. That figure varies from mine to mine. The concept is not new and has been the subject of some research over the decades. Even if gasification can be accomplished, a substantial hurdle exists to convey the low-Btu synthesis gas via pipeline to the power plants. New power plants would be required, or substantial modification to existing plants to accommodate the heating characteristics of the synthesis-gas.
Increase Renewables With Storage
Having exhausted the other avenues as impractical or hopelessly expensive (other than building CCGT plants), what is left is renewable energy systems. Noting that 15 of the 20 generating technologies listed above are renewable, there is substantial opportunity for competition among technologies. It is very likely that solar will be deployed where the resource is adequate, and some form of storage will accompany the solar plants.
Wind, however, will likely be the major player in replacing coal, along with CCGT. Wind plants require some form of storage to make the energy reliable; off-shore wind systems can use the submerged-sphere hydroelectric storage technology. There is plenty of wind offshore: the US Minerals Management Service estimated in 2009 that more than 900 gigawatts of potential wind capacity exists off the US coasts, half of it along the Atlantic seaboard. (900 GW is almost 10 times the installed capacity of all the nuclear power plants in the US.)
Conclusion
Unless some way to produce more coal from existing mines is discovered in the very near future, the US is headed to a fundamental change in the way the electric power grids are supplied. Coal, which has powered much of the country for more than 100 years, is about to run out. It appears that the current presidential administration is not emphasizing this fact, but has chosen the theme of Climate Change and Man-Made Global Warming due to Carbon Pollution as the vehicle to phase out coal-power and encourage renewable energy systems.
The most likely outcome will be a combination of natural gas-fired CCGT plants with wind turbines both onshore and offshore, and suitable ocean-based storage, to meet the electricity demands. It is little wonder, then, that Congress continues to renew the small incentives and subsidies for renewable energy systems. The time has come for the power in the sunshine, and the wind, to step up and be counted.
Meanwhile, the age of the nuclear power plant is essentially over. As described in many Truth About Nuclear Power articles on SLB and in many other places, the nuclear plants are far too expensive, take far too long to build, and have unacceptable risks of radiation releases, meltdowns, and catastrophic health hazards and environmental destruction.
The next 20 years will indeed be interesting to observe. The Grand Game in the US, as it relates to the electrical power grid, will be a fine subject to watch as all this plays out.
UPDATE 1 - Extending the 20-year deadline: Some calculations show that we have a bit more than 20 years, perhaps 40 years, if two things occur. One, no more coal-fired power plants are built, and we simply retire aging plants as scheduled over the next 20 years. Approximately one-half of all the coal-fired plants would normally be retired and shut down in a 20-year period, given a 40-year normal service life; that alone will extend the lifetime of the coal reserves, as less coal is produced each year. Two, in addition to not building new plants and retiring aging plants on schedule, a reasonable fraction of the remaining least-efficient plants are shut down and their output replaced as discussed above: CCGT plants and wind with storage.
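A rough check of the "perhaps 40 years" figure: if coal production ramps down roughly linearly to zero over 40 years as the existing fleet retires without replacement (my simplifying assumption), total consumption stays within the remaining reserves.

# Illustrative check of the "perhaps 40 years" figure, assuming coal production
# ramps down roughly linearly to zero over 40 years as plants retire unreplaced.
reserves = 21.0             # billion tons remaining (from the article above)
production_now = 1.0        # billion tons per year today
years = 40

# Average production over a linear ramp-down is half the starting rate.
total_consumed = 0.5 * production_now * years
print(f"Coal consumed over {years} years of ramp-down: about {total_consumed:.0f} billion tons")
print(f"Within the remaining reserves of about {reserves:.0f} billion tons: {total_consumed <= reserves}")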
That, then, is the key parameter to watch: no new coal-fired plants built in the next 20 years, and aging existing plants retired on schedule or a bit earlier. -- end update 1
Roger E. Sowell, Esq.
Marina del Rey, California
copyright (c) 2015 by Roger Sowell, all rights reserved
Labels:
coal shortage,
energy,
grand game,
nuclear power,
solar,
wind
Monday, December 21, 2015
Vogtle Nuclear Expansion Nears $21 Billion
Subtitle: Vogtle Nuclear Plants Cost More Than $10 Billion Each
"The cost of the new reactors, originally projected at $14 billion, is now (4Q 2015) close to $19 billion and might reach $21 billion, according to recent PSC filings.
Georgia Power executives dispute estimates that the costs could be as high as $21 billion, but there’s no question Vogtle has greatly exceeded its original projections.
The project is also running 39 months behind schedule with even more delays predicted. Each day’s delay in completion adds an estimated $2 million to the total cost.
These cost increases are bad news for Georgia Power’s customers and also for those who get their electricity from EMCs and municipal electric companies." -- see link
-- From the Columbia County News Times. h/t to commenter Rex Berglund
The recent news from Georgia, where the Vogtle plant expansion is being built, keeps getting worse and worse - exactly as predicted on SLB. In a classic bait-and-pay-later move, the project's proponents sold the project to the regulators using a low-ball estimate; now that billions have been spent, they will turn to the "we must finish it to avoid wasting all the money already spent" argument. This is how a nuclear plant ends up being finished years late and billions of dollars over the original budget. Vogtle is presently just over 3 years behind schedule and $5 billion over budget. With years yet to go, there is plenty of time for more delays and more cost overruns, and each year of delay can add $1 billion or more to the cost.
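For a sense of scale on those delay figures, the sketch below multiplies out the numbers quoted in the article; it counts only the stated $2 million per day of direct delay cost, so financing and other indirect costs would push the per-year figure higher.

# Delay arithmetic using the figures quoted above; direct delay costs only.
cost_per_day_millions = 2.0      # $2 million per day of delay (from the article)
delay_months = 39
days_per_month = 30.4            # average month length, for a rough estimate

delay_cost_billions = cost_per_day_millions * delay_months * days_per_month / 1000.0
per_year_billions = cost_per_day_millions * 365 / 1000.0
print(f"Cost of the 39-month delay so far: roughly ${delay_cost_billions:.1f} billion")
print(f"Direct cost of each further year of delay: roughly ${per_year_billions:.2f} billion")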
Construction delays, as described elsewhere on SLB (see link), include tearing out and re-working faulty construction, equipment suppliers providing late or defective items, serious adverse weather, unforeseen site conditions, and redesign for new NRC requirements. Delays can also be caused by worker slowdowns, lawsuits for allowable causes, owner-contractor disputes, faulty design that requires corrections, acts of God or the enemy (force majeure), improper scheduling by the contractor, inadequate workforce staffing or an untrained workforce (learning on the job), poor supervision, and others.
A few years ago, before the Vogtle construction started, I speculated on SLB that whichever US utility was the first to build a new nuclear plant would serve as a warning to all others who might be contemplating building more nuclear plants. The cost overruns continue, and the long delays in completion continue, both of which unnecessarily increase the price of electricity to consumers. Meanwhile, alternatives to buying from the utility not only exist, they are increasing.
Roger E. Sowell, Esq.
Marina del Rey, California
copyright (c) 2015 by Roger Sowell all rights reserved
"The cost of the new reactors, originally projected at $14 billion, is now (4Q 2015) close to $19 billion and might reach $21 billion, according to recent PSC filings.
Georgia Power executives dispute estimates that the costs could be as high as $21 billion, but there’s no question Vogtle has greatly exceeded its original projections.
The project is also running 39 months behind schedule with even more delays predicted. Each day’s delay in completion adds an estimated $2 million to the total cost.
These cost increases are bad news for Georgia Power’s customers and also for those who get their electricity from EMCs and municipal electric companies." -- see link
-- From the Columbia County News Times. h/t to commenter Rex Berglund
The recent news from Georgia, where the Vogtle plant expansion is being built, keeps getting worse and worse - exactly as predicted on SLB. In a classic bait-and-pay-later move, the project's proponents sold the project to the regulators using a low-ball estimate, and now that billions have been spent, will turn to the "we must finish it to avoid wasting all the money already spent." This is how a nuclear plant ends up being finished years late, and billions of dollars over the original budget. Vogtle is presently just over 3 years behind schedule, and $5 billion over the budget. With years yet to go, there is plenty of time for yet more delays to occur, more cost over-runs, and each year of delay can add $1 billion or more to the cost.
Construction delays, as described elsewhere on SLB, (see link) include tearing out and re-working faulty construction, equipment suppliers providing late or defective items, serious adverse weather, unforeseen site conditions, and redesign for new NRC requirements. Also, delays can be caused by worker slowdowns, lawsuits for allowable causes, owner-contractor disputes, faulty design that requires corrections, acts of God or the enemy (force majeur), improper scheduling by the contractor, inadequate workforce staffing or untrained workforce (learning on the job), poor supervision, and others.
A few years ago, before the Vogtle construction started, I speculated on SLB that whichever US utility was the first to build a new nuclear plant would serve as a warning to all others who might be contemplating building more nuclear plants. The cost overruns continue, the long delays in completion continue, both of which un-necessarily increase the price of electricity to the consumers. Meanwhile, alternatives to buying from the utility not only exist, they are increasing.
Roger E. Sowell, Esq.
Marina del Rey, California
copyright (c) 2015 by Roger Sowell all rights reserved
Sunday, December 20, 2015
Climate Denialism - Nuclear vs Renewable Energy
Subtitle: They Have a Vain Faith In Nuclear but Not Renewables
A chasm, a gulf, a divide, a great difference in viewpoint exists among the various factions of climate change advocates regarding the future sources of energy. This is not new; I wrote on SLB about this some time ago, and the difference in viewpoint existed long before that.
In short, there are two sides in the climate change debate: the warmists who fervently believe that man's use of fossil fuels is overheating the planet, and the realists who rest assured that any warming is only due to agenda-driven scientists who manipulate the data, plus a small bit of natural warming that has very likely ceased such that cooling has commenced.
There are also two other camps, but they don't follow the dividing line just described. One camp insists that nuclear power plants for electricity generation must be constructed in great numbers and as soon as possible; those in the warmist camp that follow this line do so because (supposedly) nuclear power produces very little of that evil chemical, or pollutant as they like to call it, carbon dioxide (CO2). The other camp holds that nuclear plants for generating power are absurdly expensive, dangerous, and prodigiously wasteful of water that is required for cooling, and for those reasons should never be built, plus, the advances in renewable forms of energy are sufficiently great that renewables should be the generating sources for the future.
As any followers of SLB already know, my position is that of a climate realist (there is no man-made warming except by way of data manipulation), that nuclear power is absurdly expensive, dangerous, and wasteful of water, and that renewable energy is indeed the way of the future.
This article attempts to explore a few of the issues.
A recent article (see link) in the UK's Guardian newspaper, authored by Naomi Oreskes, argued for renewable energy as the best way to "decarbonize the economy," and not via nuclear power. Oreskes is a confirmed and vocal warmist on the climate change issue.
What is interesting is the direction of the cost trends for nuclear power (going up year by year) and for renewables such as wind and solar (each declining rapidly year by year). The argument typically advanced for nuclear over renewables is that wind and solar energy are not reliable or consistent, so either fossil-fueled backup plants or terribly expensive electricity storage is required. To a great extent, the intermittency of wind and solar is real. But those issues can be (and in some cases are) minimized or made irrelevant by good engineering. For example, taller wind turbines reach faster and more consistent winds, offshore wind systems have much greater consistency and production, and thermal storage for solar energy allows generation after the sun sets.
But, perhaps the biggest advantage of renewables is the opportunity to provide reliable, dispatchable power-when-we-need-it via a form of pumped storage hydroelectric. Massachusetts Institute of Technology, MIT, recently proposed (and has a patent pending) an under-sea hollow spherical storage system that is coupled to off-shore wind turbines. The electricity from the offshore wind systems pumps water out of the hollow spheres into the surrounding ocean, whenever the wind blows, night or day. When power is needed on land, ocean water then flows into the submerged spheres via standard hydroelectric turbines and generators. This provides reliable, on-demand power. With multiple wind turbines and multiple spheres, power is extremely reliable. SLB has a post on the MIT storage spheres from June, 2014, see link.
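To get an order-of-magnitude feel for how much energy one such sphere could hold, the sketch below computes the ideal hydrostatic potential energy of refilling a submerged sphere. The 25-meter diameter and 400-meter depth are my illustrative assumptions, roughly at the scale discussed for the MIT concept, and the real recoverable energy would be lower after pump and turbine losses and the varying head across the sphere.

import math

# Ideal (loss-free) energy stored by emptying one submerged sphere at depth.
rho = 1025.0       # seawater density, kg/m^3
g = 9.81           # gravitational acceleration, m/s^2
depth = 400.0      # assumed deployment depth, m (illustrative)
diameter = 25.0    # assumed sphere inner diameter, m (illustrative)

volume = (4.0 / 3.0) * math.pi * (diameter / 2.0) ** 3   # about 8,200 m^3
energy_joules = rho * g * depth * volume                  # potential energy of refilling
energy_mwh = energy_joules / 3.6e9                        # 1 MWh = 3.6e9 J
print(f"Ideal storage per sphere: about {energy_mwh:.0f} MWh")  # on the order of 9 MWh

At several megawatt-hours per sphere, a large offshore wind farm would need a field of such spheres to ride through a calm spell, which is why volume production and economies of scale matter.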
With adequate economies of scale and cost reductions via experience and volume production, the added cost of storage for wind energy will decrease. As shown below, the year-by-year costs of offshore wind are expected to decline (see Figure 12 below, taken from "COMPARATIVE COSTS OF CALIFORNIA CENTRAL STATION ELECTRICITY GENERATION," January 2010). Also from the same source, and shown below as Figure 11, the cost of nuclear power is increasing.
Figure: Load Following and Intermittent Technologies Cost Trends - CEC 2009
A question of resource adequacy then remains: is there sufficient wind offshore and/or onshore to provide for the energy needs of the US and, by extension, the world?
The answer is an unqualified yes. Per the U.S. Department of the Interior's Minerals Management Service (MMS), in its January 2009 "Draft Proposed Outer Continental Shelf Oil and Gas Leasing Program 2010 – 2015," there is plenty of power in the waves and wind offshore.
The MMS stated:
“The U.S. Department of Energy (DOE) estimates that more than 900,000 megawatts (GW), close to the total current installed U.S. electrical capacity, of potential wind energy exists off the coasts of the United States, often near major population centers, where energy costs are high and land-based wind development opportunities are limited. Slightly more than half of the country’s identified offshore wind potential is located off the New England and Mid-Atlantic Coasts, where water depths generally deepen gradually with distance from the shore. Development of offshore wind energy technologies has the potential to provide up to 70,000 MW of domestic generating capacity to the nation’s electric grid by 2025."
When the offshore wind resources are combined with existing onshore wind, hydroelectric, and solar, much of the nation's energy can be supplied without resorting to nuclear power or fossil fuels, should eliminating fossil fuels ever become necessary.
It seems that the nuclear advocates are placing their hopes vainly on the costs of nuclear power plants somehow being reduced, yet the facts show that the nuclear plants cost more and more as they are built. Meanwhile, engineers are reducing the costs of wind turbines, and most importantly, increasing the height of the support towers so that larger and more economic turbines can be installed in the stronger, more reliable winds that exist at greater heights. The technology for the MIT storage spheres is nothing novel, simply reinforced concrete spheres with proven hydroelectric turbines attached.
Finally, even though there is zero cause for alarm from man-made global warming, there is a serious need to de-nuclearize the power grids around the world in favor of safe, cost-effective renewable energy such as wind coupled to storage.
Roger E. Sowell, Esq.
Marina del Rey, California
copyright (c) 2015 by Roger Sowell all rights reserved
Labels:
climate change,
MIT spheres,
nuclear power,
renewable energy
Saturday, December 19, 2015
Watts et al Show Some Warming is Man-Made
Subtitle: Man-made Warming by Selecting Bad Temperature Sites
In this post, a new study (2015) by Anthony Watts, Evan Jones, John Nielsen-Gammon, and Dr. John Christy is discussed. The Watts 2015 study showed that the USA's temperature trend over 30 years (1979 - 2008), as measured by the US Historical Climatology Network stations, was too high due to the inclusion of temperature measuring sites that had, and have, artificial heating. Watts 2015 showed that when only properly sited measurement stations are included in the data, the warming trend decreases substantially (2.0 degrees C per century without the "bad" stations, and 3.2 degrees C per century with them). See link to Watts' article announcing the study, which was presented this week at the 2015 Fall Meeting of the American Geophysical Union in San Francisco, California.
Other blogs have articles that discuss the Watts 2015 paper, some with thoughtful comments and of course the usual jibber-jabber. Dr. Judith Curry's blog article is here, see link. JoAnne Nova's blog article is here, see link. Bishop Hill's blog article is here, see link. The Chiefio (E.M. Smith) is away from keyboard (AFK) performing new grandpa duties, but his take is sure to be interesting, perhaps fascinating; see link to Chiefio's blog. This post will be updated with a link to his article if and when it is published. OK. So, that is what some of the others are writing or have written. Why should I write anything on this?
This is a good time and place to set out why I write on climate, and my qualifications. I have written much of this before, and said this in various public speeches. I am a chemical engineer and an attorney practicing in Science and Technology Law. As a chemical engineer, especially one that deals with petroleum refineries, petrochemical plants, toxic chemical plants such as chlorine-caustic, and chemicals such as hydrofluoric acid, hydrochloric acid, sulfuric acid, liquid anhydrous ammonia, and highly explosive trade-secret reaction initiators (to name a few), I am acutely aware of the need to use only good, valid data and to screen and exclude invalid data. In short, if chemical engineers fail to find and exclude invalid data, our chemical processes will leak, spew toxic chemicals, catch fire, explode, and create serious harm and death. We take the data analysis aspect of engineering very seriously, because we must.
As an attorney, I watch the climate science, and some scientists in that arena, with great dismay. There have been many regulations established already (e.g. California AB 32, the "Global Warming Solutions Act of 2006"), multi-lateral treaties (Kyoto Protocol 1997), and non-binding climate agreements (Paris 2015). This is merely a partial list of government acts concerned with climate science. It is instructive that governments require science-based regulations to be based on good science, or best available science. What, exactly, qualifies as best available science is a substantial part of the problem.
The pedigree of the scientists and the scientific organization matters. In the USA, NOAA (the National Oceanic and Atmospheric Administration) is the federal agency that is supposed to not only know what it is doing, but to perform its function at the highest level of expertise and accuracy. In pertinent part, NOAA's mission statement reads: "(NOAA's Mission is) To understand and predict changes in climate, weather, oceans, and coasts, To share that knowledge and information with others, . . ." NOAA's website goes on to state:
"NOAA’s dedicated scientists use cutting-edge research and high-tech instrumentation to provide citizens, planners, emergency managers and other decision makers with reliable information they need when they need it.
NOAA's roots date back to 1807, when the Nation’s first scientific agency, the Survey of the Coast, was established. Since then, NOAA has evolved to meet the needs of a changing country. NOAA maintains a presence in every state and has emerged as an international leader on scientific and environmental matters." (bold, underline added - RES)
NOAA, then, is the expert, the professional, the go-to agency that not only monitors the climate, but makes sense out of the data and presents trends and conclusions for decision-makers. One would expect, then, that their results can be trusted. Yet, they cannot.
In a nutshell, what Watts 2015 did was analyze 30 years of temperature data from NOAA's measuring stations located across the USA, with the express purpose of identifying bad stations and excluding them from the data. The measuring stations comprise 1,218 weather stations in the USHCN, the US Historical Climatology Network, which were critically examined by Watts and others for allocation into 5 categories from Excellent to Incredibly Bad (my terminology, not theirs). Category 1 is best, and 5 is worst. Watts 2015 focused attention on the two top categories, 1 and 2. As an example of the difference between a 1 and a 5: a 1 is in an open grass field a safe distance from any artificial heat source, while a 5 can be on an asphalt parking lot adjacent to a dark brick building and in the path of an air conditioner condenser exhaust.
Watts 2015 apparently included both Category 1 and 2 stations as producing acceptable temperatures, excluding all of Categories 3, 4, and 5. Note, however, that NOAA includes all the stations, with various corrections applied as it sees appropriate. But Watts 2015 went further: they excluded many stations in Categories 1 and 2 due to issues such as non-continuous location (somebody physically moved the station over the years, as happened here in Los Angeles a few years ago). Watts 2015 also excluded stations that had corrections for time of observation, and any stations that had an equipment change over the years. Watts 2015 was left with 410 stations that met their criteria, or approximately one-third of the 1,218 total stations.
(Note: a concern arises at this point. Referring to the Fall et al 2011 paper that also was co-authored by Watts, Jones, Nielsen-Gammon, and Christy, the USHCN is purported to have 1221 stations, not 1218. Also, from Figure 1 of Fall 2011, the Category 1 and 2 stations combined are 7.9 percent of 1007 stations that were surveyed for critical assessment and placement into the 5 categories. Some simple math shows that something does not add up. It appears that 80 stations out of the 1007 represents 7.9 percent. Even allowing for all of the remaining, un-surveyed stations to wind up in Category 1 or 2, (highly unlikely), that yields 1221 minus 1007, or 214 stations that can be added to the 80 from above. That provides only 294 stations at most. Where, then, did Watts 2015 obtain 410 stations in Category 1 and 2? I hope to find an answer to this simple math question. It appears that Watts 2015 excluded some of the Category 1 and 2 stations based on this statement from Watts on WUWT: "It should be noted that many of the USHCN stations we excluded that had station moves, equipment changes, TOBs changes, etc that were not suitable had lower trends that would have bolstered our conclusions." Therefore, if 294 were available at most, but some were excluded, how did Watts 2015 end up with 410 valid, Category 1 or 2 stations? )
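The arithmetic in that note can be reproduced in a few lines; the inputs below are the station counts and the 7.9 percent figure quoted from Fall et al. 2011 above.

# Reproducing the station-count check above (figures as quoted from Fall et al. 2011).
surveyed_stations = 1007
category_1_2_fraction = 0.079
total_ushcn_stations = 1221

category_1_2_surveyed = round(surveyed_stations * category_1_2_fraction)  # about 80
unsurveyed_stations = total_ushcn_stations - surveyed_stations            # 214
upper_bound = category_1_2_surveyed + unsurveyed_stations                 # 294 at most
print(f"Upper bound on Category 1 and 2 stations: {upper_bound} (versus 410 used in Watts 2015)")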
Now to the second point: end-point influences. Watts 2015 chose the time period for analysis to be 1979 to and including 2008. The reason given for this time period is that this paper is designed to challenge and rebut the conclusions produced in two other papers, Menne et al 2009, and 2010. The Menne papers were written purportedly to defend NOAA's methodology for including badly-sited stations, correcting the measured temperatures, and including those temperatures in the database and analysis. As Watts wrote on his blog (see link above):
"Some might wonder why we have a 1979-2008 comparison when this is 2015. The reason is so that this speaks to Menne et al. 2009 and 2010, papers launched by NOAA/NCDC to defend their adjustment methods for the USCHN (should be USHCN - RES) from criticisms I had launched about the quality of the surface temperature record, such as this book in 2009: Is the U.S. Surface Temperature Record Reliable? This sent NOAA/NCDC into a tizzy, and they responded with a hasty and ghost written flyer they circulated. In our paper, we extend the comparisons to the current USHCN dataset as well as the 1979-2008 comparison."
As written several times on SLB, there is a problem with any study that uses the late 1970s as the starting point for a time-series trend of air temperatures and then extends that trend to claim the climate is warming. I wrote on this back in February 2010, and spoke on it in a public speech to chemical engineers in 2012. The USA had severe winters in 1977, 1978, and 1979, as documented in many articles at the time (see link for one such article) and shown in temperature graphs from many US cities (see link). One such graph is shown below to illustrate; it is for Abilene, Texas, drawn from the HadCRUT3 dataset of the Hadley Centre and Climatic Research Unit:
The significant feature of this Abilene graph is the cluster of low temperatures around 1977, 1978, and 1979. When a data set begins with very low values and the remaining data merely oscillates with little underlying trend, the fitted trend will be upward.
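(A minimal Python sketch illustrates the end-point effect with made-up numbers rather than the Abilene record; the only point is that depressing the first few values of an otherwise trendless series produces a more positive fitted slope:)

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2009)                  # a 30-year window, as in Watts 2015
anomalies = rng.normal(0.0, 0.3, years.size)   # trendless noise, roughly +/- 0.3 deg C

slope_flat = np.polyfit(years, anomalies, 1)[0] * 100    # deg C per century

cold_start = anomalies.copy()
cold_start[:3] -= 1.0          # depress the first three years (severe winters)
slope_cold = np.polyfit(years, cold_start, 1)[0] * 100

print(round(slope_flat, 2), round(slope_cold, 2))
# The second slope is markedly more positive, even though the two series are
# identical after the first three years.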
(A side note on severity of the winter of 1978-1979 in Illinois, excerpted from the linked article above:
"For the first time since modern weather records began in the 1880s, a third consecutive
severe winter occurred in Illinois in 1978-1979. Seventeen major winter storms, the state's
record coldest January-February, and record snow depths on the ground gave the winter
of 1978-1979 a rank as the second worst statewide for Illinois, exceeded only by the prior
winter of 1977-1978 (18 storms, coldest statewide December-March, record longest lasting
snow cover). In the northern fourth of Illinois, 1978-1979 was the worst winter on
record.
Severe storms began in late November and extended into March; the seven major storms
in January set a new record high for the month, the four in February tied the previous
record, and the four in December fell one short of the record. Fourteen storms also had
freezing rain, but ice was moderately severe in only two cases. High wind and blizzard
conditions occurred in only three storms (compared with eight in prior winter), suggesting
a lack of extremely deep low pressure centers. Most storms occurred with Texas lows,
Colorado (north track) lows, and miscellaneous synoptic conditions. The super storm of
11-14 January set a point snow record of 24 inches, left snow cover of more than 3 inches
over 77% of the state, and lasted 56 hours.
Snowfall for the 1978-1979 winter averaged 68 inches (38 inches above normal) in
northern Illinois, 40 inches (20 above) in the north central part, 32 inches (12 above) in
south central Illinois, and 31 inches (22 above) in southern Illinois. Record totals of 60 to
100 inches occurred in northern Illinois. The winter temperatures averaged 7.8 F below
normal in northern Illinois and about 7 below in the rest of the state. January-February
temperatures averaged a record low of 15.9 F, 14 degrees below normal, and prevented
melting between storms so that record snow depths of more than 40 inches occurred in
northern Illinois." )
The Watts 2015 study should not, in my opinion, be judged as conclusive on the question of whether warming occurred at the rate of 2 degrees C per century. The starting point of 1979 is artificially low due to the severe winters of that period.
Still, Watts 2015 can be commended for showing that the NOAA methodology overstates the warming by 60 percent (3.2 versus 2.0 degrees C per century). A necessary next step is to do what I have recommended for years (not only I, as others have noted the same and stated the obvious): find temperature records from the pristine areas across the USA, in national parks and other undisturbed locations, and use those records. It may be that such records do not exist, which may be why so much effort is expended on analyses such as Watts 2015 and Fall 2011. However, it would seem a trivial task to obtain small-town newspaper archives and collect the published temperatures from across America going back 100 years.
Finally, it is certainly misleading, and quite possibly fraudulent, to make claims of global warming by analyzing data that begins in the late 1970s. One could do worse, however, by starting the data in 1977 (which gives three years of low temperatures at the start) and ending it on a high-temperature year such as 2000 (which gives two years of high temperatures at the end). Note that fraud in the legal context has many elements that must be proven, one of which is intent to obtain property from another. No assertion of fraud is made, nor is any implied, against any of the authors or organizations mentioned in this article. Instead, Watts and co-authors, and Menne and co-authors, were probably doing the best they knew how, given the motivations and constraints at the time. It is also noteworthy that the full Watts 2015 paper has not yet been published, and all comments above are based on my best understanding of what is posted on WattsUpWithThat.
We must, however, do better. There must be no data adjustments, no room for bias, and no end-point issues as described above. Watts 2015 apparently tried to find unadjusted data, though it remains unclear how 410 qualifying stations exist in 2015 when only 80 or so Category 1 and 2 stations had been identified as of 2011.
Next, we need a study that shows what warming, or cooling, if any, occurs in pristine locations, without any starting- or ending-point issues. Such data is slowly being produced by the USCRN (U.S. Climate Reference Network) stations; see link.
Roger E. Sowell, Esq.
Marina del Rey, California
copyright (c) 2015 by Roger Sowell all rights reserved
"NOAA’s dedicated scientists use cutting-edge research and high-tech instrumentation to provide citizens, planners, emergency managers and other decision makers with reliable information they need when they need it.
NOAA's roots date back to 1807, when the Nation’s first scientific agency, the Survey of the Coast, was established. Since then, NOAA has evolved to meet the needs of a changing country. NOAA maintains a presence in every state and has emerged as an international leader on scientific and environmental matters." (bold, underline added - RES)
NOAA, then, is the expert, the professional, the go-to agency that not only monitors the climate, but makes sense out of the data and presents trends and conclusions for decision-makers. One would expect, then, that their results can be trusted. Yet, they cannot.
In a nutshell, what Watts 2015 did was attempt to analyze 30 years of temperature data from NOAA's measuring stations located across the USA, with the express purpose of identifying bad stations and excluding them from the data. The measuring stations comprise 1218 weather stations in the USHCN, the Historical Climate Network, which were critically examined by Watts and others for allocation into 5 categories of Excellent to Incredibly Bad (my terminology, not theirs). Actually, Category 1 is best, and 5 is worst. Watts 2015 focused their attention on the two top categories, 1 and 2. Examples of the differences between a 1 and a 5 are: a 1 is in an open grass field a safe distance from any artificial heat source, while a 5 can be on an asphalt parking lot adjacent to a dark brick building and in the path of an air conditioner condenser exhaust.
Watts 2015 apparently included both Category 1 and 2 as producing acceptable temperatures, excluding all Category 3, 4, and 5. Note, however, that NOAA includes all the stations, with various corrections applied as they see as appropriate. But, Watts 2015 went further, they excluded many stations in Category 1 and 2 due to issues such as non-continuous location (somebody physically moved the station over the years, as happened here in Los Angeles a few years ago). Watts 2015 also excluded stations that had corrections for Time of Observation, and any stations that had an equipment change over the years. Watts 2015 was left with 410 stations that met their criteria, or approximately one-third of the 1,218 total stations.
(Note: a concern arises at this point. Referring to the Fall et al 2011 paper that also was co-authored by Watts, Jones, Nielsen-Gammon, and Christy, the USHCN is purported to have 1221 stations, not 1218. Also, from Figure 1 of Fall 2011, the Category 1 and 2 stations combined are 7.9 percent of 1007 stations that were surveyed for critical assessment and placement into the 5 categories. Some simple math shows that something does not add up. It appears that 80 stations out of the 1007 represents 7.9 percent. Even allowing for all of the remaining, un-surveyed stations to wind up in Category 1 or 2, (highly unlikely), that yields 1221 minus 1007, or 214 stations that can be added to the 80 from above. That provides only 294 stations at most. Where, then, did Watts 2015 obtain 410 stations in Category 1 and 2? I hope to find an answer to this simple math question. It appears that Watts 2015 excluded some of the Category 1 and 2 stations based on this statement from Watts on WUWT: "It should be noted that many of the USHCN stations we excluded that had station moves, equipment changes, TOBs changes, etc that were not suitable had lower trends that would have bolstered our conclusions." Therefore, if 294 were available at most, but some were excluded, how did Watts 2015 end up with 410 valid, Category 1 or 2 stations? )
Now to the second point: end-point influences. Watts 2015 chose the time period for analysis to be 1979 to and including 2008. The reason given for this time period is that this paper is designed to challenge and rebut the conclusions produced in two other papers, Menne et al 2009, and 2010. The Menne papers were written purportedly to defend NOAA's methodology for including badly-sited stations, correcting the measured temperatures, and including those temperatures in the database and analysis. As Watts wrote on his blog (see link above):
"Some might wonder why we have a 1979-2008 comparison when this is 2015. The reason is so that this speaks to Menne et al. 2009 and 2010, papers launched by NOAA/NCDC to defend their adjustment methods for the USCHN (should be USHCN - RES) from criticisms I had launched about the quality of the surface temperature record, such as this book in 2009: Is the U.S. Surface Temperature Record Reliable? This sent NOAA/NCDC into a tizzy, and they responded with a hasty and ghost written flyer they circulated. In our paper, we extend the comparisons to the current USHCN dataset as well as the 1979-2008 comparison."
As written several times on SLB, there is a problem with any study that uses the late 1970s as a starting point for a time-series trend of air temperatures, and extending that trend to claim the climate is warming. I wrote on this back in February 2010, and spoke on this in a public speech to chemical engineers in 2012. The USA had severe winters in the period 1977, 78, and 79 as documented in many articles at the time (see link for one such article), and shown in temperature graphs from many US cities (see link). One such temperature graph is shown below to illustrate, this is from Abilene, Texas and the Hadley Climate Research Center HadCRUT3 dataset:
The significant portion of this Abilene graph is the cluster of low temperatures around 1977, 78, and 79. When any data set starts with a very low value, and the remaining data oscillates around with very little trend, the resulting trend will be upward.
(A side note on severity of the winter of 1978-1979 in Illinois, excerpted from the linked article above:
"For the first time since modern weather records began in the 1880s, a third consecutive
severe winter occurred in Illinois in 1978-1979. Seventeen major winter storms, the state's
record coldest January-February, and record snow depths on the ground gave the winter
of 1978-1979 a rank as the second worst statewide for Illinois, exceeded only by the prior
winter of 1977-1978 (18 storms, coldest statewide December-March, record longest lasting
snow cover). In the northern fourth of Illinois, 1978-1979 was the worst winter on
record.
Severe storms began in late November and extended into March; the seven major storms
in January set a new record high for the month, the four in February tied the previous
record, and the four in December fell one short of the record. Fourteen storms also had
freezing rain, but ice was moderately severe in only two cases. High wind and blizzard
conditions occurred in only three storms (compared with eight in prior winter), suggesting
a lack of extremely deep low pressure centers. Most storms occurred with Texas lows,
Colorado (north track) lows, and miscellaneous synoptic conditions. The super storm of
11-14 January set a point snow record of 24 inches, left snow cover of more than 3 inches
over 77% of the state, and lasted 56 hours.
Snowfall for the 1978-1979 winter averaged 68 inches (38 inches above normal) in
northern Illinois, 40 inches (20 above) in the north central part, 32 inches (12 above) in
south central Illinois, and 31 inches (22 above) in southern Illinois. Record totals of 60 to
100 inches occurred in northern Illinois. The winter temperatures averaged 7.8 F below
normal in northern Illinois and about 7 below in the rest of the state. January-February
temperatures averaged a record low of 15.9 F, 14 degrees below normal, and prevented
melting between storms so that record snow depths of more than 40 inches occurred in
northern Illinois." )
The Watts 2015 study should not, in my opinion, be judged as conclusive on the issue of warming occurred at the rate of 2 degrees C per century. The starting point of 1979 is low due to severe winters in that time.
Therefore, it can be concluded that Watts 2015 can be commended for showing the NOAA methodology overstates the warming by 60 percent (3.2 degrees versus 2.0 degrees per century.) A necessary next step is to do what I have recommended for years (not only I, as others have also noted this and stated the obvious): find temperature records from the pristine areas across the USA, in national parks and other undisturbed areas. Use those records. It may be that such records do not exist, and that may be why so much effort is expended as Watts 2015 did, also Fall 2011. However, it would seem to be a trivial task to obtain small-town newspaper archives and collect the published temperatures from across America back 100 years.
Finally, it is certainly misleading, and quite possibly fraudulent, to make claims of global warming by analyzing data that begins with the late 1970s. One could do worse, however, by starting the data in 1977 (that gives 3 years of low temperatures to start), and end the data on a high-temperature year such as 2000 (that gives 2 years of high temperatures to end on.) Note that fraud in the legal context has many elements that must be proven, one of which is intent to obtain property from another. No assertion of fraud is made, nor to be implied, against any of the authors nor organizations mentioned in this article. Instead, Watts and co-authors, and Menne and co-authors were probably doing the best they know how, given the motivations and constraints at the time. It is also noteworthy that the entire Watts 2015 paper has not been published yet, and all comments above are based on my best understanding of what is published on WattsUpWithThat.
We must, however, do better. There must be no data adjustments, no room for bias, and no end-point issues as described above. Watts 2015 apparently tried to find unadjusted data, even though it is unclear how 410 stations exist in 2015 while only 80 or so existed in 2011.
Next, we must have a study that shows what warming, or cooling, if any, occurs in pristine locations without any starting and ending data issues. Such data is slowly being produced via the USCRN stations, (Climate Reference Network) see link.
Roger E. Sowell, Esq.
Marina del Rey, California
copyright (c) 2015 by Roger Sowell all rights reserved
Labels:
Abilene effect,
climate change,
global warming,
NOAA,
USCRN,
USHCN,
watts
Saturday, December 5, 2015
OPEC In Disarray in 2015
Subtitle: A Big Move In The Grand Game
The Organization of the Petroleum Exporting Countries (OPEC) met this week and essentially left member states' production levels to their own discretion. Many articles in the media covered this.
This is as I predicted in 2011, almost five years ago, in my speech at Tulane Law School's Energy Conference (see link). Competition from shale oil producers and political instability in the Middle East have each contributed to the disarray within OPEC. In that speech, I predicted the oil price would drop to $20 or even $10 per barrel.
The fallout from this will be good for some sectors and grim for others. The beneficiaries are auto makers, consumers who purchase gasoline and diesel, industrial diesel customers, and airlines that purchase jet fuel. Gasoline at under $1 per gallon will be a boon to the consumer.
However, those industries that depend on oil for success will suffer. Texas, for example, had a regional recession when a significant price decline occurred almost 30 years ago in the late 1980s. Real estate prices dropped, many businesses closed, and people moved away from the state seeking better fortune elsewhere.
The key question is who can sustain output at low oil prices: US shale oil producers, OPEC members, or non-OPEC producers such as Russia? In previous meetings, OPEC held production constant in the belief that China would continue to grow economically and purchase crude oil. However, China's growth has slowed from its rapid double-digit pace, and the anticipated demand for the oil is not there.
This has to be frustrating for the Obama administration, which asserted early in its tenure that the price of gasoline needed to rise to approximately $9 or $10 per gallon. Instead, declining demand from cars with better fuel economy, together with world events, has sent prices downward.
Roger E. Sowell, Esq.
Marina del Rey, California
copyright (c) 2015 by Roger Sowell, all rights reserved
Sunday, November 29, 2015
Cities and UHI Urban Heat Islands
Subtitle: Cities and UHI Warming Corrupt Temperature Databases
Much is made of the average global climate, with some insisting the Earth is warming at unprecedented rates, others not so sure, and many convinced that there is zero cause for alarm because the climate scientists in charge of the temperature data have made various errors. This article explores an aspect of the third category: a serious error in the temperature data that makes any claims of catastrophic warming moot.
(Photo: lone skyscraper in Oxnard, California)
The essential concept is that cities, many of them very large, are included in the temperature database that the scientists use. Most of the cities show a rapid warming, which is well-known and named the Urban Heat Island effect (UHI). The UHI is due to the energy consumed in a city that must be dissipated, plus the absorption of solar energy by the land area that must also be dissipated. Each of these is described below.
Cities have energy consumption for a multitude of purposes, including but not limited to building heating, building cooling via air-conditioning, lighting, home and restaurant use such as cooking and heating water, electronics operation, vehicles used in transportation, commercial and industrial use, airports, train operations, and seaports. Energy is also consumed in construction and demolition activities. Much of this energy is in the form of electricity, some is from burning fuels such as coal, home heating oil, propane, and natural gas, and some is from transportation fuels including gasoline, jet fuel, diesel fuel, and fuel oil for ships. From first principles of thermodynamics, all of the energy consumed, or input into the system, must be either stored or rejected to a heat sink. Engineers will recognize the First Law, which states Energy In = Energy Out plus Accumulation, where Accumulation may be positive or negative. Where Accumulation is zero, then Energy In = Energy Out. The ways that this energy is dissipated, or the Energy Out component of the First Law, are explored next.
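(A minimal Python sketch of this First Law bookkeeping, using entirely hypothetical daily energy inputs for an illustrative city; at steady state the accumulation term is zero, so everything that comes in must leave as heat:)

# Hypothetical daily energy inputs to a city, in gigawatt-hours (illustration only).
inputs_gwh = {
    "electricity": 120.0,
    "natural gas and heating fuels": 80.0,
    "transportation fuels": 150.0,
}

accumulation_gwh = 0.0   # steady state: the city is not heating up without bound

# First Law: Energy In = Energy Out + Accumulation
energy_in = sum(inputs_gwh.values())
energy_out = energy_in - accumulation_gwh

print(f"Heat rejected to the surroundings: {energy_out:.0f} GWh per day")
# Every one of these gigawatt-hours must leave the city by conduction,
# convection, or radiation, regardless of how the energy was generated.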
A city, comprising static elements (buildings, roads, and such) plus dynamic elements (people, vehicles, and machinery in motion), can dissipate heat energy by all three modes of heat transfer: conduction, convection, and radiation. Here, radiation refers not to nuclear ionizing radiation but to heat transfer by electromagnetic radiation in the infra-red spectrum, commonly known as radiant heat. Conduction is the transfer of heat from one body to another by direct contact between the two. A city has direct contact with the land below it, to some extent with water if that is part of the city, and with the air above it. On a long-term basis, the amount of heat removed via conduction can be considered a very small fraction of the total Energy Out. Convection is heat transfer by mass motion of a fluid such as air or water, where the heated fluid moves away from the source of heat, carrying energy with it. In a city, this is primarily air blowing past buildings, whether by natural wind, thermal air currents, or, in some cases, forced air. Convection is a significant fraction of a city's total Energy Out. Finally, radiation is the third significant form of heat transfer in a city's total Energy Out.
A city can be considered a collection of vertical heated objects, buildings, each with an energy input that must ultimately be rejected as described above. If that energy were not dissipated, the First Law would require the buildings to have ever-increasing temperatures. We know this does not happen; therefore the energy is dissipated. Consider the simple case of a single tall building standing roughly 200 feet above a flat prairie. Such a building is shown nearby: the blue-exterior, 22-story Financial Plaza Tower in Oxnard, California. The important aspect of a lone, single building is that radiant energy is free to flow from it in all directions. (The Financial Plaza Tower does have another, smaller building a few blocks away, so radiant energy in that direction is somewhat impeded.)
However, when multiple tall buildings stand in close proximity to one another, as in many large cities, radiant heat cannot escape each building very quickly. Under the Stefan-Boltzmann equation for radiative heat transfer (see below), each building "sees" neighboring surfaces at nearly its own temperature, so little net heat is radiated away; the heat is effectively bounced back and forth between buildings. Only the buildings at the perimeter can radiate freely, and then only in the direction away from the other buildings. Thus, a collection of tall buildings in a city must reject its heat primarily via convection. One can experience this first-hand by visiting a city on a warm day with very little wind: the heat accumulates rapidly. Even at night, again with no wind, a city will have warmer temperatures than its surroundings. (Aside and personal note, admittedly anecdotal: I worked for some time in the downtown areas of Dallas, Texas, and Los Angeles, California, and experienced the zero-wind high temperatures both in the day and after dark. The same occurs in other cities I have visited. The phenomenon is real and easily observed.)
(Note on Stefan-Boltzmann equation:
Net Radiated Energy per second, E = k A (Th^4-Tc^4)
where k is a constant, A is surface area of the radiating surface, Th is temperature of the hotter surface, and Tc is temperature of the cooler surface, all temperatures in degrees absolute. In this formula, ^n indicates raising the preceding variable to a power, where Th^4 is the Th raised to the fourth power. It is crucial to note that the energy E is the NET radiated energy between the two surfaces. Each surface radiates at a rate governed by its own surface temperature. Therefore, where two surfaces are at the same temperature, ZERO energy is radiated away on a net basis. )
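(A minimal Python sketch of the net-radiation formula in the note above; the wall area and emissivity values are assumptions chosen purely for illustration, not measurements of any actual building:)

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_radiated_watts(area_m2, t_hot_k, t_cold_k, emissivity=0.9):
    """Net radiant exchange between a hot surface and the cooler surface it faces."""
    return emissivity * SIGMA * area_m2 * (t_hot_k**4 - t_cold_k**4)

wall_area = 2000.0   # square meters of wall facing the surroundings (assumed)

# Lone building: a warm wall at 300 K faces open country at 285 K.
print(net_radiated_watts(wall_area, 300.0, 285.0))   # about 150,000 W escapes

# Downtown: the same wall faces a neighboring wall that is also at 300 K.
print(net_radiated_watts(wall_area, 300.0, 300.0))   # 0.0 W -- no net radiant loss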
It can be seen, then, that cities have a built-in heating system, if only from the buildings that cannot easily radiate away their heat. Yet there are many other aspects of city heat, as described above. The concentration of vehicles that burn fuel and emit heat via the exhaust, the cooling system, and hot engine also raises the air temperature in a city.
Every electric motor in a city also produces heat that must be dissipated. Every air conditioning system also sends heat into the air.
Now to the key point: cities will have energy consumption and heat rejection no matter what type of system produces that energy. Considering electricity use for the moment: even if a city were all-electric for heating, cooking, and transportation, and even if that electricity were produced by zero-carbon-dioxide power plants (see below), the UHI would still exist. In essence, a building has no idea what produced the electricity that heats it, runs its lights and elevators, and heats its hot water. An electric car, bus, delivery truck, or train likewise has no idea what produced the electricity it consumes. Therefore, even if all the electricity came from zero-carbon-dioxide sources, the cities would still have UHI and would still corrupt the climate scientists' data. Such zero-carbon-dioxide sources include, but are not limited to, hydroelectric, wind, solar, nuclear, geothermal, wave, tidal, ocean current, ocean temperature-difference, water pressure recapture, river mouth osmosis, and river current. There are also carbon-neutral sources: landfill methane, cattle operation methane, municipal solid waste (MSW), human waste sludge, plant-based ethanol, and other bio-fuels.
It is entirely wrong for climate scientists to include any data that is corrupted by UHI.
For completeness, the impact of solar energy on the city is described. Up to this point, only the addition of non-solar energy to a city has been discussed. Sunshine, or solar energy, is absorbed by the city's buildings, streets, and other surfaces. This heat must also be dissipated, and it has the same dissipation options described above: conduction, convection, and radiation. Once the solar energy is absorbed, a building has no idea what caused the added heat. Therefore, absorbed solar energy presents the same heat-rejection issues as non-solar energy.
Conclusion
Most of the world's nations will soon convene in Paris, France, to discuss climate change and attempt to agree on a mechanism for, and the amount by which, each nation will reduce its emissions of carbon dioxide, in the belief that doing so will stop the Earth from warming. It is clear, however, that the land temperature database is corrupted by the inclusion of UHI-affected sites. It is essential that the delegates and policy-makers understand that there is no man-made global warming due to CO2 emissions. It is a scientific error to include in the database the hot cities and other locations where warming is indeed occurring, but would occur no matter what the source of the energy is.
The Goodridge paper shows that zero warming occurred in more than 80 years in counties with small populations, while substantial warming occurred in counties with more than 1 million population. Furthermore, recent data from the USCRN, for pristine sites throughout the USA, shows not only zero warming, but a pronounced cooling. (see link)
For additional reading on UHI, the IPCC report AR5 has quite a bit to say:
https://www.ipcc.ch/pdf/assessment-report/ar5/wg2/drafts/fd/WGIIAR5-Chap8_FGDall.pdf
(Note: the link is to a 113-page PDF that does not automatically download.)
More charts and references will be added to support the arguments above.
Below are shown the temperature records of three large US cities: Boston, New York City, and San Francisco. The warming rates, in degrees C per century, are 1.99, 1.49, and 1.49 respectively. These warming rates are in line with what Goodridge reported for the largest counties in California for the 85-year period 1904 to 1996, approximately 1.7 degrees C per century.
For reference, Boston urban area had 4.1 million people in 2010, with a density of 13,000 people per square mile.
New York City urban area had 8.5 million people in 2010 and a density of 27,000 people per square mile.
San Francisco had 4.6 million people in 2010 and a density of 18,000 people per square mile.
In contrast, the smaller cities shown below, Sacramento, California, and Abilene, Texas, had the following populations and densities.
Sacramento had 460,000 people in 2010 with a density of 4,700 people per square mile.
Abilene had 115,000 people in 2010 with a density of 1,100 people per square mile.
Here are two smaller cities, Sacramento, California, and Abilene, Texas. These show no warming; instead, they show a slight cooling of minus 0.29 and minus 0.19 degrees C per century, respectively.
Additional temperature trend graphs similar to those shown above may be examined at this link, where results for 87 cities in 42 US states are posted. The data are from the HadCRUT3 files of the Hadley Centre and Climatic Research Unit, which were voluntarily released onto the internet in late 2009, following the Climategate scandal.
Links to previous SLB articles on Global Warming:
Warmists are Wrong, Cooling is Coming (2012)
From Man-made Global Warmist to Skeptic - My Journey (2011)
Roger E. Sowell, Esq.
Marina del Rey, California
copyright (c) 2015 by Roger Sowell all rights reserved
Labels:
climate change,
climate science,
CO2 emissions,
IPCC,
nuclear energy,
UHI,
Urban heat island,
USCRN