Why atmospheric MASS, not radiation? Part 2

Be sure to read Part 1 first.



DEFINING THE rGHE THROUGH THE ERL.

How is the rGHE defined in the most basic way? If you have a planet with a massive atmosphere, the strength of its “greenhouse effect” is defined as the difference between its apparent planetary temperature in space and the physical mean global temperature of its actual, solid surface. The planet’s apparent temperature in space is derived simply from its average radiant flux to space, not from any real measured temperature. It is assumed that the planet is in relative radiative equilibrium with its sun, so is – over a certain cycle – radiating out the same total amount of energy as it absorbs.

If we apply this definition to Venus, we find that the strength of its rGHE is [737-232=] 505 K. Earth’s is [288-255=] 33 K.

The averaged planetary flux to space is conceptually seen as originating from a hypothetical blackbody “surface” or ‘radiating level’ somewhere inside the planetary system, tied specifically to a calculated emission temperature. This level can be viewed as the ‘average depth of upward radiation’ or the ‘apparent emitting surface’ of the planet as seen from space. Normally it is termed the ERL (‘effective radiating level’) or EEH (‘effective emission height’).

The idea behind the ERL is pretty straightforward, but does it accord with reality? The apparent planetary temperature of Venus in space is 231-232K, derived (via the Stefan-Boltzmann law) from its average radiant flux to space, 163 W/m2. Likewise, Earth’s apparent planetary temperature in space is 255K, from its mean flux of 239 W/m2. In both of these cases, the planetary output is assumed to match its input (from the Sun), so one ‘simple’ method one could use to derive the apparent temperature of a planet is to take the TSI (“solar constant”) at the planet’s (or moon’s) particular distance from the Sun, multiply it by (1 – α), where α is its estimated global (Bond) albedo (a number that’s always <1), and finally divide by 4 to spread the intercepted flux over the whole spherical surface. Determining the average global albedo is clearly the main challenge with this method. The most common value provided for Venus is 0.75, for Earth 0.296.
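To make the arithmetic concrete, here is a minimal Python sketch of that method, pushing the absorbed flux through the Stefan-Boltzmann law. The inputs are the commonly cited values; the ~2601 W/m2 TSI at Venus in particular is an assumed textbook figure, not something established here:

```python
# Minimal sketch of the 'apparent planetary temperature' recipe described
# above: absorbed flux = TSI * (1 - albedo) / 4, then invert the
# Stefan-Boltzmann law. Input values are commonly cited figures, assumed
# here for illustration only.

SIGMA = 5.670374e-8  # Stefan-Boltzmann constant [W/m2/K^4]

def apparent_temperature(tsi, bond_albedo):
    """Return (global-mean absorbed flux, blackbody emission temperature)."""
    absorbed = tsi * (1.0 - bond_albedo) / 4.0   # [W/m2], spread over the sphere
    return absorbed, (absorbed / SIGMA) ** 0.25  # [K], from j = sigma * T^4

for name, tsi, albedo in (("Venus", 2601.3, 0.75), ("Earth", 1361.0, 0.296)):
    flux, temp = apparent_temperature(tsi, albedo)
    print(f"{name}: ~{flux:.0f} W/m2 absorbed -> apparent temperature ~{temp:.0f} K")
```

Run as-is, this returns ~163 W/m2 and ~231 K for Venus, and ~239 W/m2 and ~255 K for Earth – the very figures quoted above.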

But does the resulting value really say anything about the actual planetary temperature? If the planet absorbs a mean radiant flux (net SW) below its ToA, then how that flux affects the overall system temperature depends very much on the system’s total bulk heat capacity. If the heat capacity is large, the flux will have little effect; if it is small, the effect will be correspondingly bigger.
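A toy illustration of that dependence, with water-column depths picked arbitrarily just to show the scaling (none of these numbers describe a real planet):

```python
# Toy illustration: the same radiant-flux imbalance warms a small heat
# reservoir far faster than a large one. Depths are arbitrary assumptions
# chosen only to show the scaling.

RHO_WATER = 1000.0        # density of water [kg/m3]
C_WATER = 4186.0          # specific heat of water [J/kg/K]
SECONDS_PER_YEAR = 3.156e7

flux = 1.0  # hypothetical net absorbed flux imbalance [W/m2]

for depth in (1.0, 100.0):
    heat_capacity = RHO_WATER * C_WATER * depth     # [J/K per m2 of surface]
    rate = flux * SECONDS_PER_YEAR / heat_capacity  # warming rate [K/yr]
    print(f"{depth:5.0f} m water column: ~{rate:.3f} K/yr per 1 W/m2")
```

The same 1 W/m2 that would warm a 1 m water column by roughly 7.5 K in a year nudges a 100 m column by less than 0.08 K.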

Continue reading

‘To heat a planetary surface’ for dummies; Part 5a

In 1938, English steam technologist Guy Stewart Callendar wrote what proved to be a seminal – one might even venture to call it the foundational – paper of the entire modern AGW pipe-dream movement: a rather determined effort at postulating what we today call the “Radiative (Atmospheric) Greenhouse Effect” (rGHE), or, as some people would prefer it, the “Callendar Effect”.

In his paper – “The Artificial Production of Carbon Dioxide and Its Influence on Temperature” – Callendar argued that the increase in global atmospheric CO2 concentration due to our industrial endeavours would (and did) warm the world because of the alleged augmenting influence of this IR-active molecule on the so-called “sky radiation” (what we today call “(atmospheric) downwelling longwave radiation” (DLR, DWLWIR), more commonly known simply as “back radiation”):

“Few of those familiar with the natural heat exchanges of the atmosphere, which go into the making of our climates and weather, would be prepared to admit that the activities of man could have any influence upon phenomena of so vast a scale.

In the following paper I hope to show that such influence is not only possible, but is actually occurring at the present time.”

Notice here how Callendar was well aware that, with his hypothesis, he was challenging a generally accepted scientific paradigm of his time, one which held that our climate and weather are natural phenomena with purely natural drivers, which cannot in any meaningful way be influenced (globally, at least) by human activity.

Callendar claimed that it can. And that it does. He even went so far as to claim he could show it …

Well, then; by all means bring it on! To quote Carl Sagan:

“Extraordinary claims require extraordinary evidence.”

Continue reading

The “enhanced” greenhouse effect that wasn’t

Update (March 24th) at the end of this post – a kind of response from Feldman.



There was much ado recently about a new paper published in ‘Nature’ (“Observational determination of surface radiative forcing by CO2 from 2000 to 2010″ by Feldman et al.), claiming to have observed a strengthening in CO2-specific “surface radiative forcing” of about 0.2 W/m2 per decade at two sites in North America, from 2000 to the end of 2010 (a period of 11 years). Through this observation, the authors further claim to have shown empirically (allegedly for the first time outside the laboratory) that the rise in atmospheric CO2 concentration directly and positively affects the surface energy balance, adding more and more energy to it as “back radiation” (“downwelling longwave (infrared) radiation” (DWLWIR)) and thus – by implication – leading to surface warming.

In other words, Feldman et al. claim to have obtained direct empirical evidence – from the field – of a strengthening of the “greenhouse effect”, a result, it would seem, lending considerable support to the hypothesis that our industrial emissions of CO2 and other similar gaseous substances into the atmosphere have enhanced, and are indeed still enhancing, the Earth’s atmospheric rGHE, thus causing global surface warming – the AGW proposition.

From the abstract:

“(…) we present observationally based evidence of clear-sky CO2 surface radiative forcing that is directly attributable to the increase, between 2000 and 2010, of 22 parts per million atmospheric CO2.”

And,

“These results confirm theoretical predictions of the atmospheric greenhouse effect due to anthropogenic emissions, and provide empirical evidence of how rising CO2 levels (…) are affecting the surface energy balance.”
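For scale, here is a minimal sketch of the sort of ‘theoretical prediction’ being invoked – the standard simplified expression for CO2 forcing at the top of the atmosphere from Myhre et al. (1998). The 2000 and 2010 concentrations below are assumed round figures consistent with the paper’s 22 ppm rise, and note that this TOA number is not the same quantity as the surface forcing Feldman et al. report:

```python
# Minimal sketch of the canonical simplified CO2-forcing expression
# (Myhre et al. 1998): dF = 5.35 * ln(C/C0), a top-of-atmosphere figure.
# The concentrations below are assumed values consistent with the 22 ppm
# rise cited in the paper, not numbers taken from it.
import math

C0, C1 = 370.0, 392.0  # assumed global-mean CO2 [ppm] in 2000 and 2010
delta_f = 5.35 * math.log(C1 / C0)
print(f"TOA forcing for a {C1 - C0:.0f} ppm rise: ~{delta_f:.2f} W/m2 per decade")
# Feldman et al.'s ~0.2 W/m2/decade is a *surface* forcing trend, a
# related but distinct quantity.
```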

So the question is: Do these results really “confirm theoretical predictions of the atmospheric greenhouse effect due to anthropogenic emissions”?

Of course they don’t. As usual, the warmists refuse to look at the whole picture, insisting rather on staying inside the tightly confined space of their own little bubble model world.

Continue reading

‘Noise + Trend’?

Judith Curry recently asked the following question in her blog post “The 50-50 argument”:

“So, how to sort this out and do a more realistic job of detecting climate change and (…) attributing it to natural variability versus anthropogenic forcing? Observationally based methods and simple models have been underutilized in this regard.”

There is a very simple way of doing this that people at large still seem to be absolutely blind to. To echo the words of ‘Statistician to the Stars!’ William M. Briggs: “Just look at the data!” You have to do it in detail, both temporally and spatially. I have done this already here, here and here + a summary of the first three here. In this post I plan to highlight even more clearly the difference between what an anthropogenic (‘CO2 forcing’) signal would and should look like and what a signal pointing to natural processes looks like.
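Before turning to Curry’s own points, here is a toy sketch of what that difference can look like – a modest linear trend plus an assumed ~65-yr oscillation, with every number invented purely for illustration:

```python
# Toy 'noise + trend' sketch: a modest linear trend plus a ~65-yr
# oscillation produces a much steeper apparent trend when fitted over the
# 1980-2010 upswing alone. All numbers are invented for illustration;
# nothing here is fitted to a real temperature series.
import math

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

years = list(range(1900, 2011))
trend = 0.005             # assumed underlying trend [K/yr]
amp, period = 0.12, 65.0  # assumed oscillation amplitude [K] and period [yr]
# Phase picked so the oscillation bottoms out around 1977 and peaks around
# 2010, loosely mimicking the multidecadal story discussed below.
series = [trend * (y - 1900)
          + amp * math.sin(2 * math.pi * (y - 1993.25) / period)
          for y in years]

full = ols_slope(years, series) * 10             # K/decade, 1900-2010
short = ols_slope(years[80:], series[80:]) * 10  # K/decade, 1980-2010
print(f"1900-2010 trend  : ~{full:.2f} K/decade")
print(f"1980-2010 'trend': ~{short:.2f} K/decade")
```

With this assumed phasing, the 1980-2010 window yields roughly 0.14 K/decade against a full-record trend of about 0.06 K/decade: ‘a wash’ over the long run, but emphatically not over a window shorter than the oscillation itself.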

Curry has many sensible points. She says among other things:

“Because historical records aren’t long enough and paleo reconstructions are not reliable, the climate models ‘detect’ AGW by comparing natural forcing simulations with anthropogenically forced simulations. When the spectra of the variability of the unforced simulations is compared with the observed spectra of variability, the AR4 simulations show insufficient variability at 40-100 yrs, whereas AR5 simulations show reasonable variability. The IPCC then regards the divergence between unforced and anthropogenically forced simulations after ~1980 as the heart of their detection and attribution argument. (…)

The glaring flaw in their logic is this. If you are trying to attribute warming over a short period, e.g. since 1980, detection requires that you explicitly consider the phasing of multidecadal natural internal variability during that period (e.g. AMO, PDO), not just the spectra over a long time period. Attribution arguments of late 20th century warming have failed to pass the detection threshold which requires accounting for the phasing of the AMO and PDO. It is typically argued that these oscillations go up and down, in net they are a wash. Maybe, but they are NOT a wash when you are considering a period of the order, or shorter than, the multidecadal time scales associated with these oscillations.

Further, in the presence of multidecadal oscillations with a nominal 60-80 yr time scale, convincing attribution requires that you can attribute the variability for more than one 60-80 yr period, preferably back to the mid 19th century. Not being able to address the attribution of change in the early 20th century to my mind precludes any highly confident attribution of change in the late 20th century.”

And,

Continue reading

How the world really warmed between the 70s and the 00s, Part I

It bores me to death ceaselessly having to argue against assertive warmist claims about the effects of altogether hypothetical mechanisms whereby increasing amounts of CO2 in the atmosphere are said to somehow warm the surface of the earth by radiative means – effects simply presupposed as real, but – unfailingly – never supported by observational evidence from the real world.

It is the perfect circular argument. The perfect corruption of the scientific method. They never have to show that their claimed mechanism is working as postulated, because they already know it does. In advance. It’s there. Behind ‘the natural noise’.

‘Discussing’ this topic with the warmists, on their preset terms, from their compulsively linear (that is, CO2-bound) world perspective, thus makes about as much sense as arguing about the biological link between unicorns and horses.

Continue reading