How the IPCC turn calculated numbers into heat

‘Climate Science™’ (represented and promoted by the IPCC) has so corrupted ordinary people’s way of thinking that, in order to demonstrate why there is no ‘atmospheric radiative greenhouse effect’ (rGHE), you have to start all the way from scratch. You have to step completely outside the framework of their concocted ‘mental model’ within which they shape their arguments.

‘Climate Science™’ is afflicted with a dual case of monomania, two major fixations that they cannot and will not drop under any circumstances.

The first one is a full-blown linear trend line mania. They are unable to look at a data time series and not mentally project one onto it. The data – and especially the variation in it – basically doesn’t matter. Only the straight trend line plastered across it, from one end to the other, does.
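
To caricature the habit: below is a toy sketch in Python (the synthetic series and all numbers are invented purely for illustration) of the end-to-end straight line being described, fitted to data that by construction contains no underlying trend at all.

```python
import numpy as np

# Synthetic series: an oscillation plus noise -- no underlying trend.
rng = np.random.default_rng(0)
t = np.arange(120)
series = np.sin(2 * np.pi * t / 60.0) + rng.normal(0.0, 0.3, t.size)

# The straight line plastered across it, from one end to the other:
slope, intercept = np.polyfit(t, series, 1)
print(f"fitted slope: {slope:+.4f} per step")  # nonzero despite no real trend
```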

The second one, of direct relevance to this post, is their peculiar obsession with radiative flux intensities and their perceived direct correlation with the surface temperature of objects, expressed by the purely radiative Stefan-Boltzmann relationship. They clearly misinterpret and hence stretch the applicability of this law in the real world far beyond its actual justified range of operation, but absolutely refuse to recognise it. They worship (and use) it as sanctified truth.

Basically, they see the world in terms of radiation first and last. Everything in their world is in the end determined and controlled by thermal radiation. When it comes down to it, according to the warmists, you can simply scrape away everything else and just look at instantaneous radiative emission fluxes and directly know surface temperatures. As if we all lived in Max Planck’s conceptually pure radiative universe.

‘Climate Science™’ thinks (or promotes the idea) that the temperature of any object – even a real-world object on Earth – is determined strictly by its radiative energy output (its emission flux), and likewise that this final temperature is known and fixed from the very onset of heating, simply by the instantaneous intensity of its radiative energy input (the absorbed flux) minus convective loss (!).

In other words, if you know only the total (added) intensity of the instantaneous radiative energy flux input to the surface of an object, and you are at the same time able to determine its energy loss through convection per unit of time, you will be able to tell its final temperature, no actual temperature measurement required. Or, turning it around: if you know the temperature of an object, you instantly know the intensity of its radiative energy output, regardless of any simultaneous convective loss of energy.

(Well, you also need to know its surface emissivity/absorptivity, but according to ‘Climate Science™’ most relevant real-world materials (like soil, rock, water, vegetation) possess emissivities close to unity anyway, and so can be approximated as (convecting) black bodies …!)
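
To make the bookkeeping concrete, here is a minimal sketch in Python of the purely radiative calculation just described (the function name and example value are mine; this illustrates the reasoning under criticism, it does not endorse it): invert the Stefan-Boltzmann law on the absorbed flux minus the convective loss, and out comes a ‘final’ temperature.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_temperature(absorbed_flux, convective_loss=0.0, emissivity=1.0):
    """Treat the net flux Q = absorbed - convected as Q = eps*sigma*T^4
    and solve for T. A sketch of the reasoning under discussion, not an
    endorsement of its real-world validity."""
    net_flux = absorbed_flux - convective_loss
    return (net_flux / (emissivity * SIGMA)) ** 0.25

# 165 W/m^2 absorbed, no convection, emissivity 1:
print(round(radiative_temperature(165.0)))  # -> 232 (kelvins)
```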

‘Noise + Trend’?

Judith Curry recently asked the following question in her blog post “The 50-50 argument”:

“So, how to sort this out and do a more realistic job of detecting climate change and (…) attributing it to natural variability versus anthropogenic forcing? Observationally based methods and simple models have been underutilized in this regard.”

There is a very simple way of doing this that people at large still seem to be absolutely blind to. To echo the words of ‘Statistician to the Stars!’ William M. Briggs: “Just look at the data!” You have to do it in detail. Both temporally and spatially. I have done this already here, here and here + a summary of the first three here. In this post I plan to highlight even more clearly the difference between what an anthropogenic (‘CO2 forcing’) signal would and should look like and a signal pointing to natural processes.
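
As a toy illustration of that contrast, consider the sketch below (Python; the shapes, amplitudes and the ~65-yr period are assumptions made purely for the sketch): a smooth, accelerating rise and a multidecadal oscillation can both slope upward over a short window, which is exactly why the phasing question matters.

```python
import numpy as np

years = np.arange(1880, 2021)
anthro = 5e-5 * (years - 1880) ** 2  # smooth, accelerating monotonic rise
# ~65-yr oscillation; phase chosen (an assumption) so its rising limb
# spans 1980-2000:
natural = 0.25 * np.sin(2 * np.pi * (years - 1990) / 65.0)

w = (years >= 1980) & (years <= 2000)
for name, y in (("anthropogenic-like", anthro), ("natural-like", natural)):
    slope = np.polyfit(years[w], y[w], 1)[0]
    print(f"{name}: {slope:+.4f} per yr over 1980-2000")  # both positive
```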

Curry has many sensible points. She says among other things:

“Because historical records aren’t long enough and paleo reconstructions are not reliable, the climate models ‘detect’ AGW by comparing natural forcing simulations with anthropogenically forced simulations. When the spectra of the variability of the unforced simulations are compared with the observed spectra of variability, the AR4 simulations show insufficient variability at 40-100 yrs, whereas AR5 simulations show reasonable variability. The IPCC then regards the divergence between unforced and anthropogenically forced simulations after ~1980 as the heart of their detection and attribution argument. (…)

The glaring flaw in their logic is this. If you are trying to attribute warming over a short period, e.g. since 1980, detection requires that you explicitly consider the phasing of multidecadal natural internal variability during that period (e.g. AMO, PDO), not just the spectra over a long time period. Attribution arguments of late 20th century warming have failed to pass the detection threshold which requires accounting for the phasing of the AMO and PDO. It is typically argued that these oscillations go up and down, in net they are a wash. Maybe, but they are NOT a wash when you are considering a period of the order of, or shorter than, the multidecadal time scales associated with these oscillations.

Further, in the presence of multidecadal oscillations with a nominal 60-80 yr time scale, convincing attribution requires that you can attribute the variability for more than one 60-80 yr period, preferably back to the mid 19th century. Not being able to address the attribution of change in the early 20th century to my mind precludes any highly confident attribution of change in the late 20th century.”

Why ‘atmospheric radiative GH warming’ is a chimaera

Science of Doom (SoD) has apparently issued a challenge of some sort to a commenter going by the name of ‘Bryan’. This is how SoD describes Bryan:

“Bryan needs no introduction on this blog, but if we were to introduce him it would be as the fearless champion of Gerlich and Tscheuschner.”

And the challenge appears to be a return to the ‘Steel Greenhouse’, a setup that is meant to convey in the simplest possible way the basic mechanism behind ‘atmospheric radiative greenhouse warming’ of the surface of the Earth.
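
For reference, the textbook arithmetic usually attached to the ‘Steel Greenhouse’ runs as follows (a Python sketch of the conventional presentation, not necessarily SoD’s exact formulation; the 235 W/m² figure is an illustrative value only): a shell around an internally heated sphere radiates the sphere’s flux both outward and inward, so in the claimed steady state the surface must emit twice that flux.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def steel_greenhouse(internal_flux):
    """Conventional 'Steel Greenhouse' balance (blackbody surfaces,
    shell radius ~ sphere radius): the shell emits F outward and F
    inward, so the sphere's surface must emit 2F in steady state."""
    shell_T = (internal_flux / SIGMA) ** 0.25
    surface_T = (2.0 * internal_flux / SIGMA) ** 0.25
    return surface_T, shell_T

surf, shell = steel_greenhouse(235.0)  # illustrative internal flux
print(f"surface: {surf:.0f} K, shell: {shell:.0f} K")  # ~302 K, ~254 K
```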

On Heat, the Laws of Thermodynamics and the Atmospheric Warming Effect

On average, Earth’s solar-heated global surface is warmer than the Moon’s by as much as 90 degrees Celsius! This is in spite of the fact that the mean solar flux – evened out globally and across the diurnal cycle – absorbed by the latter is almost 80% more intense than the one absorbed by the former.

The Earth’s global surface, absorbing on average 165 W/m² from the Sun, has a mean temperature of ~288K (+15°C).

The Moon’s global surface, absorbing on average 295 W/m² from the Sun, has a mean temperature of <200K (-75°C).

A pure solar radiative equilibrium for each of the two bodies (according to the Stefan-Boltzmann equation: Q = σT⁴, assuming emissivity (ε) = 1) would provide them with maximum steady-state mean global temps of 232K (-41°C) and 269K (-4°C) respectively.
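
A quick back-of-the-envelope check of those two figures (a Python sketch, using the stated fluxes and ε = 1):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

for body, flux in (("Earth", 165.0), ("Moon", 295.0)):
    T = round((flux / SIGMA) ** 0.25)  # invert Q = sigma*T^4, emissivity = 1
    print(f"{body}: {T} K ({T - 273} C)")
# Earth: 232 K (-41 C)
# Moon: 269 K (-4 C)
```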

As you can well gather from this, the Earth’s surface is 56 degrees warmer than its ideal solar radiative equilibrium temperature, while the lunar surface is at least 70 degrees colder than its ideal solar radiative equilibrium temperature. That’s a spread of no less than 126 degrees! On average …

Still, these two celestial bodies are at exactly the same distance from the Sun: 1 AU.

So what could possibly account for this astounding difference between such close neighbours?

Very simple: The Earth has an atmosphere. The Moon doesn’t.