Tamino’s radiosonde problem, Part 1

Figure 1. RSS vs. RATPAC. Original found here: https://tamino.wordpress.com/2015/12/11/ted-cruz-just-plain-wrong/

A good month ago, the perennially unsavoury character calling himself Tamino once again tried to hold up the spotty “global” network of radiosondes (weather balloons) as somehow a better gauge of the progression and trend of tropospheric temperature anomalies over the last 37 years than the satellites, by virtue of being essentially – as he would glibly put it – “thermometers in the sky”.

So his simple take on the glaring “drift” between current surface records and the satellites over the last 10-12 years is this: The surface records are right and the satellites are wrong. Why? Because the surface records agree with the radiosondes while the satellites don’t! The radiosondes implicitly – in his world – representing “Troposphere Truth”.

And so, when your starting premise goes like this: the radiosondes = thermometers in the sky = troposphere truth, then any “drift” observed between them and the satellites (as in Fig.1 above) will – by default – be interpreted by you as a problem with the latter.

To repeat Tamino’s fairly simplistic reasoning, then, in the form of some sort of logical-sounding argument: Surface and satellites don’t agree. Radiosondes and satellites don’t agree. But surface and radiosondes do agree. Which means the latter two are right, their agreement robustly verifying the ‘rightness’ of each. (And also, the radiosondes represent “Troposphere Truth”.) Which leaves the satellites out in the cold …

There is, however, a definite problem with this line of argument.

It doesn’t hold up to scrutiny … Continue reading

Update on the relationship between the NINO3.4 and global SSTa

More than fifteen months ago I wrote the post “What of the Pause?”, where I tried to analyse the state of the global climate with a special focus on the interesting developments following the 2011/12 La Niña. I have also later discussed that particular time period here.

I have earlier pointed out the close connection between the SSTa in the central-eastern part of the narrow Pacific equatorial zone called “NINO3.4” and “global” SSTa over decadal time frames: the former consistently seems to lead the latter in a tight-knit relationship, firmly constraining the progression of global mean anomalies through time. That progression stays flat (though with much noise) for as long as the NINO3.4 signal remains strong enough to override (and/or control) all other regional signals around the globe, which most of the time it does.

I have then proceeded to show how “global warming” (or “global cooling”) only appears to come about at times when the influence of this tight relationship on the global climate is somehow offset by surface processes elsewhere, meaning outside the NINO3.4 region. This obviously doesn’t happen too often, because it would take a very powerful and persistent process to disrupt and even break the sturdy grip of the NINO3.4 region on the leash with which it controls the generally flat progression of global mean temps over time.

In fact, from 1970 to 2013 it evidently happened only three times, which means that the entire “global warming” between those years is contained within those three instances of abrupt extra-NINO surface heat. Before, between and after them, global temp anomalies obediently follow NINO3.4 in a generally (though pretty noisy) horizontal direction, with no intervening gradual upward (or downward) divergence whatsoever.
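For readers who want to probe this relationship themselves, here is a minimal sketch (not my actual workflow) of how the comparison can be set up from monthly data downloaded from KNMI Climate Explorer. The file names, column layout, lag and scaling factor are all illustrative assumptions, not values taken from this post:

```python
import pandas as pd

# Assumed monthly CSV files with columns: year, month, anom
nino = pd.read_csv("nino34.csv")        # NINO3.4 SST anomalies
glob = pd.read_csv("global_ssta.csv")   # global SST anomalies

for df in (nino, glob):
    df["date"] = pd.to_datetime(df[["year", "month"]].assign(day=1))
    df.set_index("date", inplace=True)

lag_months = 3   # assumed lead of NINO3.4 over global SSTa
scale = 0.33     # assumed amplitude ratio (global / NINO3.4)

merged = pd.DataFrame({
    "global": glob["anom"],
    "nino_scaled": nino["anom"].shift(lag_months) * scale,
}).dropna()

# The residual is what NINO3.4 does not explain; abrupt, persistent offsets
# in its smoothed version would correspond to the "steps" discussed above.
merged["residual"] = merged["global"] - merged["nino_scaled"]
print(merged["residual"].rolling(12, center=True).mean().dropna().tail())
```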

With the year 2015 completed, I felt an update of this NINO3.4-global SSTa relationship was in order. Is there evidence of a new step as of late …?

My answer to this can only be: ‘It is still too early to tell.’ But interesting things have happened – and are indeed still happening – over the last two to three years, since about mid 2013:

Figure 1. NINO3.4 vs. global SSTa.

Continue reading

Why “GISTEMP LOTI global mean” is wrong and “UAHv6 tlt gl” is right

Ten days ago, Nick Stokes wrote a post on his “moyhu” blog where he – in his regular, guileful manner – tries his best to distract from the pretty obvious fact (pointed out in this recent post of mine) that GISS poleward of ~55 degrees of latitude, most notably in the Arctic, basically use land data only, effectively rendering their “GISTEMP LOTI global mean” product a bogus record of actual global surface temps.

Among other things, he says:

“The SST products OI V2 and ERSST, used by GISS then and now, adopted the somewhat annoying custom of entering the SST under sea ice as -1.8°C. They did this right up to the North Pole. But the N Pole does not have a climate at a steady -1.8°C. GISS treats this -1.8 as NA data and uses alternative, land-based measure. It’s true that the extrapolation required can be over long distances. But there is a basis for it – using -1.8 for climate has none, and is clearly wrong.

So is GISS “deleting data”? Of course not. No-one actually measured -1.8°C there. It is the standard freezing point of sea water. I guess that is data in a way, but it isn’t SST data measured for the Arctic Sea.”

The -1.8°C averaging bit is actually a fair and interesting point in itself, but this is what Stokes does: he finds a peripheral detail somehow related to the actual argument being made and proceeds to misrepresent its significance, in an attempt to divert people’s attention from the real issue at hand. The real issue in this case is of course GISS’s (bad) habit of smearing anomaly values from a small collection of land data points all across the vast polar cap regions, all the way down to 55-60 degrees of latitude: over wide tracts of land (where for the most part we don’t have any data), over expansive stretches of ocean (where we do have SST data readily available), AND over complex regions affected by sea ice (where we do indeed have data – SSTs, once again – when and where there isn’t any sea ice cover, but none whatsoever when there is). Continue reading
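To make the point concrete, here is a toy calculation with entirely invented numbers (not GISS’s actual grid, area fractions or anomaly values) showing how much the infilling choice alone can move a polar-cap mean:

```python
# Illustrative toy calculation only. Suppose the cap poleward of 65N is
# ~30% land / 70% ocean-and-sea-ice, the few land stations average +3.0 C
# anomaly and the available SSTs +0.5 C (all numbers invented).
land_frac, ocean_frac = 0.3, 0.7
land_anom, sst_anom = 3.0, 0.5   # assumed example anomalies, deg C

# Method A: extrapolate the land anomaly across the whole cap
# (long-distance interpolation where SST / sea-ice cells are treated as NA)
cap_mean_land_only = land_anom

# Method B: area-weight land and ocean data where each actually exists
cap_mean_blended = land_frac * land_anom + ocean_frac * sst_anom

print(f"land-extrapolated cap mean: {cap_mean_land_only:+.2f} C")
print(f"blended cap mean:           {cap_mean_blended:+.2f} C")
# -> +3.00 C vs +1.25 C: the infilling choice alone moves the cap mean a lot.
```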

The “Climate Sensitivity” folly

The Lewis & Curry paper of 2014, where they set out to estimate Earth’s climate sensitivity to “GHGs” apparently ‘based on observations’, neatly identifies the fundamental problem with the whole “climate sensitivity” issue:

It is not a scientific proposition. It starts out as a speculation, a mere conjecture, and ends with a circular argument based on that very conjecture.

The conjecture of course being:

“More CO2 in the atmosphere can, will and does cause a net warming of the global surface of the Earth.”

This is the basic premise behind the entire AGW industry. The one thing that HAS TO be correct in order for all the other claims made to even stand a chance of being taken seriously in a proper scientific context.

But has this basic premise ever, anywhere, by anyone, been verified empirically through consistent observations from the real Earth system?

Of course not! Not even remotely so!

It is still nothing but a loose conjecture …

And yet NO ONE seems to acknowledge even in the slightest how this might pose a problem. All you get if you bring it up are shrugs of indifference and/or tuts of disapproval. ‘Go away, we’re discussing real, important issues here!’

The irony … Continue reading

HadCRUt3 vs. ERA Interim

Climate (or global atmospheric) reanalyses are an alternative way to assess how the global climate evolves over time, a blend of model and observation. They tend to include a multitude of variables, but I would like to focus on the one specifically pertaining to our recent discussion about GISTEMP vs. HadCRUt3: global temperatures.

There’s a host of different climate reanalyses around; among the most reputable ones, though, are those conducted by the American agencies NCEP (NOAA) and NCAR, the Japanese JMA, and the European ECMWF.


So, what is a climate reanalysis?

ECMWF explains:

“A climate reanalysis gives a numerical description of the recent climate, produced by combining models with observations. It contains estimates of atmospheric parameters such as air temperature, pressure and wind at different altitudes, and surface parameters such as rainfall, soil moisture content, and sea-surface temperature. (…)

ECMWF periodically uses its forecast models and data assimilation systems to ‘reanalyse’ archived observations, creating global data sets describing the recent history of the atmosphere, land surface, and oceans. Reanalysis data are used for monitoring climate change, for research and education, and for commercial applications.

Current research in reanalysis at ECMWF focuses on the development of consistent reanalyses of the coupled climate system, including atmosphere, land surface, ocean, sea ice, and the carbon cycle, extending back as far as a century or more. The work involves collection, preparation and assessment of climate observations, ranging from early in-situ surface observations made by meteorological observers to modern high-resolution satellite data sets. Special developments in data assimilation are needed to ensure the best possible temporal consistency of the reanalyses, which can be adversely affected by biases in models and observations, and by the ever-changing observing system.”
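As a rough illustration of what comes out of such a system, here is a minimal sketch of how a global mean 2 m temperature anomaly is typically derived from a gridded reanalysis field: each grid cell is weighted by the cosine of its latitude before averaging, then a baseline climatology is subtracted. The file name, variable names and baseline period are assumptions for illustration only.

```python
import numpy as np
import xarray as xr

ds = xr.open_dataset("era_interim_t2m.nc")   # assumed monthly-mean 2 m temperature file
t2m = ds["t2m"]                              # assumed dims: (time, latitude, longitude)

# Area weights: grid cells shrink towards the poles as cos(latitude)
weights = np.cos(np.deg2rad(t2m["latitude"]))
global_mean = t2m.weighted(weights).mean(dim=("latitude", "longitude"))

# Anomalies relative to an assumed 1981-2010 monthly climatology
clim = global_mean.sel(time=slice("1981", "2010")).groupby("time.month").mean()
anom = global_mean.groupby("time.month") - clim
print(anom.to_series().tail())
```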

Continue reading

UAH v6 vs. CERES EBAF ToA

Happy New Year to everyone!

There is a very good reason why the trend and general progression of tropospheric temp anomalies since 2000, as rendered by the new UAH.v6 dataset, are most likely correct. (Read this post to understand why it was necessary for UAH to update their tlt product from its version 5.6 in the first place.)

The reason is that they both match to near perfection the trends and general progression of incoming and outgoing radiation flux anomalies, as rendered by the CERES EBAF ToA Ed2.8 dataset, over that same period. They’re all flat …:

Figure 1. Incoming radiant heat (ASR, “absorbed solar radiation”) (gold) vs. outgoing radiant heat (OLR, “outgoing longwave radiation”) (red) at the global ToA, from March 2000 to July 2015. Continue reading
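For those who want to check the claim themselves, the following is a hedged sketch of the kind of trend comparison described above – ordinary least-squares trends over the common 2000–2015 period for the UAH v6 TLT and CERES EBAF ASR/OLR monthly anomaly series. File names and column layout are assumptions; the data themselves come from the respective providers.

```python
import numpy as np
import pandas as pd

series = {
    "UAH v6 TLT": "uah_v6_tlt_global.csv",        # assumed file names with
    "CERES ASR":  "ceres_ebaf_asr_global.csv",    # columns: date, anom
    "CERES OLR":  "ceres_ebaf_olr_global.csv",
}

for name, path in series.items():
    df = pd.read_csv(path, parse_dates=["date"]).set_index("date")
    df = df.loc["2000-03":"2015-07"]              # common CERES EBAF period
    t = np.arange(len(df)) / 12.0                 # time in years
    slope, intercept = np.polyfit(t, df["anom"].values, 1)
    print(f"{name}: {slope*10:+.3f} per decade")  # near zero if the series is flat
```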

Why “GISTEMP LOTI global mean” is wrong and “HadCRUt3 gl” is right

Two renditions of global surface (land+ocean) temperature anomaly evolution since 1970:

Figure 1.

The upper red curve represents the final 46 years of the temperature record most frequently presented to (and therefore most often seen by) the general public: NASA’s official “GISTEMP LOTI global mean” product. There is hardly any “pause” in ‘global warming’ post 1997 to be spotted in this particular time series. It is the one predictably trotted out whenever an AGW ‘doom and gloom’ activist sees the need to ‘prove’ to a sceptic that “global warming” indeed continues unabated and to rub his face in it.

The lower curve in Fig. 1 is an altogether unofficial one. However, it should still be fairly familiar to most: it is the one I have consistently used on this blog to represent actual global surface temperature anomalies since ~1970. It is time to explain (and to show) why …

This particular curve is simply the now defunct UEA/UKMO land+ocean product “HadCRUt3 gl” with an en bloc downward adjustment of 0.064 degrees included from January 1998*. The “Pause” is here vividly seen as but one (albeit an extended one) of several plateaus in an upward, distinctly steplike progression of global temps since the 70s.

* I discussed here why this is a necessary adjustment.
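For completeness, this is all the adjustment amounts to in practice – a minimal sketch (file and column names assumed) applying the stated 0.064-degree en bloc shift from January 1998 onward:

```python
import pandas as pd

# Assumed monthly HadCRUT3 global file with columns: date, anom
had3 = pd.read_csv("hadcrut3_gl_monthly.csv", parse_dates=["date"]).set_index("date")

adjusted = had3["anom"].copy()
adjusted.loc["1998-01-01":] -= 0.064   # en bloc downward shift from Jan 1998

had3["anom_adjusted"] = adjusted
print(had3.loc["1997-10":"1998-03", ["anom", "anom_adjusted"]])
```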

Now, which one of these two renditions is more honest in its attempt to depict the actual “reality” of things? And which one is the result of simply inventing extra warming?

Let’s have a look.

The following analysis uses data acquired from KNMI Climate Explorer and WfT.


I will draw your attention to a remarkable circumstance. Continue reading