Tamino’s radiosonde problem, Part 1

RSS vs. RATPAC

Figure 1. Original found here: https://tamino.wordpress.com/2015/12/11/ted-cruz-just-plain-wrong/

A good month ago, the perennially unsavoury character calling himself Tamino once again tried to hold up the spotty “global” network of radiosondes (weather balloons) as somehow a better gauge than the satellites of the progression and trend of tropospheric temperature anomalies over the last 37 years – by virtue of their being, as he would glibly put it, essentially “thermometers in the sky”.

So his simple take on the glaring “drift” between the current surface records and the satellites over the last 10-12 years is this: the surface records are right and the satellites are wrong. Why? Because the surface records agree with the radiosondes while the satellites don’t! The radiosondes – in his world – implicitly represent “Troposphere Truth”.

And so, when your starting premise goes like this: the radiosondes = thermometers in the sky = troposphere truth, then any “drift” observed between them and the satellites (as in Fig.1 above) will – by default – be interpreted by you as a problem with the latter.

To repeat Tamino’s fairly simplistic reasoning, then, in the form of some sort of logical-sounding argument: Surface and satellites don’t agree. Radiosondes and satellites don’t agree. But surface and radiosondes do agree. Which means the latter two are right, their agreement robustly verifying the ‘rightness’ of each. (And also, the radiosondes represent “Troposphere Truth”.) Which leaves the satellites out in the cold …

There is, however, a definite problem with this line of argument.

It doesn’t hold up to scrutiny … Continue reading

Why “GISTEMP LOTI global mean” is wrong and “UAHv6 tlt gl” is right

Ten days ago, Nick Stokes wrote a post on his “moyhu” blog where he – in his regular, guileful manner – tries his best to distract from the pretty obvious fact (pointed out in this recent post of mine) that poleward of ~55 degrees of latitude, most notably in the Arctic, GISS basically uses land data only, effectively rendering its “GISTEMP LOTI global mean” product a bogus record of actual global surface temps.

Among other things, he says:

“The SST products OI V2 and ERSST, used by GISS then and now, adopted the somewhat annoying custom of entering the SST under sea ice as -1.8°C. They did this right up to the North Pole. But the N Pole does not have a climate at a steady -1.8°C. GISS treats this -1.8 as NA data and uses alternative, land-based measure. It’s true that the extrapolation required can be over long distances. But there is a basis for it – using -1.8 for climate has none, and is clearly wrong.

So is GISS “deleting data”? Of course not. No-one actually measured -1.8°C there. It is the standard freezing point of sea water. I guess that is data in a way, but it isn’t SST data measured for the Arctic Sea.”

The -1.8°C averaging bit is actually a fair and interesting point in itself, but this is what Stokes does: he finds a peripheral detail somehow related to the actual argument being made and proceeds to misrepresent its significance, in an attempt to divert people’s attention from the real issue at hand. The real issue in this case is of course GISS’s (bad) habit of smearing anomaly values from a small collection of land data points all across the vast polar cap regions, all the way down to 55-60 degrees of latitude: over wide tracts of land where for the most part we don’t have any data; over expansive stretches of ocean where we do have SST data readily available; and over complex regions affected by sea ice, where we do have data (SSTs, once again) when and where there isn’t any sea ice cover, but none whatsoever when there is. Continue reading
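To see why this smearing matters arithmetically, here is a minimal toy sketch (the anomaly numbers are made up for illustration and have nothing to do with GISS’s actual algorithm) of how an area-weighted global mean shifts when a polar ocean band is filled with an extrapolated land anomaly instead of its own SSTs:

```python
import math

# Toy latitude bands as (band_centre_latitude, anomaly_in_degC) pairs,
# weighted by cos(latitude) as a stand-in for surface area.
bands_sst    = [(-45, 0.2), (0, 0.3), (45, 0.4), (75, 0.3)]  # 75N band keeps its own SSTs
bands_extrap = [(-45, 0.2), (0, 0.3), (45, 0.4), (75, 1.5)]  # 75N band filled from land stations

def global_mean(bands):
    """Area-weighted mean anomaly over the toy latitude bands."""
    weights = [math.cos(math.radians(lat)) for lat, _ in bands]
    return sum(w * anom for w, (_, anom) in zip(weights, bands)) / sum(weights)

print(round(global_mean(bands_sst), 3))     # global mean with local SSTs
print(round(global_mean(bands_extrap), 3))  # global mean with extrapolated land anomaly
```

With these invented numbers the land-station infill lifts the toy global mean by roughly a tenth of a degree; the point is only that the choice of infill method at high latitudes can materially move the global figure.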

Why “GISTEMP LOTI global mean” is wrong and “HadCRUt3 gl” is right

Two renditions of global surface (land+ocean) temperature anomaly evolution since 1970:


Figure 1.

The upper red curve represents the final 46 years of the temperature record most frequently presented to (and therefore most often seen by) the general public: NASA’s official “GISTEMP LOTI global mean” product. There is hardly any “pause” in ‘global warming’ post 1997 to be spotted in this particular time series. It is the one predictably trotted out whenever an AGW ‘doom and gloom’ activist sees the need to ‘prove’ to a sceptic that “global warming” indeed continues unabated, and to rub his face in it.

The lower curve in Fig. 1 is an altogether unofficial one. However, it should still be fairly familiar to most: it is the one I have consistently used on this blog to represent actual global surface temperature anomalies since ~1970. It is time to explain (and to show) why …

This particular curve is simply the now defunct UEA/UKMO land+ocean product “HadCRUt3 gl” with an en bloc downward adjustment of 0.064 degrees applied from January 1998*. The “Pause” is here vividly seen as but one (albeit an extended one) of several plateaus in an upward, distinctly step-like progression of global temps since the 70s.

* I discussed here why this is a necessary adjustment.
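For what it’s worth, the adjustment itself is trivial to apply. A minimal sketch, assuming the monthly anomalies have already been read into (year, month) tuples; the sample values below are invented placeholders, not actual HadCRUt3 data:

```python
# En-bloc downward adjustment of 0.064 degC, applied from January 1998 onward.
ADJUSTMENT = -0.064

def adjust(series):
    """series: list of ((year, month), anomaly) tuples, e.g. parsed from a CSV.
    Returns the same series with the adjustment applied from (1998, 1) on."""
    return [((y, m), round(a + ADJUSTMENT, 3) if (y, m) >= (1998, 1) else a)
            for (y, m), a in series]

# Invented sample values, for illustration only:
sample = [((1997, 12), 0.357), ((1998, 1), 0.483), ((1998, 2), 0.754)]
print(adjust(sample))
```

Tuple comparison does the date test here: `(1997, 12) >= (1998, 1)` is false, so pre-1998 months pass through untouched.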

Now, which one of these two renditions is more honest in its attempt to depict the actual “reality” of things? And which one is the result of simply inventing extra warming?

Let’s have a look.

The following analysis uses data acquired from KNMI Climate Explorer and WfT.

I will draw your attention to a remarkable circumstance. Continue reading

The pressing need for ever-upward temperature adjustments … A matter of life or death to the AGW hype.

In July I wrote a blog post pointing out a strange and very conspicuous step change in global mean temps relative to the trended AMO (North Atlantic SSTa), occurring across the 8-year period 1963-70:


Animation 1.

As you can clearly see, the two curves generally follow each other in remarkable style all the way from 1860 till today, except for the relatively sudden and substantial global upward shift taking place across the last half of the 60s, firmly established by the end of 1970. After this point, the curves are back to tracking each other to an equally impressive degree as before the shift, only now with the global curve raised 0.25 degrees above the North Atlantic.
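One way such an offset can be quantified is to take the mean (global minus North Atlantic) difference over a window before the shift and another after it. A hedged sketch; the annual values below are invented for illustration, and only the ~0.25-degree figure comes from the comparison above:

```python
# Mean offset between two annual anomaly series over a set of years.
def mean_offset(global_t, natl_t, years):
    diffs = [global_t[y] - natl_t[y] for y in years]
    return sum(diffs) / len(diffs)

# Invented annual anomalies (degC), chosen to mimic a ~0.25 deg step:
global_t = {1960: 0.00, 1961: 0.02, 1975: 0.27, 1976: 0.23}
natl_t   = {1960: 0.01, 1961: 0.01, 1975: 0.01, 1976: -0.01}

pre  = mean_offset(global_t, natl_t, [1960, 1961])  # window before the shift
post = mean_offset(global_t, natl_t, [1975, 1976])  # window after the shift
print(round(post - pre, 2))
```

The difference between the two window means is the size of the step change; with real data the windows would of course span many more years on each side of 1963-70.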

So why this step change? How did it occur? Continue reading

‘Noise + Trend’?

Judith Curry just recently asked the following question in her blog post “The 50-50 argument”:

“So, how to sort this out and do a more realistic job of detecting climate change and (…) attributing it to natural variability versus anthropogenic forcing? Observationally based methods and simple models have been underutilized in this regard.”

There is a very simple way of doing this that people at large still seem to be absolutely blind to. To echo the words of ‘Statistician to the Stars!’ William M. Briggs: “Just look at the data!” You have to do it in detail, both temporally and spatially. I have already done this here, here and here, plus a summary of the first three here. In this post I plan to highlight even more clearly the difference between what an anthropogenic (‘CO2 forcing’) signal would and should look like, and a signal pointing to natural processes.

Curry has many sensible points. She says among other things:

“Because historical records aren’t long enough and paleo reconstructions are not reliable, the climate models ‘detect’ AGW by comparing natural forcing simulations with anthropogenically forced simulations. When the spectra of the variability of the unforced simulations is compared with the observed spectra of variability, the AR4 simulations show insufficient variability at 40-100 yrs, whereas AR5 simulations show reasonable variability. The IPCC then regards the divergence between unforced and anthropogenically forced simulations after ~1980 as the heart of their detection and attribution argument. (…)

The glaring flaw in their logic is this.  If you are trying to attribute warming over a short period, e.g. since 1980, detection requires that you explicitly consider the phasing of multidecadal natural internal variability during that period (e.g. AMO, PDO), not just the spectra over a long time period. Attribution arguments of late 20th century warming have failed to pass the detection threshold which requires accounting for the phasing of the AMO and PDO. It is typically argued that these oscillations go up and down, in net they are a wash. Maybe, but they are NOT a wash when you are considering a period of the order, or shorter than, the multidecadal time scales associated with these oscillations.

Further, in the presence of multidecadal oscillations with a nominal 60-80 yr time scale, convincing attribution requires that you can attribute the variability for more than one 60-80 yr period, preferably back to the mid 19th century. Not being able to address the attribution of change in the early 20th century to my mind precludes any highly confident attribution of change in the late 20th century.

And … Continue reading

The strange 60s step change between global temps and the AMO …

Something that’s been on my mind for a while is that strange relationship – or, to be more precise, that conspicuous correlative relationship – between the evolution of North Atlantic (70N-0, 80W-0) SST anomalies (the AMO, only with trend included) and the global temperature anomalies:


Figure 1. Annual AMO (with trend imposed) vs. global temps (HadCRUt4, adjusted down 0.064 degrees post 1998) from 1860 to 2014. Continue reading