Why “GISTEMP LOTI global mean” is wrong and “UAHv6 tlt gl” is right

Ten days ago, Nick Stokes wrote a post on his “moyhu” blog where he – in his regular, guileful manner – does his best to distract from the pretty obvious fact (pointed out in this recent post of mine) that GISS, poleward of ~55 degrees of latitude, and most notably in the Arctic, basically use land data only, effectively rendering their “GISTEMP LOTI global mean” product a bogus record of actual global surface temps.

Among other things, he says:

“The SST products OI V2 and ERSST, used by GISS then and now, adopted the somewhat annoying custom of entering the SST under sea ice as -1.8°C. They did this right up to the North Pole. But the N Pole does not have a climate at a steady -1.8°C. GISS treats this -1.8 as NA data and uses alternative, land-based measure. It’s true that the extrapolation required can be over long distances. But there is a basis for it – using -1.8 for climate has none, and is clearly wrong.

So is GISS “deleting data”? Of course not. No-one actually measured -1.8°C there. It is the standard freezing point of sea water. I guess that is data in a way, but it isn’t SST data measured for the Arctic Sea.”

The -1.8°C averaging bit is actually a fair and interesting point in itself, but this is what Stokes does: he finds a peripheral detail somehow related to the actual argument being made and proceeds to misrepresent its significance, diverting people’s attention from the real issue at hand. The real issue in this case is of course GISS’s (bad) habit of smearing anomaly values from a small collection of land data points all across the vast polar cap regions, all the way down to 55-60 degrees of latitude: over wide tracts of land (where for the most part we don’t have any data), over expansive stretches of ocean (where we do have SST data readily available), AND over complex regions affected by sea ice (where we again have data – SSTs, once more – when and where there isn’t any sea ice cover, but none whatsoever when there is).
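To make the -1.8°C detail concrete: the SST products enter the freezing point of sea water in ice-covered cells, and GISS’s processing flags that sentinel value as missing (NA) before any averaging or interpolation. A minimal sketch of the difference this makes – hypothetical grid values and plain NumPy, not GISS’s actual code:

```python
import numpy as np

# Hypothetical 1-D strip of gridded SSTs in deg C; -1.8 marks ice-covered cells
sst = np.array([3.2, 0.5, -1.8, -1.8, -1.8, 2.1])

# Naive mean treats the -1.8 sentinel as if it were a real measurement
naive_mean = sst.mean()

# GISS-style handling: flag the sentinel as missing first, then average
sst_masked = np.where(np.isclose(sst, -1.8), np.nan, sst)
masked_mean = np.nanmean(sst_masked)  # mean over open-water cells only

print(round(naive_mean, 2))   # sentinel included
print(round(masked_mean, 2))  # sentinel treated as NA
```

The cells flagged as NA are then what gets filled in from elsewhere – which is exactly where the extrapolation from land stations comes in.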

The “Climate Sensitivity” folly

The Lewis & Curry paper of 2014, where they set out to estimate Earth’s climate sensitivity to “GHGs” apparently ‘based on observations’, neatly identifies the fundamental problem with the whole “climate sensitivity” issue:

It is not a scientific proposition. It starts out as a speculation, a mere conjecture, and ends with a circular argument based on that very conjecture.

The conjecture of course being:

“More CO2 in the atmosphere can, will and does cause a net warming of the global surface of the Earth.”

This is the basic premise behind the entire AGW industry – the one thing that HAS TO be correct for any of the other claims being made to even stand a chance of being taken seriously in a proper scientific context.

But has this basic premise ever, anywhere, by anyone, been verified empirically through consistent observations from the real Earth system?

Of course not! Not even remotely so!

It is still nothing but a loose conjecture …

And yet NO ONE seems to acknowledge even in the slightest how this might pose a problem. All you get if you bring it up are shrugs of indifference and/or tuts of disapproval. ‘Go away, we’re discussing real, important issues here!’

The irony …
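For reference, the ‘observation-based’ energy-budget approach of papers like Lewis & Curry reduces to a single algebraic relation, ECS ≈ F_2x · ΔT / (ΔF − ΔQ): warming divided by the forcing change not taken up as heat, scaled to a CO2 doubling. The sketch below uses made-up illustrative numbers, not their actual inputs:

```python
# Energy-budget climate-sensitivity estimate (illustrative numbers only,
# NOT the actual Lewis & Curry input data).
F_2x = 3.71      # W/m^2, assumed forcing from a doubling of CO2
delta_T = 0.75   # K, change in global mean surface temperature
delta_F = 1.98   # W/m^2, change in total radiative forcing
delta_Q = 0.36   # W/m^2, change in planetary heat uptake

ecs = F_2x * delta_T / (delta_F - delta_Q)
print(round(ecs, 2))  # sensitivity in K per CO2 doubling
```

Note that ΔF and F_2x are not measured quantities but modelled ones – which is where the circularity complained about above enters the calculation.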

HadCRUt3 vs. ERA Interim

Climate (or global atmospheric) reanalyses are an alternative way to assess how the global climate evolves over time, a blend of model output and observations. They tend to include a multitude of variables, but I would like to focus on the one specifically pertaining to our recent discussion about GISTEMP vs. HadCRUt3: global temperatures.

There’s a host of different climate reanalyses around; among the most reputable ones, though, are those conducted by the American agencies NCEP (NOAA) and NCAR, the Japanese JMA, and the European ECMWF.

So, what is a climate reanalysis?

ECMWF explains:

“A climate reanalysis gives a numerical description of the recent climate, produced by combining models with observations. It contains estimates of atmospheric parameters such as air temperature, pressure and wind at different altitudes, and surface parameters such as rainfall, soil moisture content, and sea-surface temperature. (…)

ECMWF periodically uses its forecast models and data assimilation systems to ‘reanalyse’ archived observations, creating global data sets describing the recent history of the atmosphere, land surface, and oceans. Reanalysis data are used for monitoring climate change, for research and education, and for commercial applications.

Current research in reanalysis at ECMWF focuses on the development of consistent reanalyses of the coupled climate system, including atmosphere, land surface, ocean, sea ice, and the carbon cycle, extending back as far as a century or more. The work involves collection, preparation and assessment of climate observations, ranging from early in-situ surface observations made by meteorological observers to modern high-resolution satellite data sets. Special developments in data assimilation are needed to ensure the best possible temporal consistency of the reanalyses, which can be adversely affected by biases in models and observations, and by the ever-changing observing system.”
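The “combining models with observations” in that description happens in the assimilation (analysis) step. A one-variable sketch of the idea – a simple optimal-interpolation update with assumed error variances, not ECMWF’s actual 4D-Var system:

```python
# One-variable sketch of the analysis step at the heart of data assimilation:
# blend a model background with an observation, weighted by error variances.
def analysis(background, obs, var_b, var_o):
    """Optimal-interpolation update for a single scalar variable."""
    gain = var_b / (var_b + var_o)  # how much weight the observation gets
    return background + gain * (obs - background)

# Hypothetical temperatures (deg C): model forecast vs. station reading.
# Assumed error variances: model twice as uncertain as the observation,
# so the analysis lands two-thirds of the way toward the observation.
print(analysis(14.0, 15.0, var_b=2.0, var_o=1.0))
```

A reanalysis repeats this kind of update, for millions of variables at once, over every analysis time in the archived record – which is why biases in either the model or the observing system propagate into the result.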



Happy New Year to everyone!

There is a very good reason why the trend and general progression of tropospheric temp anomalies since 2000, as rendered by the new UAH.v6 dataset, are most likely correct. (Read this post to understand why it was necessary for UAH to update their tlt product from its version 5.6 in the first place.)

The reason is that they both match to near perfection the trends and general progression of incoming and outgoing radiation flux anomalies, as rendered by the CERES EBAF ToA Ed2.8 dataset, over that same period. They’re all flat …:


Figure 1. Incoming radiant heat (ASR, “absorbed solar radiation”) (gold) vs. outgoing radiant heat (OLR, “outgoing longwave radiation”) (red) at the global ToA, from March 2000 to July 2015.
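A claim that the series are all flat is easy to make checkable: fit an ordinary least-squares trend to a monthly anomaly series and inspect the slope. A sketch with synthetic, trendless data – not the actual UAH or CERES numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(185)                    # ~March 2000 to July 2015
anoms = rng.normal(0.0, 0.1, months.size)  # synthetic trendless anomalies

# OLS slope in units per month; multiply by 120 for units per decade
slope_per_month = np.polyfit(months, anoms, 1)[0]
slope_per_decade = slope_per_month * 120
print(round(slope_per_decade, 3))
```

Applied to the real UAH.v6 tlt and CERES EBAF monthly anomalies over the same window, the same two lines give the per-decade trends being compared.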