First, the SW: measured reflected shortwave (SW) radiation at the top of the atmosphere (ToA), basically an expression of Earth's albedo. TSI (incoming sunlight) at the ToA minus reflected SW (albedo) at the ToA equals the ASR ("absorbed solar radiation") at the ToA, the actual radiant HEAT (net SW) transferred from the Sun to the Earth system as a whole. (ERBS Ed3 + CERES EBAF Ed2.8 vs. ISCCP FD; tropics, 1985-2004 (20 years).)
Figure 1. Original found here: https://tamino.wordpress.com/2015/12/11/ted-cruz-just-plain-wrong/
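The ToA arithmetic described above can be sketched in a few lines. The numbers below are illustrative round values only, not the ERBS/CERES measurements themselves:

```python
# Minimal sketch of the ToA energy-budget arithmetic: ASR = TSI - reflected SW.
# All numbers are illustrative round values, not measured data.

def absorbed_solar_radiation(tsi, reflected_sw):
    """ASR = incoming solar (TSI) minus reflected SW, both at the ToA (W/m^2)."""
    return tsi - reflected_sw

def albedo(tsi, reflected_sw):
    """Planetary albedo = fraction of incoming sunlight reflected back to space."""
    return reflected_sw / tsi

tsi = 340.0           # globally averaged incoming solar at the ToA, W/m^2 (illustrative)
reflected_sw = 100.0  # globally averaged reflected SW at the ToA, W/m^2 (illustrative)

print(absorbed_solar_radiation(tsi, reflected_sw))  # 240.0 W/m^2 of net SW heat
print(albedo(tsi, reflected_sw))                    # ~0.29
```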
A good month ago, the perennially unsavoury character calling himself Tamino once again tried to hold up the spotty “global” network of radiosondes (weather balloons) as somehow a better gauge of the progression and trend of tropospheric temperature anomalies over the last 37 years than the satellites, by virtue of being essentially – as he would glibly put it – “thermometers in the sky”.
So his simple take on the glaring “drift” between current surface records and the satellites over the last 10-12 years is this: The surface records are right and the satellites are wrong. Why? Because the surface records agree with the radiosondes while the satellites don’t! The radiosondes implicitly – in his world – representing “Troposphere Truth”.
And so, when your starting premise goes like this: the radiosondes = thermometers in the sky = troposphere truth, then any “drift” observed between them and the satellites (as in Fig.1 above) will – by default – be interpreted by you as a problem with the latter.
To repeat Tamino’s fairly simplistic reasoning, then, in the form of some sort of logical-sounding argument: Surface and satellites don’t agree. Radiosondes and satellites don’t agree. But surface and radiosondes do agree. Which means the latter two are right, their agreement robustly verifying the ‘rightness’ of each. (And also, the radiosondes represent “Troposphere Truth”.) Which leaves the satellites out in the cold …
There is, however, a definite problem with this line of argument.
More than fifteen months ago I wrote the post “What of the Pause?”, where I tried to analyse the state of the global climate with a special focus on the interesting developments following the 2011/12 La Niña. I have also later discussed that particular time period here.
I have earlier pointed out the close connection between the SSTa in that central-eastern part of the narrow Pacific equatorial zone called "NINO3.4" and "global" SSTa over decadal time frames: the former consistently seems to lead the latter in a tight-knit relationship, firmly constraining the progression of global mean anomalies through time. That progression stays flat (though with much noise) as long as the NINO3.4 signal remains strong enough to override (and/or control) all other regional signals around the globe, which most of the time it does.
I have then proceeded to show how “global warming” (or “global cooling”) only appears to come about at times when the influence of this tight relationship on the global climate is somehow offset by surface processes elsewhere, meaning outside the NINO3.4 region. This obviously doesn’t happen too often, because it would take a very powerful and persistent process to disrupt and even break the sturdy grip of the NINO3.4 region on the leash with which it controls the generally flat progression of global mean temps over time.
In fact, from 1970 to 2013 it evidently only happened three times. Which means that the entire "global warming" between those years is contained within these three instances of abrupt extra-NINO surface heating. Before, between and after them, global temp anomalies obediently follow NINO3.4 in a generally (though pretty noisy) horizontal direction; no intervening gradual upward (or downward) divergence whatsoever.
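The claimed lead-lag relationship is the kind of thing one can check with a simple cross-correlation at a range of lags. A minimal sketch, using synthetic stand-in series (not actual NINO3.4 or global SSTa data) with a known 4-month lead deliberately built in:

```python
# Hedged sketch: cross-correlate two monthly anomaly series at a range of lags
# and find the lag at which the "leader" best predicts the "follower".
# The series here are synthetic stand-ins, NOT real NINO3.4 / global SSTa data.
import numpy as np

rng = np.random.default_rng(0)
n = 480                                            # 40 years of monthly values
nino34 = rng.standard_normal(n).cumsum() * 0.05    # synthetic "NINO3.4" anomalies
lead = 4                                           # build in a known 4-month lead
global_ssta = np.roll(nino34, lead) + rng.standard_normal(n) * 0.02
global_ssta[:lead] = 0.0                           # pad the wrapped-around start

def lag_correlation(leader, follower, max_lag=12):
    """Return (best_lag, best_r): the lag in months at which `leader`
    best predicts `follower`, by Pearson correlation."""
    m = len(leader)
    best = (0, -1.0)
    for lag in range(max_lag + 1):
        r = np.corrcoef(leader[: m - lag] if lag else leader, follower[lag:])[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best

print(lag_correlation(nino34, global_ssta))  # should recover a lead of ~4 months
```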
With the year 2015 completed, I felt an update of this NINO3.4-global SSTa relationship was in order. Is there evidence of a new step as of late …?
My answer to this can only be: ‘It is still too early to tell.’ But interesting things have happened – and are indeed still happening – over the last two to three years, since about mid 2013:
Ten days ago, Nick Stokes wrote a post on his "moyhu" blog where he – in his regular, guileful manner – tries his best to distract from the pretty obvious fact (pointed out in this recent post of mine) that GISS, poleward of ~55 degrees of latitude and most notably in the Arctic, basically uses land data only, effectively rendering its "GISTEMP LOTI global mean" product a bogus record of actual global surface temps.
Among other things, he says:
“The SST products OI V2 and ERSST, used by GISS then and now, adopted the somewhat annoying custom of entering the SST under sea ice as -1.8°C. They did this right up to the North Pole. But the N Pole does not have a climate at a steady -1.8°C. GISS treats this -1.8 as NA data and uses alternative, land-based measure. It’s true that the extrapolation required can be over long distances. But there is a basis for it – using -1.8 for climate has none, and is clearly wrong.
“So is GISS “deleting data”? Of course not. No-one actually measured -1.8°C there. It is the standard freezing point of sea water. I guess that is data in a way, but it isn’t SST data measured for the Arctic Sea.”
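The sentinel-handling Stokes describes can be sketched as follows. The grid values are made up and `mask_ice_sentinel` is a hypothetical helper for illustration, not GISS's actual code:

```python
# Sketch of the sentinel handling described above: SST products report -1.8 degC
# (the standard freezing point of sea water) under sea ice, and that value is
# treated as missing (NA) rather than as a real measurement.
# Grid values here are hypothetical, purely for illustration.
import numpy as np

SEA_ICE_SENTINEL = -1.8  # standard freezing point of sea water, degC

def mask_ice_sentinel(sst_grid):
    """Replace the under-ice sentinel with NaN so it is excluded from averages."""
    sst = np.asarray(sst_grid, dtype=float)
    return np.where(np.isclose(sst, SEA_ICE_SENTINEL), np.nan, sst)

grid = np.array([[12.3, 4.1, -1.8],
                 [-1.8, 0.6, 2.9]])
masked = mask_ice_sentinel(grid)
print(np.nanmean(masked))  # averages only the open-water cells
```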
The -1.8°C averaging bit is actually a fair and interesting point in itself, but this is what Stokes does: he finds a peripheral detail somehow related to the actual argument being made and proceeds to misrepresent its significance in an attempt to divert people’s attention from the real issue at hand. The real issue in this case is of course GISS’s (bad) habit of smearing anomaly values from a small collection of land data points all across the vast polar cap regions, all the way down to 55-60 degrees of latitude: over wide tracts of land (where for the main part we don’t have any data), over expansive stretches of ocean (where we do have SST data readily available), AND over complex regions affected by sea ice (where we do indeed have data (SSTs, once again) when and where there isn’t any sea ice cover, but none whatsoever when there is).
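The kind of distance-weighted smearing at issue can be sketched like this. GISTEMP's published method tapers station weights linearly to zero at 1200 km; everything else below (station positions, anomaly values, helper names) is hypothetical:

```python
# Hedged sketch of distance-weighted anomaly interpolation: a grid cell with no
# data takes a weighted average of nearby station anomalies, with the weight
# falling off linearly to zero at a cutoff radius (1200 km in GISTEMP's
# published method). Stations and positions here are made up.
import math

CUTOFF_KM = 1200.0

def great_circle_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance between two (lat, lon) points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    c = math.sin(p1) * math.sin(p2) + math.cos(p1) * math.cos(p2) * math.cos(dlon)
    return r * math.acos(min(1.0, max(-1.0, c)))

def interpolated_anomaly(cell, stations):
    """stations: list of ((lat, lon), anomaly). Weight tapers to 0 at CUTOFF_KM;
    returns NaN when no station is within range."""
    num = den = 0.0
    for (lat, lon), anom in stations:
        d = great_circle_km(cell[0], cell[1], lat, lon)
        w = max(0.0, 1.0 - d / CUTOFF_KM)
        num += w * anom
        den += w
    return num / den if den else float("nan")

# A data-free Arctic cell inherits the anomaly of a couple of coastal stations:
stations = [((72.0, 40.0), 1.5), ((75.0, 60.0), 2.1)]
print(interpolated_anomaly((80.0, 40.0), stations))
```

The point the sketch makes concrete: a cell with no observations of its own still receives a definite-looking anomaly, entirely inherited from whatever sparse stations happen to sit within the cutoff radius.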
The Lewis & Curry paper of 2014, where they set out to estimate Earth’s climate sensitivity to “GHGs” apparently ‘based on observations’, neatly identifies the fundamental problem with the whole “climate sensitivity” issue:
It is not a scientific proposition. It starts out as a speculation, a mere conjecture, and ends with a circular argument based on that very conjecture.
The conjecture of course being:
“More CO2 in the atmosphere can, will and does cause a net warming of the global surface of the Earth.”
This is the basic premise behind the entire AGW industry. The one thing that HAS TO be correct in order for all the other claims made to even stand a chance of being taken seriously in a proper scientific context.
But has this basic premise ever, anywhere, by anyone, been verified empirically through consistent observations from the real Earth system?
Of course not! Not even remotely so!
It is still nothing but a loose conjecture …
And yet NO ONE seems to acknowledge even in the slightest how this might pose a problem. All you get if you bring it up are shrugs of indifference and/or tuts of disapproval. ‘Go away, we’re discussing real, important issues here!’
Climate (or global atmospheric) reanalyses are an alternative way to assess how the global climate evolves over time, a blend of model and observation. They tend to include a multitude of variables, but I would like to focus on the one specifically pertaining to our recent discussion about GISTEMP vs. HadCRUt3: global temperatures.
There’s a host of different climate reanalyses around; among the most reputable ones, though, are those conducted by the American agencies NCEP (NOAA) and NCAR, the Japanese JMA, and the European ECMWF.
“A climate reanalysis gives a numerical description of the recent climate, produced by combining models with observations. It contains estimates of atmospheric parameters such as air temperature, pressure and wind at different altitudes, and surface parameters such as rainfall, soil moisture content, and sea-surface temperature. (…)
ECMWF periodically uses its forecast models and data assimilation systems to ‘reanalyse’ archived observations, creating global data sets describing the recent history of the atmosphere, land surface, and oceans. Reanalysis data are used for monitoring climate change, for research and education, and for commercial applications.
Current research in reanalysis at ECMWF focuses on the development of consistent reanalyses of the coupled climate system, including atmosphere, land surface, ocean, sea ice, and the carbon cycle, extending back as far as a century or more. The work involves collection, preparation and assessment of climate observations, ranging from early in-situ surface observations made by meteorological observers to modern high-resolution satellite data sets. Special developments in data assimilation are needed to ensure the best possible temporal consistency of the reanalyses, which can be adversely affected by biases in models and observations, and by the ever-changing observing system.”