And also how – in the process – it shows the new RSSv4 TLT series to be wrong and the UAHv6 TLT series to be right.
For those of you who aren’t entirely up to date with the hypothetical idea of an “(anthropogenically) enhanced GHE” (the “AGW”) and its supposed mechanism for (CO2-driven) global warming, the general principle is fairly neatly summed up here:
Figure 1. From Held and Soden, 2000 (Fig.1, p.447).
I’ve modified this diagram somewhat below, so as to further clarify the concept of “the raised ERL (Effective Radiating Level)” – referred to as Ze in the schematic above – and how it is meant to ‘drive’ warming within the Earth system; the aim is simply to bring the message of this fundamental premise of “AGW” thinking across more clearly.
First, we have the “no forcing” (t0) scenario, where Earth’s ‘effective emission’ to space comes from lower (and thus warmer) tropospheric layers:
Then we have the “doubled CO2” (t1) scenario, where the ERL has been pushed higher up into cooler air layers closer to the tropopause:
An atmospheric greenhouse gas enables a planet to radiate at a temperature lower than the ground’s, if there is cold air aloft. It therefore causes the surface temperature in balance with a given amount of absorbed solar radiation [ASR] to be higher than would be the case if the atmosphere were transparent to IR. Adding more greenhouse gas to the atmosphere makes higher, more tenuous, formerly transparent portions of the atmosphere opaque to IR and thus increases the difference between the ground temperature and the radiating temperature. The result, once the system comes into equilibrium, is surface warming.
So when the atmosphere’s IR opacity increases with the excess input of CO2, the ERL is pushed up, and with it the temperature at ALL ALTITUDE-SPECIFIC LEVELS of the Earth system – from the surface (Ts) up through the troposphere (Ttropo) to the tropopause – is forced to rise as well, since these levels are directly connected via the so-called environmental lapse rate, i.e. the negative temperature profile rising up through the tropospheric column.
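For illustration only: with a constant lapse rate and an unchanged effective emission temperature at the (raised) ERL, the implied surface warming follows directly from the geometry. The specific heights and the ~150 m rise used below are my own illustrative numbers, not values from the text:

```python
# Back-of-envelope sketch of the "raised ERL" premise (illustrative values only).
# Assumes a fixed environmental lapse rate and an unchanged effective
# emission temperature (Te) at the new, higher ERL.

LAPSE_RATE = 6.5   # K/km, a typical mean tropospheric lapse rate
Te = 255.0         # K, Earth's approximate effective emission temperature

def surface_temp(erl_km, lapse_rate=LAPSE_RATE, te=Te):
    """Surface temperature implied by an ERL at height erl_km,
    with Te fixed at that level and a constant lapse rate below it."""
    return te + lapse_rate * erl_km

t0 = surface_temp(5.0)   # "no forcing": ERL at ~5 km (assumed height)
t1 = surface_temp(5.15)  # "doubled CO2": ERL pushed ~150 m higher (assumed)
print(f"Ts before: {t0:.2f} K, after: {t1:.2f} K, warming: {t1 - t0:.2f} K")
```

On these assumed numbers, a ~150 m lift of the ERL translates into roughly 1 K of surface warming once the whole profile re-equilibrates.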
How, then, is this mechanism supposed to manifest itself?
Well, the ERL is basically the “effective atmospheric layer of OUTWARD (upward) radiation” – the one conceptually/mathematically responsible for the All-Sky OLR flux at the ToA – and will from now on, in this post, rather be dubbed the EALOR. As it is lifted higher, into cooler layers of air, the diametrically opposite level – the “effective atmospheric layer of INWARD (downward) radiation” (EALIR), the one conceptually/mathematically responsible for the All-Sky DWLWIR ‘flux’ (or “the atmospheric back radiation”) to the surface – is simultaneously, and for the same physical reason, only inversely so, pulled down, into warmer layers of air closer to the surface. This latter concept was explained as early as 1938 by G.S. Callendar:
When radiation takes place from a thick layer of gas, the average depth within that layer from which the radiation comes will depend upon the density of the gas. Thus if the density of the atmospheric carbon dioxide is altered it will alter the altitude from which the sky radiation of this gas originates. An increase of carbon dioxide will lower the mean radiation focus, and because the temperature is higher near the surface the radiation is increased, without allowing for any increased absorption by a greater total thickness of the gas.
Feldman et al., 2015, (as an example) confirm that this is still how “Mainstream Climate Science (MCS)” views this ‘phenomenon’:
Surface forcing represents a complementary, underutilized resource with which to quantify the effects of rising CO2 concentrations on downwelling longwave radiation. This quantity is distinct from stratosphere-adjusted radiative forcing at the tropopause, but both are fundamental measures of energy imbalance caused by well-mixed greenhouse gases.
The gist being that, when we make the atmosphere more opaque to IR by putting more CO2 into it, “the atmospheric back radiation” (all-sky DWLWIR at sfc) will naturally increase as a result, reducing the radiative heat loss (net LW) from the surface up. And do note: it will increase regardless of (and thus on top of) any atmospheric rise in temperature, which would itself cause an increase. Which is to say that it will always distinctly increase RELATIVE TO tropospheric temps as well (which are, by definition, altitude-specific – fixed at one particular level, like ‘the lower troposphere’ (LT)). That is, even when tropospheric temps do go up, the DWLWIR should be observed to increase systematically and significantly MORE than what we would expect from the temperature rise alone. Because the EALIR moves further down.
Conversely, at the other end, at the ToA, the EALOR moves the opposite way, up into colder layers of air, which means the all-sky OLR (the outward emission flux) should rather be observed to systematically and significantly decrease over time relative to tropospheric temps. If tropospheric temps were to go up, while the DWLWIR at the surface should be observed to go significantly more up, the OLR at the ToA should instead be observed to go significantly less up, because the warming of the troposphere would simply serve to offset the ‘cooling’ of the effective emission to space due to the rise of the EALOR into colder strata of air.
Which would lead us to expect the average OLR intensity level, if there weren’t also Earth system warming going on due to some other mechanism besides that of an “enhanced GHE”, like, say, an increase in solar heat input (+ASR), to stay relatively constant (flat) as tropospheric temps keep rising (the simultaneous effects of incrementally higher Ttropo and incrementally higher Ze effectively cancelling each other out), giving rise to an evident, gradual, yet ever-growing divergence between the two parameters. We see this basic expectation pointed out in the original caption of Fig.1 above, from Held and Soden, 2000: “Note that the effective emission temperature (Te) [after a doubling of atmospheric CO2] remains unchanged.”
Worth bearing in mind, however: We would expect to observe such gradual, ever-growing divergence between OLR and TLT in all cases, if there is in fact “GH enhancement” going on, not just in the basic case of no extrinsic warming (from, say, +ASR), i.e. flat OLR. If the OLR itself does go up, (due to e.g. solar warming), the TLT will simply end up rising faster and more (solar warming + “greenhouse warming”).
What we’re looking for, then, if indeed there is an “enhancement” of some “radiative GHE” going on in the Earth system, causing global warming, is ideally the following:
- OLR stays flat, while TLT increases significantly and systematically over time;
- TLT increases systematically over time, but DWLWIR increases even significantly more.
Effectively summed up in this simplified diagram:
However, we also expect to observe one more “greenhouse” signature.
If we expect the OLR at the ToA to stay relatively flat, but the DWLWIR at the sfc to increase significantly over time, even relative to tropospheric temps, then, if we were to compare the two (OLR and DWLWIR) directly, we’d, after all, naturally expect to see a fairly remarkable systematic rise in the latter over the former (refer to Fig.4 above).
Which means we now have our three ways to test the reality of a hypothesized “enhanced GHE” as a ‘driver’ (cause) of global warming.
The null hypothesis in this case would claim or predict that, if there is NO strengthening “greenhouse mechanism” at work in the Earth system, we would observe:
- The general evolution (beyond short-term, non-thermal noise (like ENSO-related humidity and cloud anomalies or volcanic aerosol anomalies))* of the All-Sky OLR flux at the ToA to track that of Ttropo (e.g. TLT) over time;
- the general evolution of the All-Sky DWLWIR at the surface to track that of Ttropo (Ts + Ttropo, really) over time;
- the general evolution of the All-Sky OLR at the ToA and the All-Sky DWLWIR at the surface to track each other over time, barring short-term, non-thermal noise.
* (We see how the curve of the all-sky OLR flux at the ToA differs quite noticeably from the TLT and DWLWIR curves, especially during some of the larger thermal fluctuations (up or down), normally associated with particularly strong ENSO events. This is because there are factors other than pure mean tropospheric temperatures that affect Earth’s final emission flux to space, like the concentration and distribution (equator→poles, surface→tropopause/stratosphere) of clouds, water vapour and aerosols. These may (and do) all vary strongly in the short term, significantly disrupting the normal temperature↔flux (Stefan-Boltzmann) connection, but in the longer term, they display a remarkable tendency to even out, leaving the tropospheric temperature signal as the only real factor to consider when comparing the OLR with Ttropo (TLT). Or not. The “AGW” idea specifically contends – indeed, it rests on the premise – that these other factors (crucially also including CO2, of course) do NOT even out over time, but rather accrue in a positive (‘warming’) direction.)
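One way to make “tracks that of Ttropo over time” operational is to regress the residual between two (common-unit) series against time and ask whether the trend is distinguishable from zero. A minimal sketch, run here on synthetic stand-in data rather than the actual CERES/UAH records:

```python
import numpy as np

def residual_trend(series_a, series_b):
    """OLS trend (per time step) of the residual a - b.
    Both inputs must already be in common units (e.g. K) and anomalies."""
    resid = np.asarray(series_a) - np.asarray(series_b)
    t = np.arange(len(resid))
    slope, _intercept = np.polyfit(t, resid, 1)
    return slope

# Synthetic example: two series sharing the same signal, differing only in noise.
rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(0, 0.02, 216))   # 18 years of monthly steps
a = signal + rng.normal(0, 0.1, 216)
b = signal + rng.normal(0, 0.1, 216)
print(f"residual trend: {residual_trend(a, b):+.5f} K/month")  # near zero: they "track"
```

A residual trend near zero is what the null hypothesis above predicts for each of the three pairings; a significant, sustained slope would be the “greenhouse” signature.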
The first point above we have already covered extensively. The combined ERBS+CERES OLR record is seen to track the general progression of the UAHv6 TLT series tightly, both in the tropics and near-globally, all the way from 1985 till today (the last ~33 years), as discussed at length both here and here.
Since, however, in this post we’re specifically considering the CERES era alone, this is how the global OLR matches against the global TLT since 2000:
This is simply the monthly CERES OLR flux data properly scaled (x0.266), enabling us to compare it more directly to temperatures (W/m2→K), and superimposed on the UAH TLT data. Watch how closely the two curves track each other, beyond the obvious noise. To highlight this striking state of relative congruity, we remove the main sources of visual bias in Fig.5 above. Notice, then, how the red OLR curve quickly re-establishes itself right back on top of the TLT curve after the 4-year stretch of fairly large ENSO events (La Niña–El Niño–La Niña) between 2007/2008 and 2011/2012, during which the cyan TLT curve goes both much lower (during the flanking La Niñas) and much higher (during the central El Niño) – right back to where it used to be prior to that intermediate stretch of strong ENSO influence. And as a result, there is NO gradual divergence whatsoever to be spotted between the mean levels of these two curves, from the beginning of 2000 to the end of 2015:
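The scaling step itself is easy to reproduce. A minimal sketch with placeholder arrays (the real inputs would be the CERES EBAF and UAHv6 monthly anomaly files; the x0.266 factor is simply 1/3.76, the Stefan–Boltzmann sensitivity 4σT³ at an emission temperature of ~255 K):

```python
import numpy as np

# Placeholder monthly anomalies; the real inputs would be the CERES EBAF Ed4
# All-Sky OLR (W/m^2) and UAHv6 TLT (K) global monthly series.
olr_anom_wm2 = np.array([0.5, -0.3, 1.1, 0.2])
tlt_anom_k = np.array([0.12, -0.10, 0.28, 0.03])

# Scale the flux into equivalent kelvin: multiply by 0.266, i.e. divide by
# ~3.76 W/m^2/K, the Stefan-Boltzmann sensitivity 4*sigma*T^3 at ~255 K.
olr_anom_k = olr_anom_wm2 * 0.266

# The residual is then simply the difference between the two scaled curves.
residual = olr_anom_k - tlt_anom_k
print(residual)
```

With the two series in common units, any systematic divergence would show up directly as a drift in the residual.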
This already quite definitive observation can still be further confirmed in a difference (“TLT residual”) plot:
Yes, there’s a big lift there at the end, but there’s nothing ‘gradual’ or ‘systematic’ about it. In fact, it happens to occur – abruptly and precipitously – towards the end of 2015, right before the time when the 2015/2016 El Niño is about to top out. It is very easy in such a case for your eyes to fall victim to a so-called “end-point bias”, making you think you see a genuine overall divergence. But you don’t. The particular divergence that is inevitably drawing your eyes to it, started only right there, towards the end. If we simply remove it, the plot in Fig.7 would look like this:
And poof! Gone is the sense of an upward incline …
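The effect such a late spike has on an apparent trend is easy to demonstrate. A hedged sketch with a purely synthetic residual series – flat noise for most of the record, with a sharp excursion appended at the end (standing in for the 2015/2016 El Niño months; none of these numbers are from the actual data):

```python
import numpy as np

def trend_per_decade(series, months_per_year=12):
    """OLS trend of a monthly series, expressed per decade."""
    t = np.arange(len(series)) / months_per_year  # time in years
    slope, _intercept = np.polyfit(t, series, 1)
    return slope * 10

rng = np.random.default_rng(1)
flat = rng.normal(0.0, 0.1, 192)   # 16 "flat" years of monthly noise
spike = np.full(24, 0.4)           # 2-year excursion tacked onto the end
resid = np.concatenate([flat, spike])

print(f"with end spike: {trend_per_decade(resid):+.3f} K/decade")
print(f"spike removed:  {trend_per_decade(resid[:192]):+.3f} K/decade")
```

A fitted trend over the full series comes out clearly positive even though 16 of the 18 years are trendless – exactly the “end-point bias” the text warns about.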
Now, at this stage some might ask – and rightfully so – why I keep using the UAHv6 TLT dataset only in these comparisons. There are, after all, other tropospheric temperature datasets out there. Well, there is at least ONE very good reason why, and I will get to that soon enough, as it happens to be half the point of this entire post. In the meantime, please bear with me …
The second point above is just as relevant as the first one, if we want to confirm (or disconfirm) the reality of an “enhanced GHE” at work in the Earth system. We compare the tropospheric temperatures with the DWLWIRsfc ‘flux’, that is, the apparent atmospheric thermal emission to the surface:
Figure 9. Note how the scaling of the flux (W/m2) values differs close to the surface from that at the ToA. Here at the DWLWIR level, down low, we divide by 5 (x0.2), while at the OLR level, up high, we divide by 3.76 (x0.266).
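These two divisors are consistent with a Stefan–Boltzmann sensitivity dF/dT = 4σT³ evaluated at two different effective emission temperatures: roughly 255 K for the outgoing flux aloft and roughly 280 K for the downwelling flux near the surface (the 280 K figure is my inference here, not a number given in the text). A quick check:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def sb_sensitivity(t_kelvin):
    """dF/dT = 4*sigma*T^3: flux change per kelvin of emission temperature."""
    return 4 * SIGMA * t_kelvin**3

print(f"at 255 K (ToA/OLR):    {sb_sensitivity(255):.2f} W/m^2 per K")  # ~3.76
print(f"at 280 K (sfc/DWLWIR): {sb_sensitivity(280):.2f} W/m^2 per K")  # ~4.98
```

The warmer the effective emission level, the more W/m² each kelvin is worth, hence the larger divisor (≈5 rather than 3.76) down near the surface.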
We once again observe a rather close match overall. At the very least, we can safely say that there is no evidence whatsoever of any gradual, systematic rise in DWLWIR over the TLT, going from 2000 to 2018. If we plot the difference between the two curves in Fig.9 to obtain the “DWLWIR residual”, this fact becomes all the more evident:
Remember now how the idea of an “enhanced GHE” requires the DWLWIR to rise significantly more than Ttropo (TLT) over time, and that its “null hypothesis” therefore postulates that such a rise should NOT be seen. Well, do we see such a rise in the plot above? Nope. Not at all. Which fits in perfectly with the impression we got at the ToA, where the TLT-curve was supposed to rise systematically up and away from the OLR-curve over time, but didn’t – no observed evidence there either of any “enhanced GHE” at work.
Let’s also, just for the fun of it (?), create a 60/40 compound Ts+Ttropo curve and compare that to the DWLWIR curve in Fig.9. (60/40 in favour of the surface, because the EALIR is hypothetically supposed to be situated ~1.5 km above the surface on average, while the weighted mean of the UAHv6 TLT measurements appears to lie at a level ~3.8 km above the surface; one could thus naturally infer that there should be a sizeable surface signal apparent in the DWLWIR ‘flux’ as well.) This is how it would look:
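The compound curve itself is just a weighted average of the two anomaly series. A minimal sketch (the 60/40 weighting is the post’s own choice; the arrays here are placeholders for the actual Ts and UAHv6 TLT anomaly records):

```python
import numpy as np

def compound_anomaly(ts_anom, tlt_anom, w_surface=0.6):
    """Weighted blend of surface (Ts) and lower-troposphere (TLT) anomalies."""
    ts = np.asarray(ts_anom)
    tlt = np.asarray(tlt_anom)
    return w_surface * ts + (1.0 - w_surface) * tlt

# Placeholder anomalies (K); real inputs would be monthly Ts and UAHv6 TLT data.
ts = np.array([0.10, 0.25, -0.05, 0.30])
tlt = np.array([0.05, 0.40, -0.15, 0.20])
print(compound_anomaly(ts, tlt))
```

The blended series is then compared against the (scaled) DWLWIR curve in exactly the same way as the plain TLT series was.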
Finally, the third point above is also pretty interesting. It is simply to verify whether or not the CERES EBAF Ed4 ‘radiation flux’ data products are indeed suggesting a strengthening of some radiatively defined “greenhouse mechanism”. We sort of know the answer to this already, though, from going through points 1 and 2 above. Since neither the OLR at the ToA nor the DWLWIR at the surface deviated meaningfully from the UAHv6 TLT series (the same one used to compare with both, after all), we expect rather by necessity that the two CERES ‘flux products’ also shouldn’t themselves deviate meaningfully overall from one another. And, unsurprisingly, they don’t:
Difference plot (“DWLWIR residual”):
Again, it is so easy here to allow oneself to be fooled by the visual impact of that late – obviously ENSO-related – peak, and, in this case, also a definite ENSO-based trough right at the start (you’ll plainly recognise it in Figs. 9, 11 and 12); another perfect example of how one’s perception and interpretation of a plot is directly affected by “the end-point bias”. Don’t be fooled:
Remember what was pointed out early on upthread:
If we expect the OLR at the ToA to stay relatively flat, but the DWLWIR at the sfc to increase significantly over time, even relative to tropospheric temps, then, if we were to compare the two (OLR and DWLWIR) directly, we’d […] naturally expect to see a fairly remarkable systematic rise in the latter over the former (refer to Fig.4 above).
Looking at Figs. 12 and 14, and taking into account the various ENSO states along the way, does such a “remarkable systematic rise” in DWLWIR over OLR manifest itself during the CERES era?
I’m afraid not …
UAHv6 vs. RSSv4
In what way, then, does the CERES EBAF Ed4 data provide the evidence we need to confirm that the new RSSv4 TLT series is fundamentally flawed and that the UAHv6 TLT series is much closer to the truth, thus strongly supporting my decision to use the latter one as the sole representative of tropospheric temperatures (Ttropo) against the CERES ‘radiation flux’ data?
We’ve already put the UAHv6 TLT series to the test (in the above). We will now basically let the RSSv4 TLT series go through that same procedure.
What we need to do is simply to compare the TLT data with a) the All-Sky OLR at the ToA, and b) the All-Sky DWLWIR at the sfc, to check for internal consistency.
There are four options here:
1. Your TLT series is correct, and the (radiatively defined) “GHE” is strengthening.
2. Your TLT series is correct, and the “GHE” is not strengthening.
3. Your TLT series is incorrect, and the “GHE” is strengthening.
4. Your TLT series is incorrect, and the “GHE” is not strengthening.
For 1. to be the case, we need to observe your TLT series rising significantly and systematically relative to the all-sky OLR flux at the ToA, and at the same time the all-sky DWLWIR ‘flux’ to the sfc rising significantly and systematically relative to your TLT series.
For 2. to be the case, the OLR and DWLWIR should be observed to follow the same general course over time, no systematic divergence beyond natural noise, and your TLT series will be seen to track them both quite precisely over time.
For 3. to be the case, the DWLWIR should be observed to rise sharply over time relative to the OLR, but your TLT series won’t be seen to track between the two, as it should – rising systematically faster than the OLR, but more slowly than the DWLWIR.
For 4. to be the case, the OLR and DWLWIR should be observed to follow the same general course over time, but your TLT series won’t be seen to track either.
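This decision procedure can be made explicit. A hedged sketch, reducing each “rising systematically relative to” comparison to the sign of a residual trend (the function names and the threshold separating ‘systematic divergence’ from mere noise are mine, purely illustrative):

```python
import numpy as np

def resid_trend(a, b):
    """OLS trend of the residual a - b (per time step); inputs in common units."""
    r = np.asarray(a) - np.asarray(b)
    return np.polyfit(np.arange(len(r)), r, 1)[0]

def classify(tlt, olr, dwlwir, thresh=1e-3):
    """Map the residual trends onto the four scenarios listed above."""
    tlt_up_vs_olr = resid_trend(tlt, olr) > thresh     # TLT rising away from OLR?
    dwl_up_vs_tlt = resid_trend(dwlwir, tlt) > thresh  # DWLWIR rising away from TLT?
    dwl_up_vs_olr = resid_trend(dwlwir, olr) > thresh  # the "GHE" signature itself
    if tlt_up_vs_olr and dwl_up_vs_tlt:
        return 1  # TLT correct, "GHE" strengthening
    if not tlt_up_vs_olr and not dwl_up_vs_tlt and not dwl_up_vs_olr:
        return 2  # TLT correct, no strengthening: all three track
    if dwl_up_vs_olr:
        return 3  # "GHE" strengthening, but TLT fails to track between the fluxes
    return 4      # fluxes track each other; TLT tracks neither -> TLT suspect

# Synthetic check: three series sharing one common signal -> scenario 2.
t = np.arange(216)
signal = 0.001 * t
print(classify(signal, signal, signal))
```

A TLT series that drifts upward relative to OLR while the two flux series track each other lands, by this logic, in scenario 4.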
So what’s it gonna be?
We know from before that the general path taken by the UAHv6 TLT data between 2000 and 2018 matches impressively – and reassuringly – well with the general path taken by both the CERES EBAF Ed4 All-Sky OLR at the ToA and the All-Sky DWLWIR at the sfc data. This lends strong support to two parallel ideas at once: that there is no observable strengthening of any radiatively defined “GHE” going on in the Earth system, and that the UAHv6 TLT series is more or less correct in its rendition of tropospheric temperature (Ttropo) evolution since the millennium. Scenario 2. above appears to be the one …
But what about the RSSv4 TLT data?
We first compare it with the all-sky OLR data, the radiative flux moving up and out of the Earth system to space through the top of the atmosphere (ToA):
And the difference between the two curves in Fig.15 plotted:
What do we see?
Even discounting those last 2+ years (from late 2015 to early 2018), we see a fairly obvious gradual and steady increase over time in TLT over OLR, from the relatively low average level of the first 4-5 years to the relatively high average level of the 3-4 years connecting the trough of the 2011/2012 La Niña and the peak of the 2015/2016 El Niño.
In other words, if the RSSv4 TLT series is correct, the two plots above are both clearly indicating a strengthening “greenhouse mechanism”, as radiatively defined, at work in the Earth system.
However, if this is indeed to be the case, there should also be an equally obvious radiative signature to be observed at the opposite end, down towards the surface. And here the situation should be inverted – TLT minus DWLWIR should slope systematically down, not up. Or, if we want to be consistent, comparing directly with the “DWLWIR residual” using the UAHv6 TLT series upthread, we would expect DWLWIR minus TLT to slope systematically up, which is basically saying the same thing, only the other way around.
Well, here’s RSSv4 TLT vs. CERES EBAF Ed4 All-Sky DWLWIR at the sfc:
And the difference plot to go along – DWLWIR minus TLT. Remember, according to “AGW theory”, it should go consistently and significantly UP:
That’s the exact OPPOSITE of what we’d expect! If what was indicated in those ToA plots above were actually true. If the RSSv4 TLT series were actually correct.
There is no internal consistency to be found here …
IOW: The RSSv4 TLT series easily fails this simple test.
You can’t have it both ways. You can’t both have your TLT data increase significantly more over time than the ToA OLR data, and at the same time have it increase similarly more over time than the Sfc DWLWIR data. It makes absolutely no physical sense!
So, for the RSSv4 TLT series, scenario 4. above is the one that clearly portrays the real situation. Your TLT series is incorrect. (And there is no strengthening of a radiatively defined “GHE” going on in the Earth system.)