Frankfooter, read this! This is why global warming / climate change is so full of shit!
http://wattsupwiththat.com/2015/08/...for-the-question-is-earth-warming-or-cooling/
An analysis of BEST data for the question:
Is Earth Warming or Cooling? August 11, 2015
Guest essay by Clyde Spencer
The answer to the question is, “Yes!” Those who believe that Earth is going to Hell in a hand basket, because of anthropogenic carbon dioxide, go to extraordinary lengths to convince the public that uninterrupted warming is occurring at an unprecedented rate. One commonly reads something to the effect that the most recent year was the xth warmest year in the last n years (use your personal preferences for x and n), or that the last n years have been the warmest in the last m years. It is common for NOAA to make claims that current temperatures are higher than some previous year by an amount that is of the same order of magnitude as the uncertainty in the temperature of the year being compared to. [For an extended discussion and analysis of the veracity of these kinds of claims, go to this link:
http://www.factcheck.org/2015/04/obama-and-the-warmest-year-on-record/ ] I’d like to start off by examining the logical fallacy of the common idea that these pronouncements support the idea of continued warming. They only provide evidence for it currently being warm!
Let’s conduct a simple thought experiment that most can relate to. Imagine that you have a pot of water on the stove at room temperature. You place a thermometer in the water, take a reading, and turn on the heat. We’ll monitor the increase in temperature by taking frequent readings at fixed intervals. Assume that the thermometer is calibrated in tenths of a degree, and that we’ll try to read it to the nearest half of a tenth. Therefore, we can expect some random errors in the reported temperature because of observation errors. If the pot is not well stirred, some stratification may occur that will further obscure the true average temperature. We can expect to see a steady, approximately linear increase in temperature until the water is nearly at the boiling point. The pot is then removed from the heat, and readings are continued as before. We can expect the water in the pot to cool more slowly than it heated, the rate depending on such factors as the surface-to-volume ratio, the room temperature, and the material of which the pot is constructed. In any event, we can expect that the temperature readings will not change much, if at all, for the first couple of readings. Subsequent readings may or may not be lower because of the random errors mentioned above. Eventually, we will get a reading that is obviously lower than when we removed the pot from the heat. A subsequent one could be slightly higher because of a reading error. If we were to stop at that point, we could make such statements as, “The last n readings are higher than the average of all previous temperatures, which proves that the water is still heating,” or, “The last n readings are the highest ever recorded.” Another classic, applied to one of the last readings that carried a random error: “The probability that the last reading is higher than all other temperatures is 38%.” We know very well that the pot is no longer heating, and it is just sophistry to try to make it appear that it is.
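The thought experiment above can be sketched numerically. This is a minimal toy simulation, not real data: the heating rate, cooling constant, and thermometer precision are all made-up values chosen only to illustrate the fallacy.

```python
import random

random.seed(42)  # deterministic noise for the illustration

def reading(true_temp, precision=0.05):
    """A thermometer reading with a random observation error of up to
    half a tenth of a degree, as in the thought experiment."""
    return true_temp + random.uniform(-precision, precision)

# Heating phase: an approximately linear rise from room temperature.
heating = [reading(20.0 + 2.5 * t) for t in range(30)]  # ends near 92.5 C

# Pot removed from the heat: slow decay back toward room temperature.
temp = 20.0 + 2.5 * 29
cooling = []
for _ in range(10):
    temp -= 0.01 * (temp - 20.0)  # cools far more slowly than it heated
    cooling.append(reading(temp))

readings = heating + cooling

# The fallacy: every post-heating reading still beats the average of the
# heating phase, and most of the warmest readings on record occur AFTER
# the heat was turned off -- yet the pot is unambiguously cooling.
prev_avg = sum(heating) / len(heating)
warmest = sorted(range(len(readings)), key=lambda i: readings[i])[-10:]
print(all(r > prev_avg for r in cooling))
print(sum(1 for i in warmest if i >= len(heating)),
      "of the 10 warmest readings came after heating stopped")
```

Because the cooling is slow relative to the heating, "record warm" statistics remain true long after the trend has reversed.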
Something I find peculiar about modern climatology is the use of so-called temperature anomalies. Anomalies are not unheard of in other disciplines, but there are usually good reasons for using them, such as simplifying a Fourier analysis of a time series. One issue with using anomalies is that if a published graph is reproduced and separated from the metadata in the text of the article, then one is at a loss to know what the anomalies mean; they lose their context. Another issue is that authors are free to choose whatever base period they want, which may not be the same as others use, and that makes it difficult to compare similar analyses. The psychological impression conveyed is that (recent) data points above the baseline are extraordinary. Lastly, the use of anomalies tends to influence the subjective impression of the magnitude of changes, because very small changes are scaled over the full vertical range of the graph. See Figure 2 below, which shows actual temperatures, for a comparison to the anomalies that you are used to seeing in the literature.
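The base-period problem is easy to demonstrate with the arithmetic itself. The temperatures below are entirely hypothetical, invented only to show how two different base periods shift the same data:

```python
# Hypothetical annual mean temperatures (deg C); the values are made up
# purely to illustrate the arithmetic, not real measurements.
temps = {1951: 8.2, 1952: 8.4, 1953: 8.1, 1954: 8.3,
         1981: 8.6, 1982: 8.7, 1983: 8.9, 1984: 8.8}

def anomalies(data, base_years):
    """Anomaly = reading minus the mean over the chosen base period."""
    base = sum(data[y] for y in base_years) / len(base_years)
    return {y: round(t - base, 3) for y, t in data.items()}

a1 = anomalies(temps, [1951, 1952, 1953, 1954])   # base mean = 8.25
a2 = anomalies(temps, [1981, 1982, 1983, 1984])   # base mean = 8.75

# Same data, same shape -- but every anomaly is shifted by the difference
# between the two base-period means (0.5 deg here), so graphs built on
# different base periods are not directly comparable.
print(a1[1983], a2[1983])
```

The shape of the curve is identical either way; only the vertical offset changes, which is exactly why a reproduced anomaly graph without its metadata is ambiguous.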
In the recent NOAA paper by Karl et al. (2015), the authors decided to adjust modern ocean buoy temperatures upward to agree with older, problematic, engine-room water-intake temperatures. The decision to adjust high-quality data to agree with lower-quality data is, at best, unorthodox, and the authors did not give a good reason for it. As one defender remarked, whether one adds temperatures to the anomalies on the right or subtracts them on the left, the slope stays the same. True, but the result is a higher ending temperature than if the more orthodox approach had been taken. Supporters of anthropogenic global warming are then ‘justified’ in claiming an uninterrupted increase in recent temperatures, and any claimed pause in warming becomes an illusion.
I take exception to the practice of conflating Sea Surface Temperatures (SST) with land air temperatures. There are several issues with this practice. A weak excuse is that there is a strong correlation between SST and nighttime air temperatures, but that hardly justifies the practice with modern instrumentation. The biggest problem is that the heat capacity of water is so high that water exhibits strong thermal inertia. That means warm water in contact with the air will always lag behind cooling air temperatures. Thus, even if Earth were to enter a cooling phase, water would be the last to provide evidence for it. Because the theory behind so-called ‘greenhouse warming’ predicts that the air should heat first (or, more properly, cool more slowly), the most sensitive indicator of changes will be found in air temperatures. Using ocean temperatures is analogous to measuring subsurface land temperatures and averaging them with land air temperatures. At relatively shallow depths in the soil, the diurnal temperature changes are smoothed out and, at greater depths, even the seasonal effects are eliminated. Yet we don’t average subsurface ground temperatures with land air temperatures! Why should we average SSTs with land air temperatures? It is a classic example of comparing apples and oranges. SSTs are of interest and provide climate insights, but they should not be averaged with air temperatures!
Lastly, global averages of all temperature readings are typically reported instead of the high and low temperatures. This is important because the highs and lows behave differently, and the lows should be a better indicator of the impact of the so-called ‘greenhouse’ effect.
Fig 1. [image: clyde-spencer-fig1]
Figure 1, above, which shows the differences between the high and low temperatures from the Berkeley Earth Surface Temperature (BEST) data, appears to reflect some abrupt transitions in the behaviors of the two temperatures. My interpretation of Figure 1 is that between about 1870 and 1900, neither the high nor the low temperatures were changing systematically. Then, between about 1900 and 1983, the low temperatures were increasing more rapidly than the high temperatures, causing a decline in the differences. This is what I would expect for a ‘greenhouse’ signal. However, since 1983, it appears that the high temperatures have been increasing more rapidly than the lows, resulting in a steep increase in the difference in the temperatures. I don’t believe that this has been reported before, and it begs for an explanation, since it isn’t something I would expect from carbon dioxide and water vapor alone.
This brings us to the point of my expanded analysis of the BEST temperature data set. Figure 2, below, shows the high and low temperatures for the period of 1870 to mid-2014. The data set starts earlier than 1870, but the uncertainty is so great in the early data that I didn’t feel it contributed much. [Should the reader be interested, there is a graph of land temperature data starting about 1750 at this link:
http://berkeleyearth.lbl.gov/regions/global-land ] The main thing worth noting is that the high temperatures were increasing rapidly in the two decades before my graphs start and the lows were coming down from a high in about 1865. The pastel shading reflects the 95% uncertainty range, which becomes imperceptible by the present day. The green, smooth line is a 6th-order polynomial fit of monthly temperature data that have been smoothed. Rather than attempt any further smoothing of the once-smoothed data, I chose to model the low-frequency response with a polynomial least-squares fit trend-line. This approach to characterizing recent temperature changes is more sophisticated than drawing straight lines through the data, where one is free to choose the start and stop times subjectively; subjective time-periods allow for conscious or unconscious mischief.
Fig. 2. [image: clyde-spencer-fig2]
The 6th-order fit captures nearly 80% of the variance in the high-temperature data. It notably doesn’t do an optimal job of capturing the transient warming events around 1878 and 1902, or the broader warming event of the 1940s. Visually, the 6th-order fit seems to do a good job of characterizing the data from about 1950 to the present day, which is important for the question at hand: whether we are still experiencing warming. Similarly, the 6th-order fit captures more than 89% of the variance in the low-temperature data; visually, the fit appears superior to that for the high-temperature data. Compared with a graph generated from the BEST long-term smoothed data, these regression curves are smoother than the 20-year moving average, but similarly shaped. The point of this exercise, however, isn’t to smooth the data.
It is easy to take the first derivative of a polynomial function and obtain quantitative values for the slope (tangent) of the temperature curve versus time. That is, one can obtain the annualized warming rate at every month for both the high- and low-temperature global averages.
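To make the derivative step concrete, here is a minimal sketch. The polynomial below is a made-up quadratic trend, not the actual 6th-order BEST fit; the same two functions work for a polynomial of any order:

```python
def poly_deriv(coeffs):
    """Derivative of a polynomial whose coefficients are given in
    increasing order of power: [c0, c1, c2, ...] means c0 + c1*t + c2*t^2 ..."""
    return [i * c for i, c in enumerate(coeffs)][1:]

def poly_eval(coeffs, t):
    """Evaluate a polynomial via Horner's rule."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * t + c
    return result

# Hypothetical temperature trend: T(t) = 10 + 0.5*t - 0.01*t^2 (deg C vs. years).
coeffs = [10.0, 0.5, -0.01]
slope = poly_deriv(coeffs)                  # 0.5 - 0.02*t, in deg C per year
print(round(poly_eval(slope, 10), 4))       # warming rate at t = 10: +0.3 C/yr
print(poly_eval(slope, 30) < 0)             # by t = 30 the slope is negative: cooling
```

Evaluating the derivative month by month is exactly what yields the tables of zero-slope (no warming), maximum-slope, and most-recent-slope years discussed below.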
In order to pick up the last six months of 2014, which are missing from the 12-month smoothed data, I repeated the above analysis with the un-smoothed monthly data. There were no surprises, other than that the slopes extrapolated through the last six months of 2014 from the smoothed data were nearly identical to the slopes of the un-smoothed monthly data; the differences are trivial. That the results agree for the last six months is surprising, because all too often, when one tries to extrapolate a polynomial fit beyond the actual data, the curve diverges abruptly! The polynomial coefficients are very similar for both the smoothed and un-smoothed data. The only advantage to showing the un-smoothed monthly data would be to emphasize how much noisier it is than the smoothed data; for brevity, I have omitted the additional graph. Polynomial regressions of lower order gave lower coefficients of determination (R²) and, subjectively, are visually poorer fits.
Let me summarize what the slopes tell us about the temperature records with the tables below. I’ve listed the approximate years when the highs and lows had zero slope (no warming), maximum slope (maximum warming/cooling, point of inflection on the curve), and what has been happening most recently. The slopes are in degrees Celsius change per year. Examine Figure 2 to verify what I’m saying.
[Table: High Temperatures — Year / Slope (°C per year); table values not reproduced]
In summary, Fig. 2 does not support a claim that 2014 had the highest high or low temperatures in modern times, and the analysis suggests we are currently in a cooling phase, not just a plateau.