This Amuses Me

Some of you may have read the latest post at Climate Audit, which shows the latest IPCC report exaggerated concerns about undernourishment and its relation to global warming. I wasn’t going to respond to it, but then I saw this comment by Richard Drake:

All you’re reporting from the SOD to final (h/t HaroldW also) is truly astonishing. Kudos to Richard Tol but woe to the lead authors and a system that seems to turn a blind eye to such abse. The result being that the world’s undernourished are used by the IPCC to support the unsupportable, “social and economic issues” be damned.

Which is too funny to pass up. Richard Tol abused the IPCC process to a huge degree, yet here, he’s being praised for his stand against “the lead authors and a system that seems to turn a blind eye to such ab[u]se.”

People who have followed this know what I’m talking about. Several months ago, I showed Richard Tol rewrote a section of the latest IPCC report to weaken the concerns expressed in it and to add promotion for his own work. I also showed he wrote a completely new section that depended entirely upon, and heavily promoted, his own work and views. I also showed at least some of Tol’s additions depended entirely upon outdated work.

Finally, as Steve McIntyre comments:

It’s also instructive to see the changes from the Second Draft to the Final Draft in the very first paragraph.

The Second Draft (the one sent to external reviewers) said that other “social and economic issues” were the “main drivers” in food security:

In the Final Draft, the important association of food security to “social and economic issues” was deleted, as well as the conclusion that the contribution of climate change to the price increases was likely small, replacing the language with much more alarmist language:

In other words, although IPCC trumpets its review process, these important changes were not passed by external reviewers, but made by chapter authors.

All of the changes I’m referring to were made after the Second Order Draft had already been reviewed, meaning none of them were “passed by external reviewers.”

An additional similarity is that Richard Tol made dramatic changes to promote outdated results. The section he wrote from scratch said:

Estimates agree on the size of the impact (small relative to economic growth) but disagree on the sign (Figure 10-1). Climate change may be beneficial for moderate climate change but turn negative for greater warming. Impacts worsen for larger warming, and estimates diverge.

But this was based entirely upon estimates from the only two papers which showed any net benefit from climate change (values given as percent):

(Mendelsohn et al. 2000)		0.1
(Tol 2002)				2.3

It’s difficult to see how an estimate of 0.1% could be taken as indicating anything. Regardless, the most recent of these estimates was published in 2002, over a decade ago. McIntyre concludes in his post:

Rather than using up-to-date FAO data showing a steady decline in undernourishment during a period of increasing temperatures (which they either were aware of or ought to have been aware of), the IPCC chose to feature an increase in an obsolete data set…. And, in particular, why did IPCC highlight a supposed increase in “provisional” data (more precisely now long obsolete data) when the increase changed to a decrease in the up-to-date version of the data?

The source used in the section McIntyre discusses was from 2008. Not only did Tol depend upon even older results, those results were also obsolete. This is a graph showing his old results, indicating there could be positive effects for up to ~2C of warming:

[Figure: 8-11-Tol-Original]

This is an updated version of the graph:

[Figure: 8-11-Tol-Updated]

This graph shows no net benefits from global warming. That’s a huge change. It completely undermines an important conclusion Richard Tol added into the IPCC report outside the normal review process.

Unlike in McIntyre’s case, this graph was published only after Tol made his additions to the IPCC report. However, that difference matters less because the graph was published only as a result of concerns Bob Ward first raised in October of 2013. Tol repeatedly refused to admit having made any mistake, insulting anyone who said he had. This resistance delayed the correction of his work until after he was able to modify the IPCC report to favor his obsolete conclusions.


So let’s review. Richard Tol rewrote one section of the IPCC report and wrote an entirely new section from scratch in order to downplay concerns about global warming (and heavily promote his own work). He made these changes only after all external review. Some of these changes depended entirely upon his own, obsolete work.

It’s difficult to see any substantial differences between what Richard Tol did and what Steve McIntyre highlights which don’t make Tol’s actions worse. As such, I have to laugh when I see:

Kudos to Richard Tol but woe to the lead authors and a system that seems to turn a blind eye to such abse.

Unknown authors were criticized for their abuse of the IPCC process. Richard Tol did almost the exact same thing, except maybe worse, and nobody seems to mind. In fact, people praise him!

It’s too funny.



  1. The fitted curve totally depends on the weighting of the value at 1 degree. I consider the beneficial effects for slight warming extremely underestimated by the prophets of doom.

  2. Brandon, net positive benefit isn’t excluded in your second graph at the 95% CL. So it doesn’t actually “completely undermine” anything, even if we were to take these CLs at face value, which we probably shouldn’t.

    The lower bound looks totally bogus though. I’m not sure where it came from, but it looks wrong. The centroid looks too low compared to the data points. Actually the whole graph looks odd.

    More importantly, the question of whether there is a slight benefit or a slight harm to global economic activity for a modest amount of warming is largely irrelevant from a policy perspective. The only time a model outcome matters is when it forces a change in “business as usual”, and that is only going to happen when there is a clear economic advantage to, e.g., amelioration over adaptation.

    IMO, the real question we need to be asking is at what negative percent of impact the risks of market interference are outweighed by their putative benefits.

  3. Hans Erren, if it helps, the regression in those graphs is just a linear + quadratic fit. The outlier at one degree forced the linear component to be positive when there were fewer data points. When more data points were added, that outlier wound up being given less weight.

    Ironically, that outlier causes another issue highlighted by Carrick:

    The lower bound looks totally bogus though. I’m not sure where it came from, but it looks wrong. The centroid looks too low compared to the data points. Actually the whole graph looks odd.

    Because the outlier forced the linear component up, the quadratic component had to compensate by going down more sharply. When the outlier stopped having that much weight, the linear component became negative, thus forcing the quadratic component to go down less sharply. That shows the inconsistent nature of this sort of regression.
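
    To make that concrete, here is a minimal sketch of the kind of through-origin linear + quadratic fit I'm describing. The numbers are made up purely for illustration (they are not the values behind Tol's figure), but they show how the sign of the fitted linear term can hinge on a single positive point at +1C:

```python
import numpy as np

# Hypothetical welfare-impact estimates (% of GDP) at several warming levels.
# These are illustrative numbers only, not the actual values behind Tol's figure.
temps   = np.array([1.0, 1.0, 2.5, 2.5, 3.0, 3.0, 3.0])
impacts = np.array([2.3, -0.4, -0.5, -1.5, -1.3, -1.7, -2.0])

def fit_quadratic_through_origin(x, y):
    """Least-squares fit of y = b1*x + b2*x**2 (no intercept)."""
    X = np.column_stack([x, x**2])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b  # array([b1, b2])

# Fit with every point, then again with the lone positive point at +1C removed
# (removal is just the crudest form of downweighting).
b_all = fit_quadratic_through_origin(temps, impacts)
keep = ~((temps == 1.0) & (impacts > 0))
b_trimmed = fit_quadratic_through_origin(temps[keep], impacts[keep])

print("with the +1C outlier:    b1=%+.2f, b2=%+.2f" % tuple(b_all))
print("without the +1C outlier: b1=%+.2f, b2=%+.2f" % tuple(b_trimmed))
# The thing to look for: the lone positive estimate pulls the linear term (b1)
# upward, and the quadratic term (b2) has to compensate by being more strongly
# negative. Remove (or downweight) that one point and b1 changes sign while b2
# becomes much shallower.
```

    With those illustrative numbers, the linear term goes from clearly positive to slightly negative once the single +1C point is dropped, which is the kind of fragility I'm describing.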

    Carrick:

    Brandon, net positive benefit isn’t excluded in your second graph at the 95% CL. So it doesn’t actually “completely undermine” anything, even if we were to take these CLs at face value, which we probably shouldn’t.

    Say what? If you publish results showing benefits, then you update your data and find no benefits, your update completely undermines any conclusion you previously wrote saying, “Global warming will have benefits.” That confidence intervals fail to exclude a result does not change the fact there is no longer evidence to support that result.

    I get you can always go back to an argument from ignorance and say, “We don’t know for sure global warming will be bad,” but that’s not how anyone would interpret what is said in the IPCC report. The IPCC report doesn’t even show confidence intervals!

    By the way, I want to stress my issues aren’t just a matter of what one can see in a graph. For instance, Tol made the report say estimates “disagree on the sign” of impacts by relying entirely upon work (of his) from over a decade ago, but he phrased it in a way that made it almost impossible for a reader to notice how outdated that work was.

    The IPCC report is supposed to inform readers of how our state of knowledge has changed since the last one. Tol did the exact opposite, and he did it without any external review. I don’t think the topic he did this on is particularly important, but I think it shows the IPCC process is horribly screwed up.

  4. It’s not clear to me why you refer to Tol 2002 as obsolete. Is there a later version of the same paper that reaches different conclusions by using more up-to-date statistics? The issue at CA involved using old data when more recent data from the same source was available, which told a different story. Your updated graph has the same two dots at +1.0C that the original graph did, one well positive and the other barely negative. I don’t feel comfortable excluding positive impact at +1.0C on the basis of negative impact shown in different estimates from larger increases.

    You do have a good point about the lack of review and the ability to unilaterally tout your own results.

  5. Dale Stephenson, I haven’t complained about using the 2002 point in the graph. My only issue with including it in any graph is I think it’s absurd to use a regression which gives a single outlier like that so much weight, but that’s not an issue here (as the IPCC graph doesn’t display the regression).

    My issue with the use of that data point is in the text because it says:

    Since AR4, four new estimates of the global aggregate impact on human welfare of moderate climate change were published (Bosello et al., 2012; Maddison and Rehdanz, 2011; Roson and van der Mensbrugghe, 2012), including two estimates for warming greater than 3°C. Estimates agree on the size of the impact (small relative to economic growth) but disagree on the sign (Figure 10-1).

    The first sentence refers to estimates made since the last IPCC report. Discussing new results is exactly what the IPCC report is about. It’s supposed to show how our state of knowledge has changed since the last report.

    But the second sentence is incredibly misleading. It begins with the word “[e]stimates” instead of “these estimates” because it doesn’t refer to the new work discussed in the previous sentence. Most people won’t catch that trick. Most people will assume there is disagreement on the sign of impact in the recent science. There isn’t. The only disagreement is that, a decade ago, a little work showed different results.

    Nobody reading that text would realize the only estimates which “disagree on the sign” are two papers from over a decade ago. That is wrong. It is intentionally misleading. If Richard Tol wanted to point out the disagreement between some past estimates and the current estimates, that’s fine. He just needed to inform readers that’s what he was doing. Instead, he intentionally misled readers into believing the current estimates “disagree on the sign” when they don’t.

  6. Brandon, I think you make a good point about the sentences being misleading. If you make the natural assumption that the estimates mentioned in the second sentence are the new estimates of the first sentence, you’ll definitely be fooled. But I disagree that no one reading the text would catch the trick. Looking at the figure 10-1 you reproduced in your post on “Richard Tol Inserts his Outdated Conclusions” shows the new estimates as red diamonds instead of blue dots, and they’re all clearly negative. As figure 10-1 plots all the estimates (without the lines shown in this post), one look at the figure would make clear both the source and context for the second and third sentences. To be completely fooled, you’d need to read the text and ignore the figure. I find the second and third sentences to be technically true, but misleading in isolation, and perhaps they were meant to be.

    I have not heard and do not have the impression that the purpose of the IPCC is simply to discuss “new results”. The relevant quote I get from the IPCC’s site is “the full scientific and technical assessment of climate change.” Do you really think the focus of this section should be on the four new estimates instead of the full body of work?

    Further, I’m still not getting why you’re referring to a “disagreement” between past estimates and the current estimates. Of the four new estimates, all four are estimating based on temperature changes not used by the earlier estimates in 10-1. None of them are anywhere near Tol’s +1.0C. Of the three estimates within shouting distance of an earlier used temperature change, only one is sufficiently different from prior estimates to be considered out-of-line (-11.5% at +3.2C).

    There’s only one estimate that could be considered a disagreement with Tol ’02 based on figure 10-1, and that’s Rehdanz and Maddison ’05. A look at table 10.B.1, which you reproduced in your earlier post, shows that the ’05 paper (nearly as old as Tol) uses different methodology and different coverage (self-reported happiness). [Ironically, the recent estimate that appears as a huge negative outlier is the other paper based on self-reported happiness.]

    Table 10.B.1 also shows that of the ten Enumeration studies used, Tol ’02 is actually the third most recent. Tol ’95 is actually in the bottom half of +2.5C estimates. You also laid stress on the estimates from Mendelsohn ’02 being calculated by Tol (in your earlier post), but table 10.B.1 shows him calculating the estimates from 8 different papers, including the recent paper that’s the biggest negative outlier.

    I think it’s fair to criticize Tol for post-review changes and the wording of sentences two and three, but I don’t think it’s fair at all to refer to Tol ’02 as outdated, obsolete, or disagreeing with more recent research.

    There’s one other curiosity from Table 10.B.1 I don’t understand, and that’s the parenthetical results under Impact. Do you know what those are from?

    It’s a pity the table doesn’t include the margin of error for the estimates themselves. Given the small impact magnitude, I’d be surprised if the majority of estimates didn’t have 0.0 in their range.

  7. Dale Stephenson, perhaps I should have been more clear. When I said nobody reading the text would catch the trick, I was limiting my remark to people reading the text. I was not meaning to suggest nobody who looked at other things could catch it. It’s obviously possible for people to look at the one figure, notice the new entries are all negative then look in the table and see the age of any of the older values.

    I have not heard and do not have the impression that the purpose of the IPCC is simply to discuss “new results”. The relevant quote I get from the IPCC’s site is “the full scientific and technical assessment of climate change.” Do you really think the focus of this section should be on the four new estimates instead of the full body of work?

    Discussing new results is not the only purpose, but updating the state of knowledge is. The IPCC instructs authors to inform readers of the past state of knowledge then show them what’s changed. That means not merely discussing the four new estimates, but discussing how they affect our previous state of knowledge. That’s why I said the report is “supposed to show how our state of knowledge has changed since the last report.” You can’t show a change by only discussing the new stuff. You have to compare it to the old.

    Further, I’m still not getting why you’re referring to a “disagreement” between past estimates and the current estimates. Of the four new estimates, all four are estimating based on temperature changes not used by the earlier estimates in 10-1. None of them are anywhere near Tol’s +1.0C. Of the three estimates within shouting distance of an earlier used temperature change, only one is sufficiently different from prior estimates to be considered out-of-line (-11.5% at +3.2C).

    The IPCC report says the estimates disagree on the sign. I pointed out that can only be true if we use old results. I don’t see a point in questioning me for trying to keep the same phrasing as the IPCC.

    You also laid stress on the estimates from Mendelsohn ’02 being calculated by Tol (in your earlier post), but table 10.B.1 shows him calculating the estimates from 8 different papers, including the recent paper that’s the biggest negative outlier.

    Which is something I strenuously object to. When asked to show his calculations for his aggregations, he refused, mocking the people who asked by suggesting only an idiot would need him to show something which is easy to do. The reality is one can aggregate results in different ways, and without knowing what he did, it’s impossible to be sure his results are appropriate.

    I think it’s fair to criticize Tol for post-review changes and the wording of sentences two and three, but I don’t think it’s fair at all to refer to Tol ’02 as outdated, obsolete, or disagreeing with more recent research.

    There is no indication Tol ’02 is representative of anything related to the current state of knowledge. No work since it has agreed with its results. If a decade of work subsequent to it reaches different conclusions, I think calling it outdated and obsolete is fair.

  8. I think the point I’m stumbling over is why you think more recent studies based on a *different* temperature rise somehow make Tol ’02 obsolete and outdated, even if the method and coverage were the same, which they’re not.

    There’s exactly one other study in the figure that estimates impact at +1.0C. It’s marginally more recent than Tol ’02, but it uses a different method and different coverage (its coverage is confined to self-reported happiness). It seems to me like you’re saying that because all but two of the +1.9C or more impact papers show a negative sign, a positive impact at +1.0C is ruled out. I don’t think that logically follows, especially when all but three of the other estimates are within 2.5% of zero, including Tol ’02 itself. There is no requirement that the curve from the current temperature will be of the same sign throughout. If the climate optimum is higher than today (neither proven nor disproven AFAIK) then global warming would necessarily cause a positive impact to that point, and a negative impact thereafter.

    If Tol 2002 is to be regarded as obsolete and outdated, I need a reason to think *its* conclusions are flawed, not evidence that papers asking a *different* question got a different answer. If anything, the lack of anything directly comparable to Tol ’02 in the last decade suggests to me that whether a mild temperature gain is beneficial or not does not interest the IPCC overmuch.

  9. Dale Stephenson, I think part of the problem is we’ve changed the interpretation of my use of “obsolete” to refer to Tol 2002. In my post, I clearly referred to the later work of his which used Tol 2002. What I called obsolete is the graph in the first image in this post, which I say is obsolete because of the update shown in the second image.

    When you asked me why I call Tol 2002 obsolete, I messed up and accepted the premise of the question. I should have realized the premise itself was wrong, as I hadn’t been calling Tol 2002 obsolete. My references to Tol 2002 were limited to calling it outdated. Referring to one thing as obsolete and another as outdated was probably more confusing than I realized. It even led to me being confused (though in my defense, I’m sick and only half-awake).

    Anyway, I think it’s reasonable to call Tol 2002 outdated. As I recall, Tol 2002 is based upon the output of the FUND model, a model which has undergone substantial changes in the last decade. I think it’s fair to call results based upon an old model outdated when multiple new versions of that model have been published since.

  10. Thank you for clarifying that you were referring to the first figure, and not Tol ’02 specifically. To be clear, figure one in your post is not the same as Figure 10-1 in the IPCC report, correct? That just has the dots and lacks the lines and “confidence intervals”, which seem inappropriate for such mismatched data anyway. Is the first figure from Tol 2009?

    I know nothing about Tol 2002 beyond what was said in the tables and figures here, and if the source data has been updated since it was published I think an updated version would certainly be interesting. A quick google search indicates that FUND is Tol’s baby, so he’d be well positioned to opine on the probable impact of the version changes since 2002. I don’t know if he’s ever done so. Tol ’95 appears to use a yet earlier version of the model. I’d be surprised if the two Tol papers are unique at all in this respect. I’m all for updating such studies to use the most recent data whenever possible, though I don’t take it for granted that the most recent studies and versions are necessarily the highest quality. Given the wide variance in models and coverage from Table 10.B.1, I’m completely unconvinced that treating them as independent data points of equal value makes sense.

    Out of curiosity, is there any sort of table or mention in the chapter (or elsewhere) on what the estimated GDP impact has been from the rise in temperature that we’ve already had?

  11. No problem Dale Stephenson. And yeah, the figure I showed should be from Tol 2009. The figure in the IPCC report is the one you saw in the other topic, without the regression line and confidence intervals. Personally, I don’t think the regression is appropriate. Each of the papers used for the figure has its own (generally implicit) curve from which the values used are taken. I don’t think you can get good results by picking points on curves then regressing across those points in such an uncontrolled manner. There are many better ways to combine different model spaces. For instance, one could generate damage curves for each model then see what sort of overlap there is.

    Beyond that, the type of regression used is inappropriate. When Richard Tol published his update, he tried to spin his results as less alarmist by saying the new curve doesn’t go as far down as the old one. That’s true, but only because the old one gave undue weight to an outlier. I explained how it happens in my response to Hans Erren above. If you examine the issue in any detail, it becomes quickly apparent Tol’s regression is highly sensitive to small changes in the data. It’s not remotely robust.
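
    As a rough sketch of what I mean (the curves below are purely hypothetical, not taken from any of the actual papers), instead of regressing across pooled points you could evaluate each model's damage curve on a common temperature grid and ask where the curves agree on the sign of the impact:

```python
import numpy as np

# Hypothetical damage curves, one per model, giving welfare impact (% of GDP)
# as a function of warming. These are NOT the real FUND/DICE/etc. curves; they
# only illustrate the kind of comparison I have in mind.
curves = {
    "model_A": lambda t: 0.5 * t - 0.4 * t**2,   # small benefit, then losses
    "model_B": lambda t: -0.3 * t - 0.1 * t**2,  # losses throughout
    "model_C": lambda t: 0.2 * t - 0.6 * t**2,   # near zero, then losses
}

grid = np.linspace(0.5, 4.0, 71)  # warming in degrees C
values = np.array([[f(t) for t in grid] for f in curves.values()])

# The envelope across models at each warming level shows where the models
# agree on the sign of the impact and where they diverge.
lower = values.min(axis=0)
upper = values.max(axis=0)

for t, lo_, hi_ in zip(grid[::20], lower[::20], upper[::20]):
    sign = "all negative" if hi_ < 0 else ("all positive" if lo_ > 0 else "sign disputed")
    print(f"+{t:.1f}C: impact between {lo_:+.2f}% and {hi_:+.2f}% ({sign})")
```

    Done with the actual curves (where they can be recovered at all), that sort of comparison tells you where the models genuinely overlap instead of letting one wonky regression line speak for all of them.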

    I know nothing about Tol 2002 beyond what was said in the tables and figures here, and if the source data has been updated since it was published I think an updated version would certainly be interesting. A quick google search indicates that FUND is Tol’s baby, so he’d be well positioned to opine on the probable impact of the version changes since 2002.

    I don’t know about the impact of changes since 2002, but I am aware of some notable changes in impacts since one of their older versions. I think it is for the version change of about 2006. I’d have to check though. Tol doesn’t make anything easy to understand or follow. That’s why he’d probably be the last person I’d trust to opine on such a matter. Just look at what he said when Bob Ward pointed out Tol had written a completely new section:

    In fact, that section was moved from Chapter 19 to Chapter 10. As far as I am aware, Mr Ward did not raise this concern with the IPCC. He was informed no later than 2 April 2014 that the text was moved rather than added.

    The section in question was not moved from one chapter to another. It was completely new. The only thing “moved” was the section name. Not only did Tol completely misrepresent this, he implied Ward was a liar for claiming it was done (the implication is more clear in the full context). Tol has since admitted the section was not merely moved (though he still denies the full extent of the changes). He did so only because I repeatedly drew attention to what he had done. And even after admitting significant changes were made, he has never made any attempt to correct his previous, false statements or the innuendo based upon them.

    Out of curiosity, is there any sort of table or mention in the chapter (or elsewhere) on what the estimated GDP impact has been from the rise in temperature that we’ve already had?

    I don’t believe so. I don’t recall ever seeing such. If there is any such mention, I’m either forgetting it or it was somewhere I haven’t looked yet.

  12. I’d love to see damage curves from the papers instead of data points. Lacking that, it’s hard for me to label any particular dot as an “outlier”, given the differing temperature increases evaluated. Well, maybe -11.5%@+3.2, since that’s so far off the range of the +3.0C projections. Given that both Tol data points are based on FUND (different revs) and have a lot of overlap in coverage, I’d guess they’re on roughly the same curve. Maddison is involved with both of the self-reported-happiness papers, so I’d guess they’re also both roughly on the same curve — a very different one than Tol’s two data points. Maddison’s other paper (household consumption) is one of the least alarming and must be from a different curve. Nordhaus 94b and 96 have the same model and coverage, but I can’t believe they’re derived from the same curve (-1.3%@+3.0C and -1.7%@+2.5C)–if there’s a paper in the table that really could be considered obsolete, Nordhaus 94b seems plausible.

    Do you think the implicit damage curves really could be extracted from any of the papers?

    Pity that no one has tackled the impact of the rise we’ve already had. It seems to me that it’d be a lot easier to quantify changes that have already happened than changes that haven’t happened and may never happen.

  13. I doubt implicit curves could be extracted from many of these papers, but that’s because this subfield is far from mature. I don’t think we have anywhere near the understanding necessary to estimate net impacts from global warming. Pretty much all of these estimates are based upon models which have no known verification. True error bars on an analysis like Richard Tol’s would have to be from floor to ceiling.

    Speaking of which, I suspect that’s why the IPCC report wasn’t going to feature this work of Tol’s prominently (if at all). I think most scientists can recognize it is weak and doesn’t deserve much, if any, attention in an IPCC report.

  14. Brandon:

    That confidence intervals fail to exclude a result does not change the fact there is no longer evidence to support that result.

    Unless you want “evidence to support” to mean excluding the contra-hypothesis (which doesn’t happen in the first figure either; that is, “no benefit” is not excluded), then from a statistical perspective “fails to exclude” really isn’t any different than “evidence to support”.

    I actually figure that most of the economists are wrong and that it is more likely than not that we will have net benefits from climate change, up to maybe 1.5°C further warming. My thoughts are they aren’t doing the economic discounting properly, for example (how many houses would still be there in fifty years even were the shorelines to not shift inward? If you think 100%, you haven’t looked at housing patterns in the US recently), and underestimating the net positive stimulus effects on the economy of the increased R&D and investment that would be forced upon us by adaptation.

    As I commented above, even mildly negative impact is of no policy relevance. Partly this is because of the lack of robustness of the results, and partly because it is hard to predict how much of that negative impact can be recouped by a more aggressive mitigation strategy.

  15. I suspect you’re absolutely right about the true error bars being floor to ceiling. But I doubt that’s confined to Tol’s work. Is there any part of the IPCC report quantifying future impacts where the subfield is mature, and the error bars aren’t floor to ceiling?

    Looking up Tol 2009, I see the mysterious parens under impact refer to either the 95% range of the paper or the standard deviation. So foolishly taking the papers at their word and using 2 SDs for a 95% range, these are the papers with that information, ordered by magnitude of increase.

    +1.0C Tol 02 (+0.3% to +4.3%)
    +2.5C Plambeck 96 (-11.4% to -0.5%)
    +2.5C Hope 06 (-2.7% to +0.2%) [I reversed the signs of the printed range, since it’s inconsistent with the -0.9% estimate]
    +3.0C Nordhaus 94b (-30.0% to 0.0%)
    +3.0C Nordhaus 06 (-1.1% to -0.7%)
    +3.0C Nordhaus 06 (-1.3% to -0.9%)

    Hope 06 is interesting because in the IPCC table it’s listed at -0.9% with a range of (-0.2 to +2.7). But in Tol ’09 it’s listed at +0.9% with a range of (-0.2% to +2.7%), and graphed at +0.9 in Tol’s figure one. I suspect the difference in line between Figure One and Figure Two has much more to do with this correction than the additional estimates, with the possible exception of the massive negative outlier at +3.2C. Oddly, the footnote says it is based on the previous estimates by Tol and Fankhauser, presumably Tol 95 (-1.9%) and Fankhauser 95 (-1.4%). As both are already in the table, I’m not sure what the point of including Hope is as well, or why the estimate (even at -0.9%) does not lie between them. Has anyone asked Tol about this?

    The massive range in Nordhaus 94b is because it’s “expert elicitation”, so a compilation of guesses. The first three are all enumeration models, and have 95% ranges of 4%, 10.9%, and 2.9%. Of the ten enumeration studies in the IPCC table, I think it’s possible that only two (Plambeck ’96 and Nordhaus ’08) don’t include a positive range at their particular impact, since the others lie between 0.0 (Mendelsohn 00) and -1.9 (Tol 95). The much smaller error bands in Nordhaus 06 are from a statistical study; the three previous statistical studies are between -0.4% and +0.1%, and the two at +0.1% and -0.1% I think are very likely to include a positive sign. The one later one is the biggest outlier on the chart (-11.5%@+3.2C) so wouldn’t include it. No idea what the error margins might be on the three CGE estimates, but since the +1.9C impact has only -0.5% impact, it’s another likely candidate for positive values in the 95% range.

    Bottom line — of twenty impact papers listed, I think as many as eleven may include a positive impact in their range, despite seventeen of twenty being calculated for increases of +2.3C or more. While figure two (the update to figure one) may preclude positive impact, I think it’s fair to say that the papers themselves don’t preclude positive impact, especially for smaller magnitudes than listed.

    Also interesting, at least to me, is the “best off” and “worst off” region listed for ten of the papers. Of the eleven papers, all eleven had negative impacts for the worst off region, and nine of the eleven had positive impacts for the best off region (the exceptions were the two earliest papers).

    Tol 2009 also seems to be teeming with caveats. “The uncertainties about climate change are vast—indeed, so vast that the standard tools of decision making under uncertainty and learning may not be applicable.” “For some economic effects of climate change, we have reasonable estimates; for others, we know at least an order of magnitude. We also have a clear idea of the sensitivities of these estimates to particular assumptions, even though in some cases we do not really know what to assume.” “I believe that there are no more unknown unknowns, or at least no sizeable ones. But my belief here may suffer from overconfidence.” “Perhaps the main disadvantage of the enumerative approach is that the assumptions about adaptation may be unrealistic–as temperatures increase, presumably private- and public-sector reactions would occur in response to both market and nonmarket events.” “Statistical studies run the risk that all differences between places are attributed to climate.” “The majority of studies do not report any estimate of the uncertainty.” “It is quite possible that the estimates are not independent, as there are only a relatively small number of studies, based on similar data, by authors who know each other well.” “Little effort has been put into validating the underlying models against independent data” “The 200-plus estimates of the social cost of carbon are based on nine estimates of the total effect of climate change. The empirical basis for the size of an optimal carbon tax is much smaller than is suggested by the number of estimates.” “The quantity and intensity of the research effort on the economic effects of climate change seems incommensurate with the perceived size of the climate problem, the expected costs of the solution, and the size of the existing research gaps.” “The best available knowledge—which is not very good—is given in Table 2.” This is not an exhaustive list. Tol invokes the fear of a nasty surprise to argue in favor of policy reducing carbon emissions.

    The significance of Figure One is also discussed:
    “The horizontal axis of Figure 1 shows the increase in average global temperature. The vertical index shows the central estimate of welfare impact. The central line shows a best-fit parabolic line from an ordinary least squares regression. Of course, it is something of a stretch to interpret the results of these different studies as if they were a time series of how climate change will affect the economy over time, and so this graph should be interpreted more as an interesting calculation than as hard analysis. But the pattern of modest economic gains due to climate change, followed by substantial losses, appears also in the few studies that report impacts over time (Mendelsohn, Morrison, Schlesinger, and Andronova, 2000; Mendelsohn, Schlesinger, and Williams, 2000; Nordhaus and Boyer, 2000; Tol, 2002b; also, compare Figure 19-4 in Smith et al., 2001).”

    The “interesting calculation” isn’t very, but a list of studies with impact over time speaks to the damage curve. I took a quick look at Tol 2002b but the damage curve wasn’t obvious to me in terms of degrees C, the summary impacts in that paper were on a timeline and I think the time->degree conversion probably is in Tol 2002a or elsewhere. However, the aggregate damage for all regions over time was not a smooth curve, nor were the shapes similar for each region. Figure 13 didn’t show the combined aggregation, but my back-of-envelope guess is a short steep spike to capture the warming benefits, followed by a slower steady decline into negative territory. At any rate, the claim is that the general Tol ’02 pattern is the rule, not the exception, through 2009. Do any of the newer studies report impact over time?

    Anyway, hope I’m not boring you. I think the gory details are more interesting than trying to assess the relative merits of Tol’s sins to others. On that score, I still feel the IPCC trick with FAO numbers is worse. To be equivalent, Tol would’ve had to retain Tol ’02 despite the presence of a Tol ’04 that specifically updated Tol ’02 and found opposite conclusions. Or alternately, Tol would’ve had to use the least-squares fit from Tol ’09 for the IPCC report, despite the updated least-squares fit being available. As Figure one did not appear in the IPCC report and was not the source for the misleading sentences (that would be Figure 10-1), I don’t find his misleading-but-true statements any more deceptive than the misleading-but-true comparison of figures (and criticism of Tol ’02) in this blog post–though I certainly think an IPCC author should be much, much more careful with their words than an off-the-cuff blog post!

  16. Dale Stephenson:

    Hope 06 is interesting because in the IPCC table it’s listed at -0.9% with a range of (-0.2 to +2.7). But in Tol ’09 it’s listed at +0.9% with a range of (-0.2% to +2.7%), and graphed at +0.9 in Tol’s figure one. I suspect the difference in line between Figure One and Figure Two has much more to do with this correction than the additional estimates, with the possible exception of the massive negative outlier at +3.2C.

    When Richard Tol published his correction/update, he included an image showing the effect of just correcting data errors. It was small. The new data points had a much larger effect.

    Interestingly, I believe the new data point with the largest impact was not the outlier at ~3C, but the point at ~5.5C. Because of how far out it is on the x-axis, fitting to it forces the curve to be far more linear. That means the quadratic component is greatly weakened. That’s important because a strong quadratic component is necessary to get the previous pattern of benefits followed by losses.

    Unless I’m mistaken, the effect of the outlier at ~3C would actually be to increase the quadratic component, meaning it’d contribute to the perception of net benefits. That is, Tol’s choice of regression would cause an estimate of severe losses to indicate greater net benefits (at a different point). I haven’t run the numbers to check that yet. I know it’s possible given the choice of regression, but it depends on details of other points and how they get weighted.
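
    For anyone who wants to check that sort of leverage effect, here is a quick sketch with made-up numbers (again, not Tol's actual data) comparing the same through-origin quadratic fit with and without a single far-out temperature point:

```python
import numpy as np

def fit_through_origin(x, y):
    """Least-squares fit of y = b1*x + b2*x**2 (no intercept)."""
    X = np.column_stack([x, x**2])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Illustrative estimates only: a cluster of moderate-warming points plus one
# hypothetical high-warming estimate with a comparatively modest loss. None of
# these are the actual values in Tol's updated figure.
temps   = np.array([1.0, 2.5, 2.5, 3.0, 3.0])
impacts = np.array([2.3, -0.5, -1.5, -1.3, -2.0])
far_t, far_y = 5.5, -5.0

b_without = fit_through_origin(temps, impacts)
b_with    = fit_through_origin(np.append(temps, far_t), np.append(impacts, far_y))

print("without the +5.5C point: b1=%+.2f, b2=%+.2f" % tuple(b_without))
print("with the +5.5C point:    b1=%+.2f, b2=%+.2f" % tuple(b_with))
# Because x**2 is about 30 at +5.5C, that single point dominates the quadratic
# column of the fit: a comparatively modest loss far out on the temperature
# axis pulls b2 toward zero, flattening the curve and weakening the
# "benefits then losses" shape the earlier fit showed.
```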

    Has anyone asked Tol about this?

    I don’t know, but Tol has never explained how he selected the points he selected.

    Bottom line — of twenty impact papers listed, I think as many as eleven may include a positive impact in their range, despite seventeen of twenty being calculated for increases of +2.3C or more. While figure two (the update to figure one) may preclude positive impact, I think it’s fair to say that the papers themselves don’t preclude positive impact, especially for smaller magnitudes than listed.

    I agree. I also think if Tol had used an appropriate method for comparing their results, he’d find positive impacts cannot be excluded.

    Also interesting, at least to me, is the “best off” and “worst off” region listed for ten of the papers. Of the eleven papers, all eleven had negative impacts for the worst off region, and nine of the eleven had positive impacts for the best off region (the exceptions were the two earliest papers).

    Assuming you mean “best off” as countries like the United States, what you describe wouldn’t surprise me. It’s a classic form of bias. There’s a subconscious refusal to accept the possibility of great change. You can’t convince most people the world they’re in will suffer great harm. Most people who believe global warming will cause significant damages believe that damage will happen somewhere else.

    Anyway, hope I’m not boring you. I think the gory details are more interesting than trying to assess the relative merits of Tol’s sins to others…. I don’t find his misleading-but-true statements any more deceptive than the misleading-but-true comparison of figures (and criticism of Tol ’02) in this blog post–though I certainly think an IPCC author should be much, much more careful with their words than an off-the-cuff blog post!

    I don’t care to “assess the relative merits of Tol’s sins to others.” What I’ve found fascinating is the concern people showed for one case and the complete apathy they showed for another. It’s difficult to see the difference in reaction as not indicating bias.

    That said, I obviously don’t agree with what you say about this post. I don’t think this post is remotely comparable to the sections Tol added/rewrote.

  17. The “best off” regions don’t include the United States. Most papers chose Eastern Europe and the former Soviet Union as the best off, two went for Western Europe, and one went for South Asia. I doubt it has to do with bias; I think it more likely has to do with geography. Africa was the most popular region on the “worst off” side, almost certainly because it is poor and already hot.

    I certainly don’t think bias is a likely explanation for AGW-believing papers projecting *positive* benefits in certain regions for changes up to +2.5C. I think the most likely explanation is that’s what their number crunching told them. Doesn’t remotely mean the numbers will be correct, of course. It’s also impact at a certain amount of warming, and the beneficiaries/victims may be different at different points. Looking at the figure in Tol ’02, the former Soviet bloc actually dips below Africa at the very end of the timeline, even though it starts with one of the higher benefits (Africa is negative throughout).

    You make an excellent point about the impact of the +5.5C estimate on the fitted line, and it certainly shows how useless the fitted line would be when a huge negative outlier would increase the chance of a positive early contribution, while a mild decrease from a huge temperature increase would decrease the chance of a positive early contribution. However, from Tol’s discussion of the figure in Tol ’09 I quoted above I don’t think the disputed IPCC sentence rests on the fitted line, which isn’t in the IPCC figure anyway. The IPCC sentence reads “Climate change may be beneficial for moderate climate change but turn negative for greater warming”. The paragraph in Tol 09 states this pattern is found in all four of the papers (one of which is Tol 02) which show impact over time. Now all of those studies are, in fact, “old”. Do the studies included in the IPCC which have come out since 2009 show impact over time, and actually refute that conclusion? Let me see what I can find:

    M&R 2011 is the source for the massive -11.5@+3.2C. The abstract says they’re using an optimal base of 65F, so that the further you get away from that, the less happy you’ll be. So for tropical regions, all increase will be bad because they’re already at or above 65. For more temperate regions, getting warmer in the winter is good and getting warmer in the summer is bad. Get far enough north to be below 65 all the time, and all warming is good. They find major losses for Africa at +3.2C, and modest gains for Northern Europe. Of the ten nations leading in CO2 emissions, they say that only India is hurt. I don’t see any sort of time series, but the current optimal nations are Guatemala, Rwanda, and Colombia. The global mean temperature is around 14C, so the mean is *well* below the optimal temperature (18.3C), but people are concentrated in warmer places on land, so that’s not very meaningful. The worst current nations are listed as Finland, Russia, and Estonia. I would say that this study does not dispute the IPCC sentence.

    Bosello 2012 has -0.5%@+1.9C. It’s CGE, computable general equilibrium modelling, and I don’t really know what that implies. The abstract lists small positive impacts for Northern Europe and similarly small negative impacts for South/Eastern Europe. Inside the text, it finds slightly positive impacts for EU as a whole, for the US, and for China. Figure 1 in the paper is (believe it or not) Figure 1 from Tol 09! Figure 3 does a breakdown of the regional impacts by type, the negative overall finding is driven by adverse agricultural impact in Africa and Asia. There’s no time series, but they do mention “Focusing on from recent research [Note 1], GDP is expected to change in response to climate change from -0.4 percent (Rehdanz and Maddison, 2005) to +2.3 percent (Tol, 2002) for a 1°C warming.” Note 1 instructs the reader to refer to Tol 2010 for a comprehensive review. This certainly does not dispute the sentence, and absolutely gives no suggestion that Tol ’02 is obsolete, outdated, and undone by a decade of later research.

    Roson and van der Mensbrugghe is the final new paper, another CGE providing -1.8%@+2.3C and -4.6%@+4.9C. The abstract claims “climate change impacts are substantial, especially for developing countries and in the long run”, which doesn’t say much about the short-term benefits, if any. The text is paywalled so I can’t go much further on it, though following a link to related research (2010 by vdM) shows a time-related GDP series that appears to decline in aggregate from the beginning, though for individual regions it varies (four regions stay above 0 all the way to 2100). The big negative impact is from labor productivity in ME/Africa and sea level rise in East Asia. Figures 1 and 2 show parabolic curves for agricultural production in the US (positive to about +3.3C) and agricultural production in China (positive to about +3.9C). The peak benefit for agriculture in those two countries was at +1.0C. The steep warming in this paper projects +1.0C (relative to the year 2000) in 2020, which seems unlikely. If I aggregate the worldwide negative impact on GDP by averaging all the regions equally, it looks to be a slight negative impact for +1.0C. If I aggregate by proportion of worldwide GDP, it might be a slight positive impact since USA/Japan/Europe/Russia all are at or above the zero line.

    The story from the recent papers seems to be much the same story that was told in Tol ’09 — some winners, some losers, a possible net benefit in the short term and a likely net loss in the long term. I don’t see that the new papers change the understanding dramatically, despite the dramatic effect on the regression line. And based on the time I’ve wasted looking these papers up and reading them, plus my limited understanding, I think that Tol’s statement that “climate change may be beneficial for moderate climate change but turn negative for greater warming” is a reasonable one.

  18. Dale Stephenson

    I certainly don’t think bias is a likely explanation for AGW-believing papers projecting *positive* benefits in certain regions for changes up to +2.5C. I think the most likely explanation is that’s what their number crunching told them. Doesn’t remotely mean the numbers will be correct, of course.

    I don’t doubt that’s what their calculations show. What I doubt is their calculations were unaffected by biases like what I described. I’m sure there are other biases too. Maybe they cancel out, or overwhelm what I described. I don’t know. What I do know is models like these can be “tuned” in many ways, and it usually happens despite the modelers acting in good faith.

    However, from Tol’s discussion of the figure in Tol ’09 I quoted above I don’t think the disputed IPCC sentence rests on the fitted line, which isn’t in the IPCC figure anyway. The IPCC sentence reads “Climate change may be beneficial for moderate climate change but turn negative for greater warming”.

    The line itself may not be there, but the regression is implicit in the sentence. You can’t do a meta study, see one outlier is positive and say, “The results show the effect may be positive.” The idea of a meta study is to combine the results of other studies. In Tol’s case, he does that with that wonky regression line.

    I do agree the regression isn’t actually shown, and I think that’s a problem. As it stands, the section offers no support for that sentence other than one outlier existing in a meta study. As for the four estimates you discuss, I find this troubling:

    Bosello 2012 has -0.5%@+1.9C. It’s CGE, computable general equilibrium modelling, and I don’t really know what that implies. The abstract lists small positive impacts for Northern Europe and similarly small negative impacts for South/Eastern Europe.

    Despite citing regional estimates listed in the abstract, you fail to cite its most relevant sentence, the one about net global effects:

    Estimates indicate that a temperature increase of 1.92°C compared to pre-industrial levels in 2050 (consistent with the A1B IPCC SRES scenario) could lead to global GDP losses of approximately 0.5% compared to a hypothetical scenario where no climate change is assumed to occur.

    This shows the paper’s total estimate is negative, not positive. You cannot cherry-pick individual regions and portray it as showing results comparable to global estimates. And you certainly cannot say:

    This certainly does not dispute the sentence, and absolutely gives no suggestion that Tol ’02 is obsolete, outdated, and undone by a decade of later research.

    This is a non-sequitur. Provisional results don’t magically stop being provisional if later results are in line with them. Obsolete work doesn’t stop being obsolete if later work gets the same general conclusions. Outdated models don’t stop being outdated if more recent models give the same broad picture.

    Results based upon a computer model which hasn’t been used in over a decade are outdated. They should be brought up to date. It doesn’t matter whether or not the new results would disagree. Old is still old.

    The story from the recent papers seems to be much the same story that was told in Tol ’09 — some winners, some losers, a possible net benefit in the short term and a likely net loss in the long term…
    I think that Tol’s statement that “climate change may be beneficial for moderate climate change but turn negative for greater warming” is a reasonable one.

    You didn’t say anything which supports the notion there might be net benefits in the short term. You didn’t point to any evidence to support the idea. The only thing anyone has done in defending Tol’s claim in the IPCC report is appeal to ignorance. The exact same approach could be used to justify just saying, “Climate change may be harmful.” That sentence fits the evidence you guys cite just as well.

    Side note, I just realized either Tol 2009 or the IPCC version is wrong. I’ll need a little time to figure out which. It’ll be a bit because I have to get back to yardwork.

  19. Okay, it may take longer to figure out than I had thought. I’m having trouble locating the values Richard Tol attributes to these papers. For instance, Tol lists Nordhaus 1994b as providing an estimate of a 1.3% loss for 3C of warming. That’s not what I see in the paper. Figure 2 of the paper shows estimates for its Scenario A, a temperature rise of 3C, have an average value of 1.9% and median value of 3.6%. I can’t find anything which points to an estimate of a 1.3% loss.

    I’m probably just missing some obvious things.

  20. Brandon, I doubt both that the regression is the source of the “may be positive” statement OR that Tol ’02 is in fact an outlier, and that has been my problem since the beginning. As far as I can tell, your reason for claiming it’s an outlier is because of the positive impact–but the positive impact is a function of the small increase (+1.0C). Claiming that a -0.5% impact happens at +1.92C does *not* contradict a claim that a positive impact may happen at +1.00C, unless the impact is strictly linear with increase. I don’t believe any of these papers claim that the impact is linear. Given that Bosello does not give an impact at +1.0C, and that Bosello *does* refer to the Tol ’02 impact at +1.0C, it is absolutely wrong to claim Bosello contradicts Tol ’02 on that point. There’s two impacts mentioned at +1.0C, and they are the same two we’ve been talking about all along.

    The point of mentioning the positive regional impacts that are common both in the older papers and the new ones is to show that according to this papers, positive impact from temperature rise *is* possible at the regional level. Yes, that doesn’t prove that the net effect is positive at any particular point, but it certainly means that the net effect *would* be positive at any point where the positives outweigh the negatives. Lacking a time series with global net impact you can’t draw conclusions about the impact at +1.0C for the papers that don’t specifically call it, but if they have a bunch of regional time series you could calculate impact at +1.0C if you know the proper way to weight the regional impacts. Which I don’t. If you weight the regions by population you’ll get very different results than if you weight by current GDP.

    In this respect, Tol ’02 presents the same difficulties in calculating impact. The figure of interest (figure 13) is a time series with regional impacts shown, so I don’t know how to calculate it to see how Tol ’02 compares to other projections at +1.92C or +2.5C or +3.0C. But just looking at the regional impacts I think it’s clear that Tol ’02 will *also* be a negative projection at some increase in temperature. So I don’t see Tol ’02 contradicting the other papers that claim a negative projection at a different temperature, and vice versa. For the recent papers, they can only contradict Tol’s +1.0C projection if they have a +1.0C projection.

    Seeking to turn the Tol 02b time value into an increase I turned to Tol 2002a. I don’t see the magic key, +1.0C (and 0.2m SLR) is just described as “over the first half of the 21st century”. However, I did find no less than three different values for +1.0C aggregation in that paper. Aggregating the regional impacts as a simple sum (in dollar value) gets +2.3%. This matches the graph, so I assume his calculations for other papers are also done the same way, meaning a small Europe benefit (very common) could well outweigh a large Africa harm (also very common). But it also shows an “average value” of -2.7% (averages world value of non-market goods and services), and a “weighted sum” of +0.4% (attaches higher value to poorer regions). Evidently even generating a world average from a set of regional impacts is a black art.
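
    Here is a toy illustration of that point, with invented regional numbers (they are not the Tol 2002a figures), showing how the same set of regional impacts can give quite different "global" values depending on the aggregation rule:

```python
# Toy regional impacts: GDP in trillions of dollars and impact as a percent of
# regional GDP. All numbers are invented for illustration; they are not the
# regional estimates in Tol 2002a.
regions = {
    #             GDP ($T)  impact (% of GDP)
    "OECD":      (30.0,     +1.5),
    "E. Europe": ( 3.0,     +2.0),
    "Asia":      (12.0,     -1.0),
    "Africa":    ( 2.0,     -4.0),
}

# 1. "Simple sum": add up the dollar impacts and express them as % of world GDP.
world_gdp  = sum(gdp for gdp, _ in regions.values())
dollar_sum = sum(gdp * pct / 100 for gdp, pct in regions.values())
simple_sum = 100 * dollar_sum / world_gdp

# 2. Unweighted average of the regional percentage impacts.
average_pct = sum(pct for _, pct in regions.values()) / len(regions)

# 3. "Equity-weighted" sum: weight each region's percentage impact by the
#    inverse of a rough per-capita income proxy, so poorer regions count more.
#    The population figures (billions) are invented too.
population = {"OECD": 1.0, "E. Europe": 0.4, "Asia": 4.0, "Africa": 1.2}
weights = {name: population[name] / gdp for name, (gdp, _) in regions.items()}
equity_weighted = (sum(weights[name] * pct for name, (_, pct) in regions.items())
                   / sum(weights.values()))

print(f"simple sum (GDP-weighted): {simple_sum:+.2f}%")
print(f"unweighted average:        {average_pct:+.2f}%")
print(f"equity-weighted:           {equity_weighted:+.2f}%")
# With these made-up inputs the GDP-weighted figure comes out positive (the
# rich regions gain), while the unweighted and equity-weighted figures come out
# negative (the poor regions lose) -- which is why even producing a "world
# average" from a set of regional impacts is something of a black art.
```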

    There’s also a very interesting comment, speaking of the regional estimates “In all cases, uncertainties are substantial, so that not even the sign of the impact can be known with reasonable confidence. Uncertainties as estimated here really are lower bounds of the ‘true’ uncertainty.”

    You dismiss the “may be positive” as an appeal to ignorance, but it’s important to recognize that Tol makes specific *claims* of ignorance here and in Tol 2009, and none of the papers I’ve read so far disagree with those claims at all. Nor do you in your comments. Saying that a small increase in temperature may be positive isn’t just appealing to ignorance, it’s also recognizing that the field just is not up to the task of eliminating a positive contribution in the first place. The (mostly negative) results for larger increases are nearly all close to zero, and even the wide error margins of a paper like Tol ’02 aren’t nearly wide enough.

    Yes, it’d be nice if Tol 2002 were brought up to date with more recent models and projections (as opposed to more recent facts, which don’t seem to be an input to these papers). But the central difference between Tol 2002 and the FAO numbers is that the more recent FAO numbers *exist*. The most recent version of Tol ’02 is Tol ’02. Lacking a more recent Enumeration model with similar coverage that estimates impact at +1.0C, it is the state of the art, such as it is. Is it old? Yes. Is there reason to believe if updated it would reach different results? Not that has been shown. Is there reason to believe that the other enumeration reports, most of which are even older, would all reach different conclusions at +1.0C? Not that has been shown. You’re rejecting Tol as an outlier because of its positive sign while ignoring the fact that all the similar models (and all but one of the dissimilar models) are evaluating an increase at least 90% larger and usually 250% larger.

    Bosello ’12 referred to the two +1.0C estimates as “recent research”. Evidently old is in the eye of the beholder. As Tol ’02 has slightly more papers in Figure 10-1 that are older than it than newer, I don’t see discarding the “old” papers as a useful idea. There aren’t nearly enough papers as it is.

  21. Dale Stephenson, you’re continuing to talk about contradiction when I’m talking about a failure to support:

    Claiming that a -0.5% impact happens at +1.92C does *not* contradict a claim that a positive impact may happen at +1.00C, unless the impact is strictly linear with increase.

    I don’t believe those two contradict one another. I do, however, believe they fail to support one another. I also believe when you have ~20 points, if one point is not supported by any of the others, it cannot be trusted as giving a correct estimate.

    Also, I’ll note you haven’t complained about the IPCC report contrasting estimates in the same way. If it’s okay with you for the IPCC to say estimates disagree on the sign of impact despite those estimates being for different amounts of warming, I don’t see how you can take issue with me pointing out those estimates disagree.

    Lacking a time series with global net impact, you can’t draw conclusions about the impact at +1.0C for the papers that don’t specifically report it, but if they have a set of regional time series, you could calculate the impact at +1.0C if you knew the proper way to weight the regional impacts. Which I don’t.

    Which means you cannot say the later estimates lend credence to Tol 2002’s results. All you can say is the estimates neither agree nor disagree.

    You dismiss the “may be positive” as an appeal to ignorance, but it’s important to recognize that Tol makes specific *claims* of ignorance here and in Tol 2009, and none of the papers I’ve read so far disagree with those claims at all. Nor do you in your comments. Saying that a small increase in temperature may be positive isn’t just appealing to ignorance; it’s also recognizing that the field simply is not up to the task of ruling out a positive contribution in the first place.

    No it is not. I’m all for making uncertainties clear. This text does nothing of the sort:

    Climate change may be beneficial for moderate climate change but turn negative for greater warming.

    Saying something “may be beneficial” implies the odds are in favor of it being beneficial. Saying things may “turn negative for greater warming” implies the odds are in favor of them turning negative. When you only state one possible scenario, readers will interpret that scenario as the most likely.

    There is only one estimate which supports this scenario, and yet, it is the only scenario described. That’s wrong. It’s misleading.

    Yes, it’d be nice if Tol 2002 were brought up to date with more recent models and projections (as opposed to more recent facts, which don’t seem to be an input to these papers). But the central difference between Tol 2002 and the FAO numbers is that the more recent FAO numbers *exist*.

    Tol 2002’s results are based on the FUND model, which has been updated numerous times since that paper was written. More recent numbers *exist*. Tol just chooses not to use them.

    I don’t see discarding the “old” papers as a useful idea. There aren’t nearly enough papers as it is.

    I haven’t suggested discarding old papers. The point I’ve been making is the IPCC report should inform readers of how results in the field have changed over time. Aside from that, the only discarding I’ve suggested has been in regards to Tol 2002. For that, I say results from a 2002 version of a model should be replaced with results from a newer version of the model when available.

    I don’t see what you find unreasonable about either of those two notions.

  22. Brandon, your link to Nordhaus 94b isn’t working for me. I modified the link and got a paper, but from its content it appears to be the expert survey paper, not an enumeration paper. Are you sure you’re looking at 94b and not 94a?

  23. Oops. I linked to the right paper and got the values from it right, but when I went to compare the values to the table I forgot the two papers are listed out of order in Tol’s table. I should have said 4.8% instead of 1.3% in my comment.

    Speaking of which, I think I figured out the problem for that paper. It looks like Tol took the values from Figure 3, not the values from Figure 2. Figure 2 shows estimates for the topic he was looking at (% loss of GDP), not Figure 3. Figure 3 shows an entirely different type of estimate (% chance of a 25%+ drop in GDP). I don’t know how Tol could have mixed the two up.

    I also don’t look forward to trying to find the other values he lists. Who knows where he might pull random values from?

  24. I don’t think there’s anything unreasonable in updating Tol 2002. By the same token, I don’t think there’s anything unreasonable in updating the other papers, many of which are even older. But I also think there’s no reason to discard Tol ’02 for not being updated when none of the other papers have been either, or discard Tol ’02 for being positive when so many larger-increase studies are near zero, or even to suspect that an up-to-date Tol ’02 would draw materially different conclusions. That the model used in the analysis has been updated only shows that more recent numbers *could* exist, not that they do exist. I noticed the source for the model was available, but I don’t think I could use it to update Tol ’02 at all. I doubt Tol himself could update it quickly, though maybe I’m wrong.

    Given that we have only two data points for +1.0 C (at least from the table), the two scenarios would seem to be positive-turns-negative and negative-turns-more-negative. I don’t see that the later estimates make the second scenario more likely than the first. Given the paucity of data would it be better to not mention the possibility that a positive impact exists at any increase? Not convinced of that. More sentences would definitely be good. The problem with Tol’s writing IMO lies not in what was said, but what wasn’t said.

    Your point about signs (IPCC) versus differences (you) is what I was trying to get at with my misleading-but-true comment. I don’t want to belabor it because I don’t want to be rude. You point out — correctly — that noting that the signs differ obscures the truth that the vast majority of papers have a negative sign. I see pointing out the differences between Tol ’02 and other estimates as the same sort of problem — it’s true but misleading, because different questions are being answered. (Tol ’02 at +2.5C or +3.0C may well find itself among the negative estimates as well, and from the time series shown it appears to me to decline steadily after +1.0C.) The problem with comparing sign and impact is that the data points aren’t actually comparable. Further, from the description not just of magnitude but coverage and model, it seems there are very few comparable studies at all. It’s not *just* Tol ’02 that is “unsupported” by its fellow papers; practically every paper is unsupported by its fellow papers. Given the differences between the papers, I think it’s actually rather remarkable that practically all of them are under 5.0% impact.

    Having said that, and having read a few of the papers, I don’t regard any of them as being *actually* reliable. The statements I find most likely to be true in both Tol ’02 and Tol ’09 are the numerous caveats. I think this area of research deserves all the weasel words it can get. But again, that’s not at all unusual in the climate field, is it?

  25. I see what you mean with Figures 2 and 3, and agree with your assessment. Sloppy work, and it took the median instead of the mean as well. The actual (surveyed) impact should be -1.9%@+3.0C, with a range of 0 to -21%.

  26. Dale Stephenson:

    That the model used in the analysis has been updated only shows that more recent numbers *could* exist, not that they do exist. I noticed the source for the model was available, but I don’t think I could use it to update Tol ’02 at all. I doubt Tol himself could update it quickly, though maybe I’m wrong.

    I’ve looked at (an older version of) FUND’s code. It’s not pleasant. Getting it to run is a chore. That said, it can be done, and global impact results can be calculated with it. In fact, I’m pretty sure they have been. If not, I know regional impact results have been.

    Given that we have only two data points for +1.0 C (at least from the table), the two scenarios would seem to be positive-turns-negative and negative-turns-more-negative. I don’t see that the later estimates make the second scenario more likely than the first. Given the paucity of data would it be better to not mention the possibility that a positive impact exists at any increase? Not convinced of that. More sentences would definitely be good. The problem with Tol’s writing IMO lies not in what was said, but what wasn’t said.

    The later estimates actually do make the second scenario more likely. A positive-turns-negative scenario involves more change than a negative-turns-more-negative scenario. Greater changes require greater forces, meaning it’s harder for them to happen. That means they are less probable.

    That said, I am fine with mentioning benefits as a possibility. What I’m not fine with is mentioning them as the only possibility.

    Your point about signs (IPCC) versus differences (you) is what I was trying to get at with my misleading-but-true comment. I don’t want to belabor it because I don’t want to be rude. You point out — correctly — that noting that the signs differ obscures the truth that the vast majority of papers have a negative sign. I see pointing out the differences between Tol ’02 and other estimates as the same sort of problem — it’s true but misleading, because different questions are being answered.

    Ah. I don’t mind that criticism then. I structured my comparison the way I did because I was emulating the comparison the IPCC made. I figure if Richard Tol makes a comparison, I can use the same comparison to criticize him, even if that comparison is less than ideal.

    I figure if something is good enough for my target when it works in his favor, it’s good enough for him when it works against him. And if people don’t like the comparison I use, they can blame it on the guy I’m emulating.

    I see what you mean with Figures 2 and 3, and agree with your assessment. Sloppy work, and it took the median instead of the mean as well. The actual (surveyed) impact should be -1.9%@+3.0C, with a range of 0 to -21%.

    There were two other papers I couldn’t find his listed value in. I wonder if I should go back and look everywhere in the text for them.

    Nah. There’s a much larger problem in all this which I stumbled upon a little while ago. I should work on finishing the post I’m writing for it. I have most of it finished. I just need to go find some quotes.

    I’m pointing this out because of two things: 1) You are part of why I stumbled upon the issue; 2) The issue makes the word “obsolete” funny.

  27. Found a free copy of one of the Mendelsohn 2000 papers (Country-Specific Market Impacts of Climate Change) online. Calculating the impact of +1.0C from the response functions in it is beyond my ability to do quickly, so I can neither confirm nor deny the claim that the optimal temperature rise from the study is between 0.0 and +2.0C. However, given that the quantified impact of the Ricardian model is +0.04%@+2.0C and the reduced-form model is only -0.3%@+2.0C, I think it is extremely likely that there’s a net positive between 0 and +2.0C. The paper contains this quote:

    Most of the response functions imply that the net productivity of sensitive economic sectors is a quadratic function of temperature (Mendelsohn et al. 1997). Ceteris paribus, starting from cool temperatures, each economic activity increases in value as temperature increases to some maximum value and then decreases as temperature increases beyond that point. This is consistent with what we know about global economic productivity, where the most profitable sites for most climate-sensitive activities lie in the temperate or subtropical zones.
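
    For what it’s worth, here is a tiny sketch of the hill-shaped response that quote describes. The optimum, peak benefit, and curvature are invented, not the paper’s fitted coefficients; they are only chosen so the benefit peaks around +1.0C and crosses back through zero near +2.0C:

        # Hypothetical hill-shaped (quadratic) net impact, in % of GDP, as a function of warming.
        T_OPT = 1.0                # assumed warming (C) at which net impact peaks
        PEAK = 0.1                 # assumed peak benefit, % of GDP
        CURV = PEAK / T_OPT ** 2   # chosen so the curve returns to zero at 2 * T_OPT

        def net_impact(dT):
            """Net impact (% of GDP) for a warming of dT degrees above the baseline."""
            return PEAK - CURV * (dT - T_OPT) ** 2

        for dT in (0.0, 1.0, 2.0, 2.5):
            print(f"+{dT:.1f}C -> {net_impact(dT):+.3f}% of GDP")
        # +0.0C -> +0.000%, +1.0C -> +0.100%, +2.0C -> +0.000%, +2.5C -> -0.125%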

  28. Found a free copy of the other Mendelsohn 2000 paper “Comparing impact across climate models”. Couldn’t find a time series to derive a +1.0C estimate, though it’s slightly positive at +2.0C. But the text explicitly agrees with the positive-to-negative scenario and suggests a crossover shortly after +2.0C:

    The coefficient on temperature change is negative and significant in all the models. Although the net impacts of climate change are beneficial relative to an unchanged state, the models imply that higher temperatures are harmful. There are two explanations of this result. First, the climate-response function for temperature is hill-shaped, not linear. Starting from a cool climate, warming is beneficial at first. However, as warming continues, more countries exceed the optimum and warming becomes increasingly harmful. By 2100, all the GCMs predict that the unweighted global temperature change will exceed 2 C at which point further warming is harmful. Second, changes in precipitation and carbon dioxide are beneficial. Thus the overall net impact of the changes is beneficial, even though the marginal effect of additional temperature is harmful by 2100.

  29. Looking at a web draft of Nordhaus and Boyer 2000, the global damage function (figure 4-3) looks to start negative; at +1.0C I eyeball it at about -0.5%. About half of the regional damage functions (figure 4-4) follow the positive-then-negative path.

  30. Dale Stephenson, you may be interested in my newest post. You asked if any studies had estimated the net economic impacts of global warming thus far. As it turns out, Richard Tol may have done just that. His graph in the 2009 paper said the estimates were “relative to today,” the IPCC report didn’t give a baseline, and his correction said those estimates are “relative to preindustrial times.”

    Assuming his correction is actually correct, all of the warming we’ve experienced thus far would be accounted for in his graphs. We’d be nearly one degree into them. That means, even according to his original graph, we wouldn’t really expect to see net benefits from global warming in the future.

    Of course, his correction may not be correct. It could be the various estimates are relative to different time periods. If so, his conclusions are meaningless as they’re based upon a data set which has an unknown number of misplaced values.

  31. I don’t believe the correction is correct. More sloppy work, but I think all the papers I read were talking about future warming, not evaluating past warming. I would still be interested in a paper that *did* evaluate past warming’s net benefits/costs, which it seems would be easier to evaluate than future impact, even if the error margins would still be floor to ceiling.

  32. Yes, I saw that and responded. I missed it completely when I read it. As I pointed out in the other thread, that means we can’t place it on the same chart as papers that measure from the present, because we have no way of knowing what proportion of the harm has already occurred.

    Did you find any others that claimed an increase from pre-industrial times? I’d bet the cluster of +2.5C impacts are all operating from the same (modern) baseline.

  33. I think you’d be wrong. I looked at a few of them and saw:

    There are two Mendelsohn 2000 papers, but Tol only lists one of them in his reference section. It appears to be baselined at ~1994. This baseline is used to project an economic component to 2060 for its calculations. It is assumed a particular amount of economic growth will coincide with a certain amount of warming. The other Mendelsohn 2000 paper uses the same approach for its baseline but with different values.

    An additional oddity is both of the Mendelsohn 2000 papers use two models. That means each one gives two estimates, for a total of four. Tol listed Mendelsohn 2000 twice, but only included one in his reference section and only gave two estimated values. It is impossible to tell which values he actually used, especially since all of the values were “aggregated” by Tol via calculations he refuses to disclose.

    Maddison 2003 appears to be baselined at 1980.

    Hope 2006 uses a strange baseline where it calculates damages for areas based off temperatures exceeding particular values. It’s not clear to me this can be aggregated into a global result as each area’s tolerable temperature is different. It wouldn’t make sense to assume all areas exceeded their tolerable temperature range at the same point.

    Fankhauser 1995 and Tol 1995 are both baselined to 1990. The former is calculated for the United States only, then extrapolated to the rest of the globe. The same is true for the latter, only it includes Canada in its calculations.

    None of the temperature paths used for the papers I looked at are the same. I don’t see how one can claim they can be directly compared like Tol did.

  34. How can you directly compare when the temperature path is different? The same way you compare when the coverage is different and the model is different — very, very loosely. What a mess. When I said before that the other models weren’t comparable enough to Tol 2002 to contradict it, I didn’t know how right I was. Even the same baseline and same magnitude aren’t comparable unless the temperature path is identical, because of the effect on GDP.

  35. “Comparing impact across climate models” by Mendelsohn, Schlesinger, and Williams has got to be the Mendelsohn et al enumeration model. As I quoted above, the running text discussion claimed a net benefit through 2100, with a fall off after that (for temperature impacts divorced from CO2 and precipitation, the crossover was around +2.0C). I’m not sure what sort of number crunching would be necessary for this one; Table 7 lists total market impacts for all models from three different points of view, and given the small numbers involved, I think any of them would round to 0 compared to the global GDP in 2100. We know the Tol 2002 figure was taken from the “simple sum” calculation, and my guess would be that Mendelsohn’s “Average GCM” is the equivalent. The total numbers there are +59B for the experimental model and +146B for the cross-sectional model (in 1990 dollars). Per Wikipedia, the GWP for 2012 was 45,730B in 1990 dollars, up from 27,539B in 1990.
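
    As a sanity check on the “rounds to 0” claim, here are the ratios against the 2012 GWP figure quoted above. The impacts are for 2100, so dividing by the much smaller 2012 GWP actually overstates the percentages:

        # Impacts quoted above for what I take to be the "Average GCM" column, billions of 1990 dollars.
        impacts = {"experimental": 59, "cross-sectional": 146}
        gwp_2012 = 45_730  # 2012 gross world product, billions of 1990 dollars (Wikipedia figure cited above)

        for model, value in impacts.items():
            print(f"{model}: {value / gwp_2012:+.2%} of 2012 GWP")
        # experimental: +0.13%, cross-sectional: +0.32% -- and smaller still against 2100 GDP.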

  36. “Country-Specific Market Impacts of Climate Change” by Mendelsohn, Morrison, Schlesinger, and Andronova has got to be the Mendelsohn et al statistical model. I’m not sure why any aggregation by Tol would be necessary, since Tables 3 and 4 indicate a -0.3% impact from the reduced-form model and a +0.04% impact from the Ricardian model. As Tol lists it at +0.1%, the Ricardian model must be his source, but simple rounding would go to +0.0 and not +0.1. The results section indicates that this is from +2.0C at 2060, with a global economy of $95T compared to $21T “today”. Those numbers seem low.

    The results section also had an interesting caveat about the reduced-form model (which wasn’t used in Tol’s table):

    The reduced-form model predicts that agriculture will virtually disappear from most of Africa and Southern Asia, thereby causing large losses in agriculturally dependent countries. This relative damage is large in 2060, even with the expected economic development in these countries. However, given that the reduced-form model predicts that agriculture would not exist in these countries given today’s climate, one must be cautious about giving these results too much credence.

    I think that final sentence has a gift for understatement.

  37. OK, I missed the point that the table has values at +2.5C while the paper has them at +2.0C. That would require some additional calculation.

  38. Found another +1.0C projection, though only regional. Oddly enough, it’s from Tol 95 but found in Tol 02a. In table VII it lists the regional impacts for +1.0C, but in parentheses it gives the corresponding results from Tol ’95. As we know, Tol ’95 was one of the most pessimistic at +2.5C, and even its “best off” region was negative. The parenthetical regional impacts at +1.0C listed in Tol ’02 range from +0.6 to +5.0, so it certainly would also produce a positive projection at +1.0C. It was only below Tol ’02 in the OECD-A and OECD-E regions, but as those two regions have a disproportionate share of the world GDP, I don’t know if Tol ’95 was above or below Tol ’02 at +1.0C.

  39. HaroldW:

    Well, obviously I can’t answer for Richard Tol. However, I looked at the Nordhaus 94 paper (expert elicitation) which you mentioned earlier, and it is not clear about its temperature baseline. E.g. scenario A is described as a 3 K rise by 2090. I interpret this as being 3K from the then-current temperature, for two reasons. First, that’s the “natural” interpretation (in my opinion) in the absence of an explicit baseline. Second, scenario A is described as “in the middle range of the projections made by the IPCC.” Only the first AR was extant in 1994, and its projection for “business as usual” emissions and mid-range sensitivity (ECS=2.5K if I recall correctly) was around 0.3 K/decade, or 3 K/century.

    I mostly agree with this interpretation. The primary problem is that even with the two endpoints specified, each respondent will still have to create a temperature path of their own when considering the problem. The secondary problem is that even if one interpretation is more natural, we can’t know every respondent used it. It’s always possible a couple of people read it differently.

    Really though, I don’t see why guesses, even from supposed experts, should be used the same way as evidence from two decades later.

    So for this data point, one can infer the baseline, and it’s not pre-industrial. [As described in the captions to Tol(2009) corrected & updated figures 1&2.] But I wouldn’t be at all surprised if other studies were more ambiguous in this regard, making a definite answer impossible.

    I know one of the estimates is definitely relative to pre-industrial times. I don’t think any of the others are, but I haven’t checked them all. Of the ones I have checked though, there seems to be a range of ~30 years where baselines may be drawn.

    And of course, this isn’t just about what year estimates are relative to. A number of these estimates (but not all) depend upon how quickly temperatures change. Even if two estimates for 2.5C were relative to 2000, one might give damages for 2050 and the other for 2100. And there are a number of other confounding factors too.

    And yes, I don’t know how one can put all the data points on a single graph if their baselines are not reconciled or unknown. But frankly, I think the whole exercise, viewed quantitatively, stinks. If one views the answers as qualitative WAGs, OK. But the values are being treated in the same manner as noisy measurements, and they don’t seem to me (a) to be estimating the same property, and (b) to have any obvious or demonstrable distribution relative to some “true” value.

    Indeed. And remember, this is being given a meaningful amount of attention in the IPCC report. The Summary for Policy Makers uses it for a conclusion on the topic of economic losses, and it’s getting a figure, table and section in Richard Tol’s chapter. It’s also being used to get Tol a lot of media attention.

    It’s like Michael Mann’s hockey stick again, only on a smaller scale: tons of data errors, a refusal to share calculations, a fundamentally nonsensical methodology, undue weight in an IPCC report, undue media attention and limited critical analysis from other scientists. Oh, and let’s not forget an arrogant and obnoxious personality to go with the paper.

  40. Brandon, you mentioned “evidence from two decades later”. Where is that? I could really go for some actual evidence on impacts. At a minimum, surely someone could make an educated guess on the GDP impact on agriculture and SLR (the two big negative impacts in some of these reports) from the ~0.85C rise we’ve experienced already.

  41. I think “pre-industrial” for Bosello must be 1950, not 1850. It states its estimate is based on “a temperature increase of 1.92°C compared to pre-industrial levels in 2050 (consistent with the A1B IPCC SRES scenario)”. Looking at AR4 WG1 Figure SPM.5 and following the A1B line, it looks to me like about +1.5C, compared to a late twentieth century 0 point. I don’t think the math works for ~0.85C of the increase being already realized.

    I also don’t think Bosello is actually running their model from pre-industrial times. They only mention following A1B as far as I can see. Do you agree?

  42. Dale Stephenson:

    Brandon, you mentioned “evidence from two decades later”. Where is that? I could really go for some actual evidence on impacts.

    I was referring to the estimates Richard Tol shows. I get that some people would argue they’re not “evidence,” but that’s a semantic issue right now. The point I was making is just that it is weird to give the same weight to opinions from two decades ago as to a computer model from last year. Doing so either implies those opinions were incredibly important, or the model has almost no value.

    (Of course, the regression Tol used means the estimates aren’t given the same weight. That’s not an intentional design to account for this effect. It’s just another source of distortion.)

    I think “pre-industrial” for Bosello must be 1950, not 1850. It states its estimate is based on “a temperature increase of 1.92°C compared to pre-industrial levels in 2050 (consistent with the A1B IPCC SRES scenario)”. Looking at AR4 WG1 Figure SPM.5 and following the A1B line, it looks to me like about +1.5C, compared to a late twentieth century 0 point. I don’t think the math works for ~0.85C of the increase being already realized.

    I’m not sure why you think “pre-industrial” must be 1950 or 1850. I believe it’s normally defined as pre-1880. Regardless of the precise year used though, there is no way anyone would call something relative to 1950 relative to pre-industrial times. There was tons of industry in the 1940s.

    I also don’t think Bosello is actually running their model from pre-industrial times. They only mention following A1B as far as I can see. Do you agree?

    No. I think the problem is you’ve misunderstood what the A1B IPCC SRES scenario is. SRES stands for Special Report on Emissions Scenarios. These are scenarios used to represent possible emission rates in the future. Being consistent with such a scenario does not require any particular temperature path. The graph you looked at is for GCMs run under the A1B scenario. That is, you looked at the average value of a group of models run under a scenario.

    Bosello et al could easily use the same A1B emissions scenario but have a different temperature path. Tons of people have. Heck, none of the GCMs used to make that figure actually have the same path as the figure. It’s likely some of them would show as much divergence as Bosello et al show.

  43. Let me see if I can make my posts clearer with some formatting, because I’m obviously not doing a good job of communicating.

    I’m not sure why you think “pre-industrial” must be 1950 or 1850. I believe it’s normally defined as pre-1880. Regardless of the precise year used though, there is no way anyone would call something relative to 1950 relative to pre-industrial times. There was tons of industry in the 1940s.

    I don’t think “pre-industrial” must mean anything, but I’m trying to figure out what it means [b]to Bosello[/b]. In your [i]Missing the Obvious[/i] post you refer to 0.85C already being realized and link to page three of the WG1 AR5 SPM. The very first sentence on page three talks about temperature rise since the decade of 1850 and refers to figure SPM.1, and eyeballing the SPM I thought ~0.85 was in the vicinity. I stopped there, which was certainly a mistake on my part. If I’d done as I keep advocating and actually read all the text instead of just looking at the pretty picture, I would’ve noticed that the second paragraph explicitly used 0.85C as the gain from a linear trend 1880-2012.

    [i]However[/i], it should be noted that page 3 does [b]not[/b] describe 1880 as being the “pre-industrial” reference. Here is the paragraph in full:

    The globally averaged combined land and ocean surface temperature data as calculated by a linear trend, show a warming of 0.85 [0.65 to 1.06] °C, over the period 1880 to 2012, when multiple independently produced datasets exist. The total increase between the average of the 1850–1900 period and the 2003–2012 period is 0.78 [0.72 to 0.85] °C, based on the single longest dataset available (see Figure SPM.1).

    I don’t see pre-industrial defined in the SPM, nor do I see it used in section B1. So what does it mean to Bosello? Pre-industrial is just mentioned once:

    This study is aimed at assessing the economic consequence of climate change in the first half of the century deriving from a wide set of climate change impacts. The reference climatic scenario is the A1B IPCC SRES scenario implying a 1.92°C increase in 2050 compared to the pre-industrial level.

    This tells us the time period studied (first half of the century), the end magnitude (+1.92C), the end year (2050), and the climatic scenario (A1B IPCC SRES).

    It logically follows that the start year could be found by finding what A1B IPCC SRES suggests the rise is in 2050, subtracting 1.92C, and seeing what year that would be.

    In the AR4 WG1 SPM I found figure SPM 5, which displays multi-model averages for different scenarios, together with 20th century temperatures, a line being drawn between the real and projected at 2000. This is consistent with Bosello’s description. 0 on this graph is 1980-1999 average. The A1B line (green) appears to cross 2050 somewhere between +1.4 to +1.6. So the start year must be somewhere between -0.32 and -0.12. 1850 isn’t on the graph, but by comparison to 2000 I think it would be at around -0.6, clearly way too low for +1.92. I knew that 1950 is considered a magic year where anthropogenic influences start to matter, so I eyeballed 1950 on the chart and it’s about -0.4. Still too low, but at least it’s in the ballpark. Seemed plausible to me, though not certain.

    However, what about trend-1880? Historical 1880 looks about +0.1C higher than 1850, so it’s also too low. But trend 1880 is not on the figure, and the figures are a poor substitute for the actual numbers, anyway. Could it be trend-1880, your source for +0.85C? If it’s in the appropriate range, certainly it could.

    But — and this is an important point — is the start date for the magnitude the same as the start date of the model? According to the same paragraph that mentioned pre-industrial times, the goal is to assess the economic consequence of climate change in the “first half of the century”, and the scenario it’s using to go to 2050 starts in 2000. The CGE used for the study is ICES, and note 3 says of ICES that the calibration year is 2001 and the simulation time is 2001-2050. So the simulation time covers a period over which the reference scenario (A1B) increases somewhere between +1.2 and +1.4C. Unless there’s some pre-2001 simulation not described in the paper, it is not actually simulating a +1.92C increase, it is simulating the effect of [b]moving[/b] to a +1.92C differential by following the A1B path.

    In other words, this paper is not answering the question of “what happens if we go up +1.92C from now”, and instead is answering the question of “what happens if we follow A1B’s temperature path from 2001-2050 instead of not changing at all in that span.”

    The significance for Figure 10-1 is that it is misplotted on the chart. It’s probably not the only one, though as the only one identified so far using a “pre-industrial” reference point it’s probably the most affected in magnitude.

  44. Messed up the formatting on the last post, sorry. Now you make a good point that Bosello following A1B emissions does not mean its temperature path followed the multi-model mean. In the absence of any information clarifying that point in Bosello I think using the multi-model mean is a reasonable assumption. If you’re right about trend-1880 being the preindustrial start year, what is trend-2000? That would tell us what the actual magnitude simulated would be.

  45. Brandon, when questioned on evidence you write:

    I was referring to the estimates Richard Tol shows. I get that some people would argue they’re not “evidence,” but that’s a semantic issue right now. The point I was making is just that it is weird to give the same weight to opinions from two decades ago as to a computer model from last year. Doing so either implies those opinions were incredibly important, or the model has almost no value.

    In all likelihood, last year’s model does have almost no value — and that goes for all the other estimates too. They vary in approach and coverage, but from the only expert elicitation estimate to the recent CGE estimates, there are assumptions piled on assumptions and inputs derived from models that (to my knowledge) have never demonstrated skill. It’s not just weird to give the “same weight”, it’s weird to try to weight them at all. That was the central problem with the regression line in Tol 2009’s figure 1 (either original or updated). We certainly do not know enough to know whether the recent CGE papers will be better predictors than the enumerative or statistical models, or even the ancient “expert” guesses from a few decades ago.

    However, there’s one thing we could say, and that’s that the recent model-based papers tell much the same story that the enumerative ones do — the impact is small relative to GDP. The only point I see in bothering with all these flawed and practically useless estimates in the first place is that they’re the only thing we’ve got answering a vital policy question — how much is it going to matter. And while the estimates are widely varied in method, coverage, assumptions, age, and for small magnitudes even sign, there is a broad consensus that it’s not going to matter very much. The impact relative to GDP is small.

    Now just because there is a broad consensus doesn’t mean the consensus is right, of course. Personally, I suspect the minor harm being projected is still an overestimate, and I disagree with Tol ’09’s conclusion that this mess of estimates means “There is a strong case for near-term action on climate change.” Time will tell, or not, if the world continues to fail to warm according to projections.

  46. Dale Stephenson:

    In other words, this paper is not answering the question of “what happens if we go up +1.92C from now”, and instead is answering the question of “what happens if we follow A1B’s temperature path from 2001-2050 instead of not changing at all in that span.”

    A1B does not have a temperature path. It has an infinite number of temperature paths that could be realized. The only way Bosello et al could do what you say is if they specified which model (or combination of models) they were using to pick a temperature path for the A1B scenario.

    Given I don’t have access to the paper itself, I can’t say for sure, but the explanation I’m suggesting is very simple. Bosello et al could have done what pretty much every other paper cited in Tol’s work did: define a temperature path of their own. If so, their reference to the A1B scenario would just be to say what their chosen temperature path is consistent with.

    I don’t see any reason to look at the IPCC AR4’s multi-model mean when such a simple interpretation is available.

    In all likelihood, last year’s model does have almost no value — and that goes for all the other estimates too.

    If opinions from 20 years ago are considered of equal value to model outputs of yesteryear, then it stands to reason it took 20 years of development for those model outputs to “catch up” in value.

    Building upon this, it is reasonable to assume opinions have evolved in that same time period. As such, we can assume a survey now would be of far greater value than any of those model outputs. Or at least, that’s a logical outcome of the approach Richard Tol used.

    The problem I was highlighting with Tol’s approach isn’t that he weighted everything equally. It’s that in weighting everything equally, he implicitly argued past approaches have more value than new ones. That’s the only way past estimates, which used less evidence, could have equal weight.

    However, there’s one thing we could say, and that’s that the recent model-based papers tell much the same story that the enumerative ones do — the impact is small relative to GDP. The only point I see in bothering with all these flawed and practically useless estimates in the first place is that they’re the only thing we’ve got answering a vital policy question — how much is it going to matter.

    The lack of good evidence does not justify using bad evidence. I’m fine with people wasting their time doing shoddy work. That happens all the time in science. What I’m not fine with is people getting fame and credit for doing shoddy work. That’s especially true when they sneak their shoddy work into the IPCC report solely to promote themselves and their personal views.

    If Richard Tol’s work had gone another way and showed global warming was a serious threat, this conversation (in general, not with you) would be very different. Skeptics would be up in arms about what he did. They’d mock his work and hurl invectives at him for slipping it into the IPCC report after the external reviews had been completed.

    Instead, practically nobody will just say, “What Richard Tol did was wrong.”

  47. Brandon, I was able to find a free copy of Bosello 2012 online. I don’t have the address in my history, unfortunately. The exact phraseology used is: “The reference climatic scenario is the A1B IPCC SRES scenario implying a 1.92°C increase in 2050 compared to the pre-industrial level.” I see no other clues.

  48. You write:

    The problem I was highlighting with Tol’s approach isn’t that he weighted everything equally. It’s that in weighting everything equally, he implicitly argued past approaches have more value than new ones. That’s the only way past estimates, which used less evidence, could have equal weight.

    We don’t know that past estimates used “less evidence”. The papers differ in model and coverage as well as age. Even in the few cases where the model and coverage match, the assumptions and methods can be very different. All things being equal, you’d hope the more recent paper would be “better”. But in this field, all things never seem to be equal.

    Bosello 2012 calculates impacts to 2050 (+1.92C from “pre-industrial”), simulating 2001 through 2050. It estimates -0.5%. Tol 2002 is drawn from 2000-2200, but the +1.0C impacts are at 2050 (+2.4%). Time of change is almost exactly the same; magnitude of change is (probably) different. Model is different. Bosello does have access to a decade’s worth of work that Tol ’02 doesn’t have, but I suspect that advantage has a small impact compared to the assumptions made in the estimate design.

    Because, comparing the papers, it’s easy to see where the difference lies. Most of Bosello’s small damage is concentrated in agriculture. Most of Tol’s benefit is concentrated in agriculture. I think whatever technical improvements have occurred, if any, are going to be small next to that gaping dissimilarity.

    And I don’t think the question of whether a mild temperature rise will be harmful or beneficial to agriculture has been settled in the last ten years. Certainly the deleted IPCC figure 19-8 shows a continued divergence of opinion at +2.5 from “pre-industrial”. From what I’ve read, the chief cause for variance is assumptions about the presence and speed of adaptation.

    If Richard Tol’s work had gone another way and showed global warming was a serious threat, this conversation (in general, not with you) would be very different. Skeptics would be up in arms about what he did. They’d mock his work and hurl invectives at him for slipping it into the IPCC report after the external reviews had been completed.

    Certainly true. And meanwhile, the reaction of alarmists to Tol would also be very different. That’s just human nature. None of it is sensitive to the underlying quality of the work being “slipped in”.

    Instead, practically nobody will just say, “What Richard Tol did was wrong.”

    OK, I’ll say it. What he did was wrong. I like the figures that were deleted. I like paragraphs of caveats. I don’t like making big textual changes after review regardless of quality. I think Figure 10-1 (with table) is interesting enough to add and discuss, but not nearly interesting enough to serve as a complete replacement. And the placement of the papers on the graph certainly looks like it could stand some additional quality control. I also think he should reveal how he calculated the figures for papers that need that done, because some outside quality control is obviously needed.

    However, I don’t think the sentences “slipped in” are incorrect, and I think the points made deserved to be made, even if the points that got deleted also deserved to be made and were not. I’m prepared to say what Richard Tol did was wrong, but not prepared to say what Richard Tol said is wrong. And although I’ll agree that Tol ’09’s Figure 1 was shoddily done, between the regression line and the placement errors, I don’t yet agree that Tol ’09 as a whole is shoddy work or that the problems with Figure 1 invalidate his findings in that paper. I also think that public pronouncements by Tol that further moderate warming is likely to be only mildly damaging are representative of his field and not remotely reliant on his own work alone.

    And to be quite frank, I’m much more interested in what the papers are saying, and why, than I am in the integrity of the IPCC procedures. I think the IPCC is in the business of producing a political document more than a scientific document and see no hope that will change. This admittedly limits my ability to muster much outrage that something less alarmist somehow got in for a change.

    I think you’ve done a good job at pointing out some of the things that Tol has done wrong, but I get the impression that you think everything Tol has done is wrong. Even in Tol ’09 the use of Figure 1 was very limited and caveated. Subsequent utterances that are merely consistent with that figure should not be assumed to be derived from it and dependent upon it. Tol has coauthored papers with many of the other authors, so I think it unlikely his knowledge of their work is exclusively drawn from that one figure.

  49. Dale Stephenson:

    Brandon, I was able to find a free copy of Bosello 2012 online. I don’t have the address in my history, unfortunately. The exact phraseology used is: “The reference climatic scenario is the A1B IPCC SRES scenario implying a 1.92°C increase in 2050 compared to the pre-industrial level.” I see no other clues.

    I found a free copy too. It seems perfectly in line with the interpretation I described. I don’t see anything to suggest they defined “pre-industrial” in the strange way you suggested.

    We don’t know that past estimates used “less evidence”.

    What? Seriously, what? Leaving aside the fact that 20 years of study undoubtedly led to far more evidence being gathered, some of the papers specifically discuss having additional evidence. I don’t know how you can suggest newer papers might use the same amount of evidence as the older ones, much less that they might use less.

    OK, I’ll say it. What he did was wrong…. I’m prepared to say what Richard Tol did was wrong, but not prepared to say what Richard Tol said is wrong.

    I had been talking in general, as I indicated, but since you brought this up, I have to point out you’re not proving me wrong. You didn’t just say what he did was wrong. You devoted a little space to saying he was wrong then devoted far more space to other things, including partial defenses of what he did.

    You can tell a lot about people’s views by how they answer simple questions. For instance, suppose I were to say, “Do you agree what Richard Tol did is wrong?” The two direct answers are, “Yes” and, “No.” That’s not what I get though. The answers I get are, “No” and “Yes, but….” Generally, the latter answer is followed by a diversion to other topics. We saw it with Steve McIntyre when he diverted the focus of his agreement to a discussion of other examples of IPCC issues.

    On a similar issue, you say:

    However, I don’t think the sentences “slipped in” are incorrect,… I’m prepared to say what Richard Tol did was wrong, but not prepared to say what Richard Tol said is wrong.

    Try applying the standards you’ve tried to hold me to here. If you do, you’d have to criticize what Richard Tol wrote. The only defense you’ve ever offered for holding him to a different standard is he wrote fewer sentences than me, but that is a choice he made. I pointed out choosing not to provide information cannot be justification for providing bad information, and you ignored the issue.

    I also think that public pronouncements by Tol that further moderate warming is likely to be only mildly damaging are representative of his field and not remotely reliant on his own work alone….

    I think you’ve done a good job at pointing out some of the things that Tol has done wrong, but I get the impression that you think everything Tol has done is wrong. Even in Tol ’09 the use of Figure 1 was very limited and caveated. Subsequent utterances that are merely consistent with that figure should not be assumed to be derived from it and dependent upon it. Tol has coauthored papers with many of the other authors, so I think it unlikely his knowledge of their work is exclusively drawn from that one figure.

    This goes back to another point I raised that you ignored. You suggested a claim in the IPCC WGII SPM isn’t based solely upon Tol’s work because there were many papers addressing the topic. You raise the same point here. You suggest Richard Tol has extensive knowledge he can base claims off of.

    The point you ignored is crucial. Richard Tol could not have generated numbers out of thin air. He can’t generate entire summaries of the state of his field out of thin air. When he makes claims, he has to provide a basis for them. The only basis available is the work of his I’ve criticized. To claim he’s not basing these things on that work requires claiming he’s basing what he says, even specific numerical values, entirely on his personal opinion.

    If you think I’m wrong to attribute his views to that work, show what other basis he could have for them. Point us to some other basis on which he could claim a further warming of two degrees will lead to a 0.2% – 2.0% loss in GDP. What calculation could have possibly generated those numbers?

    For additional fun, you might want to explain how the IPCC SPM changed a “rise of 2.5C may lead to global aggregated economic losses between 0.2 and 2.0% of income” to “estimates of global annual economic losses for additional temperature increases of ~2°C are between 0.2 and 2.0% of income.” The former is taken from Richard Tol’s chapter, and it was added after the last round of external reviews, at the same time he added the sections promoting his own work.

  50. I think I may have found a clue to what the “pre-industrial” baseline would be. The IPCC AR4 synthesis report has this: “a 1 to 2°C increase in global mean temperature above 1990 levels (about 1.5 to 2.5°C above pre-industrial)”. NOAA has 1990 at a +0.40 anomaly, which would put the pre-industrial anomaly at about -0.10. The 2013 anomaly was +0.62 per NOAA, so the rise from “pre-industrial” would be about +0.72 through 2013. If Bosello followed the same definition, his +1.92 represents a further increase of about +1.2C from today — at 2050.

    Since Bosello lists 2001 as the calibration year (+0.55C), the increase over the 50 year period 2001-2050 would be about +1.27C, if he’s using the same baseline as IPCC AR4 synthesis report and calibrates to the NOAA-illustrated anomaly and not some other value.
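
    Spelling out that arithmetic (every input here is one of the anomalies quoted in this comment, not a value taken from Bosello 2012 itself):

        anomaly_1990 = 0.40   # NOAA anomaly for 1990
        anomaly_2001 = 0.55   # NOAA anomaly for Bosello's calibration year
        anomaly_2013 = 0.62   # NOAA anomaly for 2013
        offset_1990 = 0.50    # AR4 synthesis report: 1990 is roughly 0.5C above "pre-industrial"
        bosello_2050 = 1.92   # Bosello's stated 2050 rise over pre-industrial

        preindustrial = anomaly_1990 - offset_1990                 # ~ -0.10
        realized = anomaly_2013 - preindustrial                    # ~ +0.72 realized through 2013
        remaining = bosello_2050 - realized                        # ~ +1.20 further rise implied by 2050
        simulated = bosello_2050 - (anomaly_2001 - preindustrial)  # ~ +1.27 over the 2001-2050 simulation

        print(f"pre-industrial ~ {preindustrial:+.2f}, realized ~ {realized:+.2f}, "
              f"to come ~ {remaining:+.2f}, simulated 2001-2050 ~ {simulated:+.2f}")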

    How hard could it be to actually spell this thing out so we don’t have to guess? It’s not like “pre-industrial” temperature is remotely a fixed value in the first place.

  51. I think complaints about clarity would come better from someone who wouldn’t interpret “pre-industrial” as pre-1950.

    I’m all for clarity, but at a certain point, people are right to ask, “Did you even read what I wrote?”

    (Oddly enough, we had an example of that here just this week.)

  52. Brandon, you wrote of Bosello 2012:

    It seems perfectly in line with the interpretation I described. I don’t see anything to suggest they defined “pre-industrial” in the strange way you suggested.

    I agree, I don’t either. The strange way I suggested wasn’t based at all on the way they defined pre-industrial — they don’t define it at all. I was just working backwards from their final differential and looking for a rough match that was a “magic year”. I agree your trend-1880 is both more plausible and a whole lot more logical. But what I don’t know, and what you didn’t tell me, and what the link you gave in the other thread didn’t say, is what the temperature level in trend-1880 *actually is*. That’s the missing piece of information that could tell us what Bosello’s 2050 temperature actually is. As you point out, it can’t be safely assumed (as I did) that their end point is the same as the multi-model mean for A1B.

    But what I particularly wanted your opinion on wasn’t just where pre-industrial is located, but whether the impact and the magnitude of change actually being estimated is from 2001-2050. I believe based on Note 3 and the description in section 2 that they initialize to economic conditions in 2001, follow A1B IPCC SRES for population/GDP to 2050, and that the -0.5% impact is compared to what the GDP would be in 2050 if there had been no temperature rise between 2001 and 2050. As it happens, Tol 2002 simulates from 2000 on with +1C at 2050, so if I’m right about Bosello’s evaluation period, the time period may match and the magnitude *may* be in the same ballpark. Bosello may actually be as similar to Tol ’02 as we’re going to get as a comparison point.

    In response to my statement that we don’t know that past estimates used less evidence, you say:

    What? Seriously, what? Leaving aside the fact that 20 years of study undoubtedly led to far more evidence being gathered, some of the papers specifically discuss having additional evidence. I don’t know how you can suggest newer papers might use the same amount of evidence as the older ones, much less that they might use less.

    For starters, they don’t all use the same kind of evidence in the first place. One of the new papers (the big negative outlier) is based on self-reported happiness, and monetizes the impact of the population moving further away from the ideal temperature as currently found in places like Colombia. Yes, it’s considerably newer than the pioneering studies of the mid ’90s, but it’s not all covering the same sort of thing. If the same study had been performed 20 years earlier with the same methodology, it would be just as compelling, such as it is.

    Now, after I made that statement, I spent a few paragraphs on an illustrative example, comparing Tol 02 and Bosello 12. There’s a decade’s worth of difference between the two, but they are simulating the same time period, both ending in 2050. I’m sure every single input they have in common has been reworked and improved to the best of the users’ ability, but it’s easy to see from the impact charts that the biggest difference is in agricultural impact (biggest negative for Bosello, biggest positive for Tol). Is that a product of technical improvements? Is it a product of a (possibly) higher magnitude of warming for Bosello? Is that a result of being a CGE estimate instead of an enumerative estimate? Or is it the result of different assumptions about the magnitude and effectiveness of adaptation in agriculture? I’m betting it’s the last of those. The up-to-date FUND damage curve (from the deleted IPCC figures) *still* shows positive agricultural impact at “+2.5C” and is as recent as Bosello.

    But the question also comes down to what the definition of “evidence” is. Fundamentally, all these estimates are the same thing — trying to quantify the monetized impact from a rapidly warming world. How is “new evidence” going to be generated for that in a world that is not rapidly warming? The output of models represents the opinion of the experts constructing the models. Until it demonstrates skill, I don’t regard it as evidence. YMMV.

    On the subject of Tol’s actions being wrong, you said:

    I had been talking in general, as I indicated, but since you brought this up, I have to point out you’re not proving me wrong. You didn’t just say what he did was wrong. You devoted a little space to saying he was wrong then devoted far more space to other things, including partial defenses of what he did.

    My goal is not to prove you wrong. My main goal is to figure out what these papers are actually saying, so I can see for myself what the significance, if any, of Figure 10-1 actually is, and what the field is really saying, and how much confidence I can have in it. I’ve looked up a bunch of papers and I’m satisfied on some points, but certainly not all. What the papers say will influence how I feel about how accurate and/or misleading Tol’s vague statements are, but that’s a side-benefit, not my goal.

    You’re intelligent and have spent time in the literature, so you could, if you wanted, be helpful in figuring out things like what magnitude increase Bosello is actually evaluating. Talking about these things might even let you discover some new ammunition to bash Tol over the head with. Indeed, it’s already done so.

    But it’s your blog and your time, and it’s a bit of a sidetrack from what you made this post to discuss.

    I think complaints about clarity would come better from someone who wouldn’t interpret “pre-industrial” as pre-1950.

    Touche. However, that’s just being wrong, not unclear. Nor did my identification have anything to do with how I interpret “pre-industrial”. I don’t think it makes logical sense to describe a fixed temperature as representing “pre-industrial levels” in the first place.

  53. Dale Stephenson:

    The strange way I suggested wasn’t based at all on the way they defined pre-industrial — they don’t define it at all.

    Sure, but why should they? Everyone knows what “pre-industrial” refers to, more or less. If I write a paper and refer to the “Dark Ages,” I shouldn’t have to worry someone might interpret it as me referring to the 1700s.

    But what I particularly wanted your opinion on wasn’t just where pre-industrial is located, but whether the impact and the magnitude of change actually being estimated is from 2001-2050. I believe based on Note 3 and the description in section 2 that they initialize to economic conditions in 2001, follow A1B IPCC SRES for population/GDP to 2050

    The redundant nature of publications in this field may be causing you more trouble than you deserve. I think it should count as unethical to duplicate your own work as much as these authors do, but that’s a matter for another day. For today, I suggest you look at this document:

    http://edocs.fu-berlin.de/docs/servlets/MCRFileNodeServlet/FUDOCS_derivate_000000001904/LIAISE_WP1.2012.pdf?hosts=

    It explains the economic benchmarks used for the model for the Bosello 2012 paper in question. This document happens to be another Bosello 2012 reference I came across while researching this issue. Believe it or not, I actually am pretty thorough once I get interested in an issue 😛

    For starters, they don’t all use the same kind of evidence in the first place.

    I’m fine with thinking some of the newer papers don’t use more evidence than some of the older papers. The problem is in your generalization. There are only two ways the newer estimates could not use more evidence: 1) No published papers used data collected over two decades; 2) The published papers which used the more recently collected evidence were not included.

    I don’t think anyone can seriously claim 1) is true. That means the only possible way to accept your statement, as a blanket truth, is to believe Richard Tol ignored more recent work which used more data. One could believe he did that, but if so, nothing we’re talking about matters as he’s blatantly cherry-picking his data.

    But the question also comes down to what the definition of “evidence” is. Fundamentally, all these estimates are the same thing — trying to quantify the monetized impact from a rapidly warming world. How is “new evidence” going to be generated for that in a world that is not rapidly warming? The output of models represents the opinion of the experts constructing the models. Until it demonstrates skill, I don’t regard it as evidence.

    As I said before, the argument of what counts as “evidence” is just a semantic one at this point. One could easily replace the word with another like “information.” I don’t see it as mattering.

    As for what does matter, in the early 1990s, these people were working primarily with economic data for North America due to its availability/completeness. That’s why Tol 1994 and Fankhauser 1995 only did calculations for the United States (and Canada, in one) then extrapolated for the rest of the globe. Since then, data from far more countries has become readily available/accessible. That’s why newer papers have used data for many different areas.

    I don’t see a way one can believe newer work in this field is not based upon more data/evidence/information/whatever than older work.

    Touche. However, that’s just being wrong, not unclear. Nor did my identification have anything to do with how I interpret “pre-industrial”. I don’t think it makes logical sense to describe a fixed temperature as representing “pre-industrial levels” in the first place.

    I understand this. It’s just a matter of impressions. If somebody gives you the impression of coming up with crazy ideas, it’s hard to take them as seriously.

  54. Dale Stephenson:

    My goal is not to prove you wrong.

    In general, perhaps not. However, you made a comment directly in response to me saying nobody ever does X. When doing so, you portrayed yourself as doing X. It’s reasonable for me to point out you did not actually do X.

    My main goal is to figure out what these papers are actually saying, so I can see for myself what the significance, if any, of Figure 10-1 actually is, and what the field is really saying, and how much confidence I can have in it. I’ve looked up a bunch of papers and I’m satisfied on some points, but certainly not all. What the papers say will influence how I feel about how accurate and/or misleading Tol’s vague statements are, but that’s a side-benefit, not my goal.

    I get this, and I encourage it. I think a lot of what you say has value in and of itself. I think you may just need to pay more attention to the context in which you say it. Making a good point is great, but when you make it in response to something a person says, people will interpret it as a response to what that person said.

    This can be boiled down to a very simple difference. When a person says, “Yes, but…” they are undermining the point you make. When a person says, “Yes, in addition…” they are adding to the point you make. Adding additional material is always okay. The problem arises when people add new material in a way which undermines what was said.

    You’re intelligent and have spent time in the literature, so you could, if you wanted, be helpful in figuring out things like what magnitude increase Bosello is actually evaluating. Talking about these things might even let you discover some new ammunition to bash Tol over the head with. Indeed, it’s already done so.

    But it’s your blog and your time, and it’s a bit of a sidetrack from what you made this post to discuss.

    When you read my comment just above, you’ll see none of this has gotten in the way of me figuring out more about the Bosello 2012 paper. The reason is even if I were looking for ammunition to bash Richard Tol (I’m not), that doesn’t change the fact I want to understand things. I want to learn.

  55. I complain that Bosello 2012 doesn’t supply a definition of pre-industrial, and you reply:

    Sure, but why should they? Everyone knows what “pre-industrial” refers to, more or less. If I write a paper and refer to the “Dark Ages,” I shouldn’t have to worry someone might interpret it as me referring to the 1700s.

    True, everybody knows what “pre-industrial” refers to, more or less. It would be prior to the Industrial Revolution. But the paper isn’t about pre-industrial conditions at all; the only relevance of pre-industrial to the paper is establishing what the temperatures simulated for 2001-2050 actually were. Without knowing what the “pre-industrial level” temperature is, it’s not possible to do that, and as the actual pre-industrial era had neither a constant temperature level nor a global instrumental average, it’s not even possible to make good guesses.

    However, in my flailing about trying to identify a start year for the temperature increase, I made a number of errors that rendered the exercise fruitless, and they weren’t small ones:

    1) Assuming the end year was the multi-model mean so we could calculate backwards in the first place.
    2) Assuming the “pre-industrial” *year* actually mattered to the exercise, instead of just its temperature.
    3) Assuming that “pre-industrial” was picked by Bosello instead of being a standard in the field.

    Assuming Bosello’s pre-industrial differential is exactly the same as Richard Tol said in the other thread (+0.61), we have the necessary values to do the exercise. As HaroldW posted in the other thread, +0.61 is the differential between the 1850-1900 and 1986-2005 reference periods (using HadCRUT4 according to TS.2.2.1 in the AR5 WG1 TS). From the legend to Table TS.1 we find that the 1986-2005 reference period is +0.11 compared to the 1980-1999 reference period used in AR4 (which neatly makes a +0.50 differential for the same exercise based on AR4). So compared to the 80-99 reference period used in AR4, Bosello simulates a rise to +1.42C. (This is certainly close to the A1B multi-model mean, for the little that’s worth.) Given a start around +0.2C, we’re talking about a rise of ~+1.2C over 2001-2050. This seems unlikely. (A quick sketch of this arithmetic follows the list below.)

    So if we made our own plot (for illustrative purposes, NOT for regressions) of studies and plopped Bosello on it, where should its -0.5% be placed on the X axis?

    from “pre-industrial” (1850-1900) +1.92C
    from AR5 reference (1986-2005) +1.31C
    from AR4 reference (1980-1999) +1.42C
    from “today” ~+1.15C?
    over estimated period (in this case 2001-2050) ~+1.2C
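    (Here is that sketch of the conversion, for anyone who wants to check the arithmetic. The offsets are the ones quoted in this thread, not re-derived from the AR5 tables, and the ~+0.77C “today” figure is just what the ~+1.15C line above implies.)

        # Baseline conversion sketch. All offsets are the values quoted in this
        # thread (deg C); nothing here is re-derived from AR5 itself.
        endpoint_above_preindustrial = 1.92   # Bosello 2012 endpoint above 1850-1900
        offset_1850_1900_to_1986_2005 = 0.61  # HadCRUT4 differential (per HaroldW)
        offset_1986_2005_to_1980_1999 = 0.11  # AR5 vs AR4 reference periods
        today_above_preindustrial = 0.77      # implied by the ~+1.15C line above
        start_2001_above_1980_1999 = 0.2      # assumed starting point in 2001

        above_ar5_ref = endpoint_above_preindustrial - offset_1850_1900_to_1986_2005
        above_ar4_ref = above_ar5_ref + offset_1986_2005_to_1980_1999
        above_today = endpoint_above_preindustrial - today_above_preindustrial
        over_2001_2050 = above_ar4_ref - start_2001_above_1980_1999

        print(f"vs 1850-1900 ('pre-industrial'): +{endpoint_above_preindustrial:.2f}C")
        print(f"vs 1986-2005 (AR5 reference):    +{above_ar5_ref:.2f}C")
        print(f"vs 1980-1999 (AR4 reference):    +{above_ar4_ref:.2f}C")
        print(f"vs 'today':                      +{above_today:.2f}C")
        print(f"over 2001-2050:                  +{over_2001_2050:.2f}C")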

    One down, many to go. To the extent it makes sense to put these studies on the same chart at all, do you think it is more informative to plot them by endpoint temperature or by the magnitude of the warming actually being evaluated?

  56. Dale Stephenson:

    Without knowing what the “pre-industrial level” temperature is, it’s not possible to do that, and as the actual pre-industrial era had neither a constant temperature level nor a global instrumental average, it’s not even possible to make good guesses.

    Aye. I think Bosello 2012, and any other paper doing this sort of analysis, should be required to provide data clearly showing what temperature path they used. I’d say the same for any other path-type variables they might use.

    That just doesn’t have anything to do with them not defining “pre-industrial.”

    So if we made our own plot (for illustrative purposes, NOT for regressions) of studies and plopped Bosello on it, where should its -0.5% be placed on the X axis?

    The numbers you give seem reasonable enough. I’m not sure there’d be much value in such an exercise though. With the various papers using different estimates for GDP growth, I don’t see how a calculation of GDP loss could be comparable across temperatures.

    If one paper says a +1.5C change causes a 300% GDP growth to only be 295%, while another says a +1.5C change causes a 450% GDP growth to only be 440%, how do you compare them?
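    (For what it’s worth, the crudest way to put those two hypothetical numbers on the same footing would be to express each loss as a share of the counterfactual, no-warming end-of-period GDP; even then, about all it tells you is both losses are small relative to the assumed growth. A minimal sketch, purely for illustration:)

        # Illustration only: expresses each hypothetical loss as a share of the
        # counterfactual (no-warming) end-of-period GDP.
        def loss_share(growth_without_warming_pct, growth_with_warming_pct):
            """Loss as a fraction of the counterfactual end-of-period GDP."""
            without = 1 + growth_without_warming_pct / 100
            with_warming = 1 + growth_with_warming_pct / 100
            return (without - with_warming) / without

        print(f"Paper A: {loss_share(300, 295):.2%}")  # ~1.25%
        print(f"Paper B: {loss_share(450, 440):.2%}")  # ~1.82%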

    One down, many to go. To the extent it makes sense to put these studies on the same chart at all, do you think it is more informative to plot them by endpoint temperature or by the magnitude of the warming actually being evaluated?

    No clue. I don’t think either is particularly informative as you’re comparing results by two variables when there are half a dozen (if not more) variables which differ between them. One of the two you suggest might be more informative than the other, but it’d depend on the methodologies of the individual papers.

    If I had to take a guess, I’d say endpoint temperatures if you can find them for the same year. If not, I’d go with total magnitude.

  57. I had a quick thought. If you have total magnitudes for different years, you could calculate a rate of warming for each estimate then align results by that. There are lots of issues with doing that, but I think it would give you a more consistent comparison.
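    (A minimal sketch of what that alignment might look like. The study names and numbers here are placeholders, not real estimates, and the per-degree normalization of course only makes sense if damages are roughly linear in temperature:)

        # Hypothetical values only: (total warming in deg C, length of the
        # evaluation period in years, GDP impact in percent).
        estimates = {
            "Study A": (1.2, 50, -0.5),
            "Study B": (2.5, 100, -1.5),
            "Study C": (3.0, 90, -2.0),
        }

        for name, (delta_t, years, impact_pct) in estimates.items():
            rate_per_decade = delta_t / years * 10    # warming rate, C per decade
            impact_per_degree = impact_pct / delta_t  # crude linear normalization
            print(f"{name}: {rate_per_decade:.2f} C/decade, "
                  f"{impact_per_degree:+.2f}% of GDP per degree")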

  58. Aligning by rate would make a good comparison when the damage is linear or close to linear per degree. That probably applies to some of the estimates though certainly not all. But it does address an issue with different estimated lengths. Being 2% down in GDP after a 50-year warming period isn’t really the same as being 2% down after a 100-year warming period.

    If one paper says a +1.5C change causes a 300% GDP growth to only be 295%, while another says a +1.5C change causes a 450% GDP growth to only be 440%, how do you compare them?

    Only vaguely, as in “two models estimating GDP damage from +1.5C and massively different GDP end products both think the impact will be relatively small.” You certainly couldn’t read much into the differences between the predictions.

    I think this is where the emissions scenarios are intended to be useful, since they provide population and GDP information as well as emissions. If different estimates follow the same scenario for the same assumed base GDP, then they actually would be reasonably comparable, at least until we start comparing the coverage and assumptions. Naturally, there are multiple scenarios per family. Ideally, you’d also want them not just to have the same emissions/population/GDP but also the same assumed temperature path. If every estimate uses a different model for what the regional temperature patterns are, it’s yet another way to make the estimates incomparable.

    Looking at SPM 1-a of the 2000 SRES, I see the world GDP for 2050 in the A1 family (high growth) is $164T (1990 dollars) for A1FI, $181T for A1B, and $187T for A1T. The difference in GDP (by percentage) between the fossil-intensive path and the non-fossil path in the SRES is much, much larger than Bosello’s estimate of the impact of removing temperature change from the A1B path. If true, we’d want to abandon fossil fuels for the massive economic benefits that action grants, regardless of the effect on temperature. All the emission scenarios are based on a much richer future world ($20T in 1990). If the median expectation of a GDP 24.8 times as large in real dollars in 2100 as in 1990 represented the true no-temperature-increase result, I don’t know how much we’d care if temperature damages knocked that down by 5%.

    Figure 2 in the paper you linked on Bosello’s economic inputs says they used an A1B scenario for the benchmark, with world GDP growth of 500% through 2050, a figure that puts the -0.5% negative impact from the temperature rise in a bit of perspective. The USA line shows growth of over 2500%, while Figure 4 in Bosello ’12 shows a final estimated impact of about -0.1% for the USA. This is not the sort of economic projection that would drive Congress to take action on emissions today, I think. (However, the massive economic growth does mean the tiny percentage impacts represent a much larger amount of damage in absolute dollars.)
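    (Just to put those magnitudes side by side, a quick sketch using the 2050 figures quoted above; the -0.5% is Bosello’s estimated temperature impact on the A1B path:)

        # 2050 world GDP, trillions of 1990 dollars, from the SRES figures quoted above.
        gdp_2050 = {"A1FI": 164.0, "A1B": 181.0, "A1T": 187.0}
        bosello_temperature_impact = -0.005  # ~-0.5% of A1B GDP

        scenario_spread = (gdp_2050["A1T"] - gdp_2050["A1FI"]) / gdp_2050["A1B"]
        damage_in_dollars = abs(bosello_temperature_impact) * gdp_2050["A1B"]

        print(f"A1T vs A1FI spread: {scenario_spread:.1%} of A1B GDP")  # ~12.7%
        print(f"Temperature impact: {abs(bosello_temperature_impact):.1%} "
              f"(~${damage_in_dollars:.1f}T of a ${gdp_2050['A1B']:.0f}T economy)")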

    I rather get the impression that if the damage estimates are within an order of magnitude of being correct, and the SRES economic assumptions are anywhere close to being correct, any mitigation strategy that risks negatively impacting the massive future growth *at all* would be a staggeringly bad idea. But I don’t know that there’s any good reason to think the economic assumptions will predict any better than the GCMs have lately.

  59. I think the fit should not be against some arbitrary quadratic but against an econometric function that actually describes, in mathematical terms, what is happening. Say something like a Kuznets function, or an investment vs. profit relationship of the kind used to calculate break-evens or returns on investment. Furthermore, how are the two points at 1 degree evaluated and weighted? Surely these points are themselves the result of aggregations?

  60. Hans Erren, I agree. There’s no reason to think Richard Tol’s decision of how to fit the data is meaningful, much less realistic. Even if his data were comparable, the way he compares it is completely arbitrary.
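    (To illustrate how much the choice of functional form matters when you only have a handful of heterogeneous points, here’s a sketch fitting two different forms to made-up warming/impact pairs. The points are placeholders, not the actual published estimates, and neither form is necessarily the one Tol uses; the point is only that the extrapolations diverge:)

        # Illustration only: fit two functional forms to a handful of made-up
        # (warming in deg C, GDP impact in %) points and compare extrapolations.
        import numpy as np

        temps = np.array([1.0, 1.0, 2.5, 2.5, 3.0])
        impacts = np.array([0.5, -0.5, -1.0, -2.0, -2.5])

        # Form 1: quadratic through the origin, impact = a*T + b*T^2
        coeffs, *_ = np.linalg.lstsq(np.column_stack([temps, temps**2]), impacts, rcond=None)
        a, b = coeffs

        # Form 2: simple linear fit, impact = c*T + d
        c, d = np.polyfit(temps, impacts, 1)

        for t in (4.0, 5.0):
            print(f"At +{t:.0f}C: quadratic {a*t + b*t**2:+.1f}%, linear {c*t + d:+.1f}%")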

    But his data isn’t comparable. You’re right to ask how various points are “evaluated and weighted” given their differences. As Dale Stephenson and I discussed, the problem goes even beyond what you mention. These various damage estimates are done for different time periods, economic growth rates, and even baseline temperatures. There are so many differences between the scenarios they represent that we could never hope to draw anything but the crudest conclusions by comparing them. We certainly couldn’t draw valuable, quantified estimates with relatively narrow uncertainty ranges.

    But Richard Tol acts as though we can. I think he’s written something like a dozen papers doing this same thing, just fitting different models to the data. I was going to write about his latest approach to doing it, but I lost interest despite its absurdity. It seems people aren’t interested in critically examining Tol’s work. It seems skeptics don’t want to because he’s a rock-star to them while people on the other side don’t want to because his work is part of the “consensus,” and you aren’t supposed to point out problems with the “consensus.”

    It’s a shame really. Tol’s latest work is hilarious. In at least one recent paper, he’s published results which show uncertainty grows with the amount of data he uses. That is, his latest work argues having more data makes us less sure of our results. And according to him, that’s a good thing as it means we can examine subsets of the data instead of all of it in order to figure out what is the right answer.
