Tol Uses Libel Accusations to Silence Critics

A lot of attention has been given to the libel lawsuits Michael Mann has filed. Much of it has been negative. A lot of people think using a libel lawsuit to intimidate one’s critics is a bad thing. I agree. That’s why I’m disturbed by what I’ve found out about some of Richard Tol’s actions.

I, of course, have criticized Tol before. I’ve criticized him in the past for making nonsensical arguments. I’ve also criticized him for abusing the IPCC process to promote his own work and conclusions (by making major revisions without any oversight by the IPCC process). I’ve even criticized him for whining he was being censored because people applied normal moderation policies to his comments. But despite all that, I’ve never seen anything quite so disturbing from him as what I came across today. It began when I saw this tweet:

Naturally, I was curious why Richard Tol was talking about such serious penalties. I read back through the conversation, catching this highlight:

Here Tol explicitly claims (Frank) Ackerman admitted to libeling him. He also claims David Stern acknowledges Ackerman libeled him. Based on that, Tol believes he should receive hefty compensation. He believes Stern (the editor) should be fired. All because they admitted Ackerman had libeled Tol.

Only, Tol is just making that up. Neither ever admitted there was any libel. Stern has specifically said:

I certainly never agreed to Tol’s claims that this was formally “libel” etc.

Tol defended his claim Stern had admitted there was libel by saying Stern just wouldn’t use the word. I don’t understand how that works. I’m not sure how someone explicitly stating they didn’t say something was libel can be reconciled with claiming they admitted it was libel.

But whatever. It’s Twitter. It’s not a big deal. What’s a big deal is Tol made the same accusation to Ackerman’s employer. He wrote them a letter, saying:

I draw your attention to the attached letter by an associate editor of Ecological Economics. In the letter, Professor Stern notes that a paper by Ackerman and Munitz contains statements that are false, and that the authors knew these statements were false at the time they submitted the final version of the paper.

I understand that Frank Ackerman recently joined your company.

You should know that Mr Ackerman submitted the same falsehoods in a chapter of an edited volume to be published by Oxford University Press, and in a book to be published by Routledge. I have asked the publishers to amend the texts before publication.

I urge you to remind your employee that libel is against the law.

Contacting Ackerman’s employer to accuse him of breaking the law is a clear attempt at intimidation. It’s the sort of thing you’d do if you were trying to get him fired. That is not okay. It’s wrong. It’s especially wrong given Tol completely misrepresented what Stern has said.

The response from Ackerman’s employer is heartening:

Synapse Energy Economics is delighted that Dr. Frank Ackerman has joined our staff, and is proud to advertise his research accomplishments and publications – including his peer-reviewed article discussing the FUND model, which Richard Tol has attacked. In an e-mail to me, Professor Tol made the absurd and unsubstantiated suggestion that the article somehow constitutes libel. He urged me to remind my employees about libel law; I urge him to consider the damage that could be done to his reputation by becoming known for false accusations of libel.

One could see almost the exact same thing being written to Michael Mann. It’s the correct response to someone throwing around libel accusations in order to intimidate his critics into silence. And by intimidate, I mean Tol indicates he’s willing to sue if some other resolution isn’t reached. A letter of his to the publisher of Ackerman’s piece included:

While I am still hoping for an amicable solution, it is time to prepare for a more formal resolution. For the moment, I will put aside the option of a civil case for infringement of copyright and defamation. I will seek arbitration, however.

I have no idea what “infringement of copyright” Tol believes there may have been. I can’t help but be reminded of a similar threat used to try to silence me. I assume Tol’s threat is as baseless as that one was. Still, this shows Tol is willing to go around threatening to sue people if they don’t accede to his demands.

Frank Ackerman published a scientific article criticizing Richard Tol’s work. Tol responded by saying he was libeled, claiming people were lying, threatening to sue, repeatedly misrepresenting what people said, and now claiming people who discuss the controversy in ways he doesn’t like are lying too:

It’s Michael Mann again, just without the deep pockets. I’m amazed I’m only hearing about this now, nearly two years after the fact. Richard Tol has been receiving a lot of attention lately. I’d have expected this to come up. I guess skeptics just don’t mind blatant intimidation tactics coming from people they like.

For more on this topic, see here. For the origin of the current discussion, see this article. Be warned, that article is wrong on multiple points, and biased throughout.


58 comments

  1. This is what happened.

    Ackerman downloaded our code, thought he had found an error, and showed us his results. We pointed out that he had not found an error at all.

    Ackerman submitted the paper nonetheless, with an editor who is an old friend. The paper was published.

    We complained. Our response shows that Ackerman did not find what he claimed he found. Another editor of the same journal investigated the paper and the email exchanges. He concluded that Ackerman did not find what he claimed he found, and that Ackerman had suppressed information that showed that. In his rejoinder, Ackerman admitted that he had indeed not told the whole truth.

    Writing things that you know are false and that damage someone’s reputation is libel.

    I never threatened to sue Ackerman. You may notice, for instance, that he never produced a letter from my lawyer.

  2. Richard Tol, I’m sure you believe “Ackerman did not find what he claimed he found.” However, you’ve also shown you believe things like: finding patterns in sorted data shows a paper is unacceptably flawed.

    You’ve also shown you’ll criticize a paper despite having no idea what methodology it uses. I can pull up the links to your criticisms of Ludecke et al’s paper discussed at Judith Curry’s blog to show this. There, you claimed the use of detrended fluctuation analysis was wrong because what people were interested in was whether or not the trend existed. Anyone with the slightest understanding of DFA could immediately see you had no idea what you were talking about. All you did was see the word “detrended” and decide that meant the authors were wrong.

    Given I can provide example after example of you making ridiculous arguments and refusing to address anything your critics say in any meaningful fashion, I’m not inclined to discuss what you say about this situation. I can show what you say is wrong, but you’ll just run away from the discussion like you always do. Rather than repeating that sequence for the dozenth time, I think I’ll just stick to looking at things like: you seemingly tried to get a guy fired.

  3. Brandon,
    I have to say: It looks like Richard might have had a prima facie case to go to court. Whether he would win or not would depend on a particular fact: did the FUND model’s results potentially suffer from a divide by zero error? But this would not be a “baseless” suit. It might be pesky, and some might not think suits are the right way to go about it, but that’s different from “baseless”.

    I assume the relevant discussion by Stern is this

    The main point of contention is around Section 4.1 of the paper, which claims that the results of the FUND model could be affected by a division by zero problem. In my investigation, I had access to correspondence between Anthoff and Tol and Frank Ackerman prior to publication of the paper. In this exchange, Anthoff and Tol had told Frank Ackerman that the apparent division by zero problem was in fact addressed by the FUND model and the results were not substantially affected by it. I also relayed Tol’s concerns to Ackerman and received a reply from him. Based on the responses I received and the previous correspondence, I determined that some statements in the paper were problematic and that Ackerman and Munitz did not report in their paper the information they had received from the model developers about the division by zero issue.

    1) If the results of FUND could not be affected by division by zero, that statement was false. Since FUND is a research code, and Tol’s academic reputation would be affected by such a claim, the statement would be defamatory. If a statement is both false and defamatory, the issue would be libel. So, this would make a basis for a claim. So: a prima facie case could be made. In court, the question would be one of fact. But Tol presumably feels pretty confident that FUND was not affected by this division by zero issue. If so, the claim of libel would not be baseless – it would be one he could pursue in court and would win if he could convince a jury about this divide by zero issue.

    2) Richard told Ackerman his statement about the divide by zero error was false. (I think we can be pretty confident that Richard told Ackerman what he thought even if we didn’t know that Stern also communicated with Ackerman.) So: if Richard is correct about the division by zero problem, then Ackerman knows Ackerman’s claim about the division by zero problem is false. OTOH: if Ackerman thinks the division by zero problem existed, Ackerman would claim his statement about it was true. This would potentially be a matter for court (though heaven forbid a poor jury had to look at code!)

    3) Stern observed that although communication occurred, Ackerman doesn’t even mention the issue in his paper. So, he neither explains it is real– showing it– nor takes back the claim. He also doesn’t tell readers that the authors of FUND deny his claim. This would be ‘problematic’ as, given the importance of this issue to potential users of FUND (and the reputations of the authors of FUND) Ackerman should be doing one or the other.

    4) Stern doesn’t quite say Ackerman committed libel possibly because quite likely Stern doesn’t himself know whether the division by zero problem exists. Or he doesn’t want to say whether it exists or not. (Note: I have no idea whether the FUND models results were potentially affected by divide by zero errors. So I can’t begin to say whether libel occurred.)

    5) Based on this he posts the information about the FUND dispute, thereby preventing potential damage to Tol’s reputation. In contrast, if the journal had not written a letter, the claim FUND results potentially suffered from a divide by zero error would then appear in the formal literature.

    It seems to me at this point, if Ackerman wants to clear himself of the charge that he committed libel, he ought to show that FUND results actually suffer from a divide by zero error. If the potential exists, it should be easy for Ackerman to show. That would be a much more sensible course than asking people to sign a letter in support of his right to… do what? Claim models suffer from divide by zero errors? One only has that right if the model does suffer from them.

    If Ackerman cannot support this claim about a divide by zero error, he should stop claiming the model does suffer from a divide by zero error because if it does not suffer from such an error, then his claim is libelous. If, on the other hand, he demonstrates the divide by zero error, Tol’s reputation will be diminished– but it will not be libel because the claim will have been true.

  4. lucia, even if we accept the idea Richard Tol could offer a basis for a libel suit, I can’t imagine any argument for suing for copyright infringement. That’s what I had in mind.

    As for libel, I didn’t want to discuss it because I’m tired of chasing down Richard Tol’s made up claims. The standard for Tol’s suit in this case would be such that tons of scientists could be sued for their scientific papers. Michael Mann and at least a dozen of his co-authors could be sued under it. They could use the same argument to sue people like Steve McIntyre.

    That’s absurd behavior. It doesn’t matter if it could work in court. It’s nothing more than saying, “Someone criticized my work, so I’ll sue them.” It doesn’t matter who is right or wrong. If threats like that are considered okay, nothing could prevent people from being dragged into court every time they disagreed with Michael Mann. I mean, Richard Tol.

    Sure, being right in your criticisms might mean you could win a lawsuit, but what kind of discussion can be held if everyone has to worry every criticism they might publish could get them sued?

  5. Since it has come up, I guess I should discuss Richard Tol’s claim of libel a bit.

    Tol’s defense against Frank Ackerman’s criticism is not that his model could not be affected by divide-by-zero errors. It is indisputable his model could be. Tol’s defense is such errors could not affect his results because (he says) the Monte Carlo runs he did with his model were screened after-the-fact, in an undisclosed, undocumented step.

    Nobody who downloaded the FUND model could have known manual screening of results was necessary. They couldn’t have known which results should be screened out and which shouldn’t. That means Tol’s defense couldn’t possibly apply to the model itself. If what he said were true, it would only apply to results he got, in part, by using the model.
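    To illustrate the mechanism being described, here is a toy sketch (entirely hypothetical, with made-up numbers and function names; this is not the actual FUND code) of how a Monte Carlo run can blow up when a sampled parameter lands near zero, and how undisclosed after-the-fact screening hides that:

```python
import random

random.seed(0)  # deterministic toy example

def toy_model(eps):
    # Hypothetical stand-in for a model calculation that divides
    # by a sampled parameter; nothing here comes from FUND itself.
    return 1.0 / eps

# Monte Carlo: draw the parameter from a range that includes
# values arbitrarily close to zero.
draws = [random.uniform(-1.0, 1.0) for _ in range(10_000)]
results = [toy_model(e) for e in draws if e != 0.0]

# Some draws land near zero, producing enormous outputs that
# dominate any summary statistic computed from the raw runs.
unscreened_mean = sum(results) / len(results)

# "After-the-fact" screening: silently discard runs whose output
# exceeds an arbitrary cutoff. Nothing in the model code tells a
# user this manual step is required, or what the cutoff should be.
CUTOFF = 100.0
screened = [r for r in results if abs(r) < CUTOFF]
screened_mean = sum(screened) / len(screened)
```

    The point of the sketch: someone who downloads only the model code has no way to know the screening step exists, so a critic testing the code as distributed would see the blow-ups.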

    Put simply, Ackerman couldn’t be guilty of libel because what he said was absolutely true. Tol’s defense relies entirely upon misdirection and misrepresentations.

    Interestingly, even if people ignore factual disputes, it seems to me Tol would be guilty of libel under his own standards. Tol falsely claimed Stern and Ackerman both admitted to the libel. He was well aware neither did. Not only had he been told that in the past, his responses defending his claim showed his knowledge as well.

    Would anyone care to guess how Tol would have responded if Ackerman had suggested he could sue?

  6. lucia, even if we accept the idea Richard Tol could offer a basis for a libel suit, I can’t imagine any argument for suing for copyright infringement. That’s what I had in mind.

    Ahhh! Well, I have a better imagination. FUND is a program. I think Tol wrote it. Given copyright law, copyright would exist and possibly belong to Tol. Perhaps Tol would be alleging some sort of copying of the program without permission. Whether that could form the basis of any winning suit I can’t say as I don’t know enough details. But having a vivid imagination, I can imagine a copyright suit over copying of a computer program.

    The standard for Tol’s suit in this case would be such that tons of scientists could be sued for their scientific papers.

    Maybe. But mostly, papers don’t allege something like a program’s results suffer from divide by zero errors. They are discussing interpretation of data, testing of hypotheses, advancing theories, and such. So I don’t really think this opens the door to tons of scientists getting sued.

    More generally: I don’t think a statement being in a peer reviewed paper should be considered insulated from suits. The most relevant question: was there libel or not. If scientists really were publishing libelous statements under the cover of ‘scholarship’, the libel suits would probably be a healthy thing.

    But here I think we are just discussing normative issues in a general way.

    being right in your criticisms might mean you could win a lawsuit,

    It depends on what you were criticizing. If you were criticizing a claim that was false and defamatory, then yes, you could win a suit. Otherwise, not.

    In the 2nd comment you are discussing items that would be ‘matters of fact’ if a case arose. I’m not at all familiar with the model itself, how Richard used it and so on. So, I’m not up to speed enough to know whether your version of what happened is entirely correct. (I’m not suggesting I’m disputing it either – I am unfamiliar with the facts of the case. I don’t plan to spend time becoming familiar either!)

    With respect to this

    Put simply, Ackerman couldn’t be guilty of libel because what he said was absolutely true

    If this went to court, that would be addressed in the ‘finding of fact’ part– which is the trial. But our legal standards are that if Tol showed a prima facie case, he would get his day in court. My impression is this is your standard in saying Mann deserves his day in court.

    Tol falsely claimed Stern and Ackerman both admitted to the libel. He was well aware neither did.

    I don’t think Tol claimed Stern libeled Tol, nor that Stern admitted to libeling Tol. That’s not what you mean… right?

    If this went to court, I think Tol would say that Stern’s statements amount to saying Ackerman libeled Tol. Given the statements I’ve seen, I think if I were on the jury, I’d see Stern’s statement in the journal as describing Ackerman’s actions as likely libel without using the word. Others might not. But that would be an issue during the finding of fact. Given this, I don’t think Tol could be said to be ‘well aware’ anything he claimed was false.

    I haven’t found a quote by Ackerman that Tol interprets to mean Ackerman admitted libel. I found a link to a paywalled article. I wasn’t interested enough to spend $39 to read it.

    Luckily, so far, no one has actually sued anyone. No matter what might happen in court, a suit would be a nearly pointless time sink for all.

  7. lucia:

    Maybe. But mostly, papers don’t allege something like a program’s results suffer from divide by zero errors. They are discussing interpretation of data, testing of hypotheses, advancing theories, and such. So I don’t really think this opens the door to tons of scientists getting sued.

    Where would you draw the line? As an example, here are two criticisms:

    “There was a bug in your code which let zero be in the denominator in one module.”
    “Performing principal component analysis over segments of series instead of whole series is wrong.”

    In my experience, people are more sympathetic to programming bugs than mistakes in methodology. That’d seem to mean you think Michael Mann can sue anyone who says his original hockey stick used a wrong methodology. What about all the people who criticize GCMs? By damaging the reputation of climate modelers, do they all open themselves up to lawsuits?

    More generally: I don’t think a statement being in a peer reviewed paper should be considered insulated from suits. The most relevant question: was there libel or not. If scientists really were publishing libelous statements under the cover of ‘scholarship’, the libel suits would probably be a healthy thing.

    I don’t think peer-reviewed papers should be considered insulated from suits. I simply think you should be able to say a person is wrong without getting sued. I don’t think merely claiming a person made a mistake damages their reputation to the point where a lawsuit should be possible.

    It depends on what you were criticizing. If you were criticizing a claim that was false and defamatory, then yes, you could win a suit. Otherwise, not.

    You misunderstand. I was saying if you got sued for criticizing a person’s work, you might be able to win as the defendant in a suit. The point was even if a defendant can win such a lawsuit, the costs/risks of such a suit would still have a chilling effect.

    My impression is this is your standard in saying Mann deserves his day in court.

    The difference is Michael Mann was accused of using fraudulent work (amongst other things). Tol was accused of screwing up. I think there’s an enormous gulf between the two.

    I don’t think Tol claimed Stern libeled Tol, nor that Stern admitted to libeling Tol. That’s not what you mean… right?

    No. Tol claimed Ackerman libeled him. Tol then claimed both David Stern and Ackerman admitted to that libel. As in, Tol claimed Ackerman and Stern both admitted Ackerman libeled Tol. That’s not saying Stern himself was guilty of libel.

    (Though it’s also not too far from it. Stern was the publisher of the piece, and Tol has suggested he should have been fired.)

  8. By the way, I’m not arguing Richard Tol was wrong to suggest he could file a libel lawsuit. My remark about his threat being baseless was limited to his suggestion he could file a copyright suit (hence why I only discussed the copyright portion of his threat). It may be possible that he’d have a basis for a libel lawsuit. I don’t think such a suit should be possible, but I don’t know enough to say for sure that it wouldn’t be.

    What I do know is attempting to intimidate one’s critics into silence is wrong. Whether or not Richard Tol could file a libel lawsuit over a criticism like this, I think it is horrible he’d be willing to do so. Nobody should have to face a threat of legal action for simply pointing out mistakes they believe they’ve found in a person’s work. Even Michael Mann hasn’t gone that far in trying to intimidate people.

    Nevermind the fact Tol says far worse things about people on a regular basis.

  9. (Though it’s also not too far from it. Stern was the publisher of the piece, and Tol has suggested he should have been fired.)

    I don’t interpret Tol’s tweet to suggest this. It was a response in a series of tweets, and earlier on there is a “what would happen if ‘Y’” rhetorical-question tweet. But everyone agreed “Y” had not occurred. (This is a problem with both tweets and arguing using rhetorical questions. Clarity is often lost.)

    In my experience, people are more sympathetic to programming bugs than mistakes in methodology. That’d seem to mean you think Michael Mann can sue anyone who says his original hockey stick used a wrong methodology. What about all the people who criticize GCMs? By damaging the reputation of climate modelers, do they all open themselves up to lawsuits?

    You are focusing on the ‘sympathy’ issue. But there is another issue that is important in libel. That is the ‘fact/opinion’ distinction.

    Existence of a bug is a ‘fact’. Appropriate methodology is very often ‘an opinion’ to the extent that at the moment I can’t think of choices in methodology that could be called matters of ‘fact’. (I’m sure they exist). One needs very extreme cases before preference in choice of methodology can become an issue of ‘fact’. Opinions cannot be libel.

    With respect to ‘fraud’ in the hockeystick: the issue with Mann is whether he chose his methodology based on the results he ‘liked’. That is: some of the accusations swirling around involve cherry picking of all sorts– in one case, cherry picking of methodology. This is different from merely criticizing the methodology ‘qua’ methodology. It also changes the accusation from one about Mann’s opinion about what is a good methodology to a habit of cherry picking.

    On your last question: Depends what the person accused the climate modeler of. In my life I’ve overheard accusations of behavior on the part of some modelers that would absolutely constitute fraud. Basically, I heard one person say an unnamed graduate student who I did not know had programmed a code to bypass the actual calculation portions of the source code and include a ‘call results’ module that spit out results the graduate student ‘wanted’. That is: the results were not the output of the physical model. The student was able to show his advisor all sorts of ‘source code’ – and the advisor evidently took it on trust that the output came from the source code. (I don’t know if the accusation was true, nor the names of anyone involved.)

    No one has accused any named climate modeler or group of climate modelers of this sort of thing, but if such an accusation was made and it was untrue, that would be libel. Any modeler who was specifically connected to the accusation would have every right to sue the person who made such an allegation. Might the suit intimidate people into shutting up? Sure. But otherwise there is little remedy and people ought not to have the right to make such allegations if they are untrue.

    On the copyright– I’m not sure whether there is any basis. I’d assumed FUND was a code. But it appears it might be the mathematical model only? If the latter then I don’t see a basis for a copyright suit. But perhaps there is. I dunno.

  10. lucia:

    I don’t interpret Tol’s tweet to suggest this. It was a response in a series of tweets, and earlier on there is a “what would happen if ‘Y’” rhetorical-question tweet. But everyone agreed “Y” had not occurred. (This is a problem with both tweets and arguing using rhetorical questions. Clarity is often lost.)

    I’m not sure everyone agreed Y had not occurred. Y was, “Frank Ackerman libeled Richard Tol.” Tol has said, more than once, that David Stern admitted Y. He walked it back a bit on Twitter by basically saying, “Well, he didn’t use that particular word,” but it’s not clear to me he thinks that means he agrees Stern never admitted Y occurred.

    You are focusing on the ‘sympathy’ issue. But there is another issue that is important in libel. That is the ‘fact/opinion’ distinction.

    Existence of a bug is a ‘fact’. Appropriate methodology is very often ‘an opinion’ to the extent that at the moment I can’t think of choices in methodology that could be called matters of ‘fact’.

    I don’t see how that issue could be at play. The standard in libel law is that a reasonable person could interpret the offending statement as a factual statement. Statements like, “The methodology is wrong,” can certainly be interpreted as factual statements.

    But even if we didn’t accept that can be interpreted as a factual statement, something like, “The methodology overstates our certainty” certainly can be. So can, “The methodology biases results to be too high.” Or, “They implemented the methodology incorrectly.”

    People reading scientific criticisms/claims are often going to read them as factual assertions. I can provide tons of examples from the MBH/MM papers which are stated every bit as factually as, “There’s a bug.”

    On the copyright– I’m not sure whether there is any basis. I’d assumed FUND was a code. But it appears it might be the mathematical model only? If the latter then I don’t see a basis for a copyright suit. But perhaps there is. I dunno.

    It was code, and I don’t dispute it could be protected by copyright. I just know Ackerman downloading and using it to test it was the intended purpose of making the model openly available. He didn’t do anything with it that could trigger copyright infringement.

  11. Here’s Y; it involves the question of what would happen if Stern used the ‘l’ word as opposed to describing someone as having committed all the acts that constitute ‘l’:

    @readfearn So what would happen if a representative of the most profitable company in the world admits to libel?— Richard Tol (@RichardTol) May 27, 2014

    I read the following as an explanation of why Stern did not use the ‘l’ word but instead described behavior that – in Richard’s mind (and the minds of many) – would constitute ‘l’.

    Let me spell it out: Hefty compensation is paid, editor fired, publisher demoted.

    But you seem to read it as “Tol has suggested he [Stern] should have been fired”. Or at least that’s the tweet I think you are referring to when you say you think Tol suggested that.

    Now maybe you are right – or not. Obviously, I think my interpretation is correct; maybe I’m wrong. But I try to be careful about over-interpreting Twitter, especially if you don’t go far enough back in the stream or the tweets get disconnected. It’s a medium rife with the potential for misunderstanding – and in this case, I think you misunderstood.

  12. Brandon,

    “The methodology overstates our certainty”

    BTW: Sure. And I’ve looked at Ackerman’s paper and scribbled out a little Taylor series and thought about it. I think that claim by Ackerman is quite likely about ‘overstates our certainty’. (But I need Tol’s paper to see precisely what it said. I may end up with a post on this. It will involve Taylor series to compare closed form solutions to Monte Carlo and discuss what statements mean and what one might interpret them to mean. But before I can write this, I’m going to ask Richard for his paper – and I’m emailing him.)

    But it may be that under US law there would be no libel because even though this looks like a claim of fact, it is really a conclusion based on a series of stated facts. So people can investigate the facts and form their own conclusion. (I’d need to find the Volokh post on this if you are unfamiliar with the issue.)

    That said: UK law may differ on the whole “conclusion based on facts” issue. So it might be libel in the UK. I have no idea. (The UK has had some rather screwy libel laws and is only just starting to catch up to where we were in colonial times, when colonials decided that truth was a defense to a libel claim. But… alas… I’m just not going to waste time becoming an expert on those.)

    He didn’t do anything with it that could trigger copyright infringement.

    I tend to agree with you– at least under US law. UK? Dunno.

  13. lucia:

    That’s not Y!

    You’re right. I’m not sure why I mixed those up. I knew what I had in mind, but it was different from what you had in mind. Anyway, my point was Richard Tol said:

    So what would happen if a representative of the most profitable company in the world admits to libel?

    Let me spell it out: Hefty compensation is paid, editor fired, publisher demoted.

    This was shortly after Tol said David Stern had agreed with Tol’s accusations of libel. If Stern agreed the libel happened, and if him agreeing the libel happened should get him fired (I mistakenly called Stern the publisher before; he was actually the editor), the only conclusion I can draw is Stern should get fired. According to Tol, Stern just avoided that by not using the “l-word.”

    The only way I can see interpreting that differently is if we point out Tol said “admits to libel” not “agrees a piece he was responsible for was libelous.” That wouldn’t make any sense in context though. Tol had referred to the latter, not the former, throughout that conversation.

    Tol said Stern should be fired if he admitted the libel happened, and Tol said Stern admitted the libel happened. That seems a pretty clear suggestion Stern should have been fired.

    By the way, what Stern described was not libel. He didn’t allege any falsities, a minimum requirement for libel.

    BTW: Sure. And I’ve looked at Ackerman’s paper and scribbled out a little Taylor series and thought about it. I think that claim by Ackerkman is quite likely about ‘overstates our certainty’.

    Not that I disagree with this statement, but I don’t see how it follows from what was said. I was discussing hypothetical examples, not any specific one. The point I was making is many criticisms I’ve seen of methodologies are (as far as I can tell) as factual as saying there’s a bug in someone’s code. In other words, I can’t see a relevant difference between what Frank Ackerman said and someone saying, “Michael Mann’s implementation of principal component analysis was faulty and cherry-picked certain patterns.”

    As far as I can tell, the latter is completely factual and far more damaging to a reputation. Would you say Mann could sue for it?

  14. Brandon
    Once again, when you are imagining the only other thing it might mean you are focusing on a different aspect of the tweet than I do!

    You write:

    Tol said Stern should be fired if he admitted the libel happened,

    Another interpretation involves replacing “should” with “would”. That is: the journal would fire him if he admitted something that resulted in a judgement against the journal. That was my interpretation of Tol’s meaning, and it’s not the same as saying the journal should fire him for admitting something if it had indeed occurred. Rather: Stern has a strong motivation not to use that word because he doesn’t wish to be fired and he thinks (or suspects) it would occur. So my reading: this was not a ‘normative’ statement about what ‘should’ happen but an observation about what ‘would’ happen.

    Possibly we could be more certain if the statement was longer than a tweet.

    Not that I disagree with this statement, but I don’t see how it follows from what was said. I was discussing hypothetical examples, not any specific one.

    I didn’t say it follows from what you said. Possibly I should have elaborated on what I meant by “sure” in “BTW: Sure”. I concede that could be a factual statement. But I happen to think that, as a factual statement, that particular claim is incorrect if applied to Tol’s FUND. (Were we to return to the libel issue: in that case, if that statement were somehow ‘defamatory’ it could now be part of a libel claim, provided it wasn’t an inference made from disclosed facts.)

    “[X]’s implementation of principal component analysis was faulty and cherry-picked certain patterns.”

    Ok. I concede that criticism of methodologies can amount to statements of fact and this is one. But I would say this one could be libel if it’s untrue. I don’t have any gripe with people suing for libel if a statement of fact is false and defamatory.

    I guess the reason criticisms in journal articles generally don’t (and in fact can’t) result in libel claims is that they are usually surrounded by statements of fact that make a statement a “conclusion based on facts”. So, for example, in the US, if someone merely made the claim you made up for your example but surrounded it with facts explaining why they believed it, the statement would not be libel. But if they just made the bare claim, it could be.

    The issue of inferences based on disclosed facts and libel is discussed here:

    http://www.washingtonpost.com/news/volokh-conspiracy/wp/2014/05/16/libel-law-and-inferences-from-disclosed-facts/

    It protects many including newspapers and journal articles from having inferences be deemed ‘libel’.

  15. Now that I think about it, I’m not sure it is right to say Frank Ackerman claimed there was a bug in the code. The issue Ackerman highlighted was that the model would divide by zero (or at least, numbers very close to zero). I don’t see why that should be called a bug. It seems to me it could just be called a modeling error. As far as I can see, the “bug” isn’t a bug. It’s just an effect of programming the equation used in the model.

    The equation can be seen here. It allows for division by zero and near-zero numbers. I don’t see how a correct implementation of it qualifies as a bug.

    (I forgot how to embed images.)
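
    For concreteness, here is a minimal sketch of the equation being discussed, using the simplified form Ackerman quotes as Eq. (2) elsewhere in this thread; the function name and the sample parameter values are mine, and the point is only that a correct implementation necessarily blows up as Topt approaches 1.6:

```python
# Sketch of Ackerman's Eq. (2), in slightly simplified notation:
#   Impact = (-2*A*Topt/(10.24 - 6.4*Topt))*T + (A/(10.24 - 6.4*Topt))*T**2
# The shared denominator vanishes at Topt = 1.6. Parameter values
# below are illustrative, not taken from FUND.

def agriculture_impact(T, Topt, A):
    """Percentage change in agricultural output for a temperature change T."""
    denom = 10.24 - 6.4 * Topt    # zero when Topt == 1.6
    return (-2.0 * A * Topt / denom) * T + (A / denom) * T ** 2

# Well away from the critical value the impact is modest...
print(agriculture_impact(T=2.0, Topt=1.0, A=0.01))
# ...but it blows up as Topt approaches 1.6.
print(agriculture_impact(T=2.0, Topt=1.59999, A=0.01))
```

    On this view the “bug” is simply the equation doing what the equation says, which is Brandon’s point: a faithful implementation is not a coding error.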

  16. lucia:

    Another interpretation involves replacing “should” with “would”. That is: the journal would fire him if he admitted something that resulted in a judgement against the journal.

    That is actually what I had in mind. I guess I should have said “should be expected to be fired” instead of “should be fired.” I just didn’t think to make the distinction between describing how the organization would act and endorsing how it would act.

    I didn’t say it follows from what you said.

    Gotcha.

    Ok. I concede that criticism of methodologies can amount to statements of fact and this is one. But I would say this one could be libel if it’s untrue. I don’t have any gripe with people suing for libel if a statement of fact is false and defamatory.

    I do. Even if it were actionable, I don’t think suing a person over a statement like that made in the normal process of scientific discourse is appropriate.

    I guess the reason criticism journal articles generally don’t — and infact can’t — result in libel claims is that they are usually surrounded by statements of fact that make a statement a “conclusion based on facts”.

    It seems to me that would lead to impossible standards regarding specificity in scientific disputes, but I won’t argue you’re wrong. Similarly, I still don’t see why saying someone made a mistake should be considered defamatory, but I won’t argue it isn’t.

    Whether or not it could be done, I think threatening to sue simply because someone disagrees with you on a scientific issue is inexcusable.

  17. Brandon,
    If you are going to be literal, Tol didn’t threaten to sue. Characterizing something as libel is not the same as threatening to sue over it. We all went over this in the discussions about people requesting corrections to the Lewandowsky papers. I can’t claim I remember your position on whether people mentioning they thought Lew’s statements were actionable was the same as actually threatening to sue, but I think it isn’t.

  18. Brandon:

    Similarly, I still don’t see why saying someone made a mistake should be considered defamatory, but I won’t argue it isn’t.

    I suppose it comes down to the claim that Ackerman published something that he knew, or should have known, was false. I had never heard of FUND before today, but hypothetically, suppose it were commercial software (in the sense that licensing fees are collected for its use). Then knowingly making false statements about defects in the program could unfairly affect the marketability of that program.

    By the way, I noticed that Ackerman has a link to his paper:

    http://frankackerman.com/publications/climatechange/Ackerman_Munitz_Ecological_Economics_2012.pdf

    It’s too bad that most of the rest of this is paywalled. I don’t have access through my University to that journal, nor do I feel like paying their exorbitant fees for what amounts to short comments.

    (I really don’t understand the policy of not moving editors’ comments out from behind the paywall. What gives with that?)

    By the way, regarding bug or misfeature, here is how I view it:

    If the purpose of a program is to faithfully reproduce an algorithm, then it would be a bug for it to not divide by zero if the original algorithm also did so. If however, the purpose of the program is to solve a particular modeling problem, then using an algorithm that has a divide by zero would be a software bug (inappropriate model for that domain).

    I can’t see Stern’s comment, nor will I pay to see it, so it’s hard for me to say much more on that.

  19. Lucia:

    If you are going to be literal, Tol didn’t threaten to sue.

    I didn’t read it that way either.

    Given that Brandon will complain rather strongly about other people misinterpreting him (e.g., Mark Steyn), I’m surprised he wasn’t more careful to analyze what Tol actually said.

  20. lucia, Carrick, Richard Tol didn’t just characterize things as libel (which is not threatening anything). As I quoted in the post, he specifically said:

    For the moment, I will put aside the option of a civil case for infringement of copyright and defamation. I will seek arbitration, however.

    He specifically brought up the issue of a lawsuit, saying he’d put it aside for the moment. That is notably different than:

    Characterizing something as libel is not the same as threatening to sue over it.

    If we’re going to be literal, Tol indicated he had considered filing a lawsuit, but he was going to refrain, temporarily, while he pursued another avenue of redress. The apparent reason for bringing up the idea of a lawsuit was to try to pressure the other party into acceding to his demands. That’s using the threat of a lawsuit to intimidate them.

    I’m open to alternative interpretations if you guys have them, but those interpretations can’t sensibly ignore the fact Tol brought up the possibility of him filing a lawsuit.

    I suppose it comes down to the claim that Ackerman published something that he knew, or should have known, was false. I had never heard of FUND before today, but hypothetically, suppose it were commercial software (in the sense that licensing fees are collected for its use). Then knowingly making false statements about defects in the program could unfairly affect the marketability of that program.

    That probably wouldn’t rise above product disparagement, but even if it did become defamation, it would be defamation of the FUND model, not Richard Tol.

  21. Brandon, I will concede that Tol used legal pressure in the emails/physical mail between himself and SEI to try and resolve this dispute.

    However, the resolution he was seeking seems to have been arbitration and not capitulation. That seems like an entirely reasonable demand under the circumstances:

    Ackerman has published multiple pieces on FUND (as I have now learned). He modified the model (as I understand it, he replaced Tol’s equation with a modification that doesn’t have the divide-by-zero problem) and the code (he also removed post-processing that Tol claims “fixed the problem”), and it was with these model and software changes that he saw large differences when he tested the model using a Monte Carlo approach.

    It’s possible that Ackerman is correct about the errors, but I’d like to see Stern’s comment before parsing any more of this.

    That probably wouldn’t rise above product disparagement, but even if it did become defamation, it would be defamation of the FUND model, not Richard Tol.

    Is that a distinction that actually has any significance, assuming Tol owns the software/model?

    If your research is built around a particular work, which people are now erroneously claiming is defective, does this not cast you in a bad (and false) light?

  22. By the way, learning more about an economic model than I ever wanted to (oh blah!), his source code is here:

    http://www.fund-model.org/source-code

    Ackerman seems to be pretty open about the issues. All of the papers in question are linked to from here:

    http://frankackerman.com/tol-controversy/

    In particular, the comments of the editor are here:

    http://frankackerman.com/Tol/EcolEcon_editor_statement.pdf

    This is probably the comment by Stern that Tol is commenting about:

    Based on the responses I received and the previous correspondence, I determined that some statements in the paper were problematic and that Ackerman and Munitz did not report in their paper the information they had received from the model developers about the division by zero issue.

  23. Lucia, making a false statement is not enough for libel. They have to show that the author knew it to be false. There might even be a requirement of malicious intent. Now, Stern’s statement could maybe be interpreted as saying the authors knew it to be false. If that is a proper interpretation, then Tol’s statement that Stern said Ackerman committed libel doesn’t strike me as outrageous. I don’t see any evidence that Ackerman acknowledged he knew of the error, though Stern says he had a back and forth with Ackerman about it.

  24. This comment by Nordhaus in response to a criticism by Tol is interesting too:

    http://frankackerman.com/Tol/Nordhaus_comment_on_Tol.pdf

    The ratio of two normal distributions with non-zero means is a non-central Cauchy distribution. A non-central Cauchy distribution has a standard Cauchy term and another complicated term, but we can focus on the Cauchy term. This distribution is “fat tailed” and has both infinite mean and infinite variance. So the level damage from agriculture in FUND 3.5 (from a statistical point of view) will dominate both the mean and dispersion of the estimated damages. Taken literally, the expected value of damages to agriculture are infinite at every temperature increase. This is subject to sampling error in finite samples of any size, but the sampling error is infinite since the moments do not exist, so any numerical calculations with finite samples are (infinitely) inaccurate. There is also a coding issue because it is not possible to get an accurate estimate of the distribution of a variable with infinite mean and variance in finite samples. The most troubling impact of this specification is the estimate of the distribution of outcomes (such as the social cost of carbon or SCC). If the damages are a fat tailed distribution, then the SCC is also fat-tailed. In finite samples, of course, all the moments are finite, but the estimates are unreliable or fragile and depend upon the sample.
    Tol indicates that they do a check of the outcomes both by inspection and by trimming the extremes. As an analytical matter, a trimmed distribution is even more complicated than the Cauchy, but it still will have an infinite mean and variance.
    I assume that this strange distribution was not intended, and in any case is easily corrected. My point was not to dwell on the shortcomings of our models. Rather, we need to recognize that most economists and environmental scientists are amateurs at software design and architecture. As computers get faster, as software packages get more capable, as our theories get more elaborate – there is a tendency to develop models that increase in parallel with the rapidly expanding frontier of computational abilities. This leads to increasingly large and complex models. We need also to ask, do we fully understand the implication of our assumptions? Is disaggregation really helping or hurting?

    Note in particular: This distribution is “fat tailed” and has both infinite mean and infinite variance. So the level damage from agriculture in FUND 3.5 (from a statistical point of view) will dominate both the mean and dispersion of the estimated damages.

    Also notice: In finite samples, of course, all the moments are finite, but the estimates are unreliable or fragile and depend upon the sample.

    This seems to be a substantive issue to me.
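
    Nordhaus’s claim that a ratio of normals is fat tailed with tail index 1 can be checked numerically. Below is a minimal sketch (the means 1 and 0.5 are illustrative choices of mine, not FUND’s parameters): if p(|Y| > r) falls off like 1/r, then r·p(|Y| > r) should stay roughly constant as r grows.

```python
import random
random.seed(0)

# Ratio of two normals with non-zero means: Nordhaus's non-central
# Cauchy. The means (1 and 0.5) are illustrative only. If the tail
# index is 1, the product r * p(|Y| > r) stays roughly constant.
n = 200_000
ys = [random.gauss(1, 1) / random.gauss(0.5, 1) for _ in range(n)]

def tail_frac(ys, r):
    """Empirical probability that |Y| exceeds r."""
    return sum(1 for y in ys if abs(y) > r) / len(ys)

for r in (10, 100, 1000):
    print(r, tail_frac(ys, r), "r*p =", r * tail_frac(ys, r))
```

    A tail decaying like 1/r is exactly the regime where the mean and variance of the sampled quantity fail to exist, which is Nordhaus’s point.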

  25. MikeN,
    I’m not going to pretend I know a lot about defamation law. But this page doesn’t say they had to know it was wrong.

    http://www.law.cornell.edu/wex/defamation

    To establish a prima facie case of defamation, four elements are generally required: a false statement purporting to be fact concerning another person or entity; publication or communication of that statement to a third person; fault on the part of the person making the statement amounting to intent or at least negligence; and some harm caused to the person or entity who is the subject of the statement.

    “fault on the part of the person making the statement amounting to intent or at least negligence” is less than knowing it was false. Negligence is enough, and in a research article, false claims about problems while purporting to have ‘researched’ the issue might be seen as negligent. I don’t know if this is so, but it seems to me the standard for identifying negligence ought to be raised if the forum claims the statement is based on research.

  26. Carrick,

    This seems to be a substantive issue to me.

    I think it is a substantive issue. But I also think it is one where Ackerman may be taking a bunch of facts that are true (and a few that are confused) and drawing an incorrect conclusion about the effect on estimating uncertainty intervals. Remember: the standard deviation is not an uncertainty interval. It can be used as a proxy for one; generally that works if the distribution is Normal, or at least not fat tailed. But I’m pretty sure (though tentatively) that the standard deviation can be infinite while one can still estimate 95%, 99%, or pretty much any confidence interval one likes just as accurately as if the standard deviation were finite.

    You’ve pretty much quoted the bit of Ackerman that I strongly suspect is probably… wrong or confused at least with regard to final inferences about FUND or anyone’s ability to rely on FUND’s computation of uncertainty intervals.
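
    That claim is easy to check numerically. A minimal sketch: for a standard Cauchy (which has no finite variance at all), drawn via the inverse CDF tan(π(u − 1/2)), the empirical 97.5th percentile of a large sample lands close to the exact value tan(0.475π) ≈ 12.71, even though the sample variance never settles down.

```python
import math
import random
random.seed(1)

# Standard Cauchy draws via the inverse CDF. The distribution has
# infinite variance, yet its percentiles are perfectly well defined
# and the empirical percentile estimates them accurately.
n = 200_000
xs = sorted(math.tan(math.pi * (random.random() - 0.5)) for _ in range(n))

q975 = xs[int(0.975 * n)]
print("empirical 97.5th percentile:", q975)
print("exact value:", math.tan(0.475 * math.pi))
```

    So an infinite variance, by itself, does not prevent reliable percentile-based uncertainty intervals; it only rules out the shortcut of quoting 1.96 standard deviations.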

  27. lucia, yes that is Nordhaus’s criticism. I agree that there are some issues with it: I suspect, mostly because they don’t like the implications of the model, people are making a big deal about poles (“divide by zeros”) that sit on the real axis, but poles that lie near the real axis (for complex values of the parameter space) can be every bit as big a problem.

    I could imagine examples where the pole on the real axis was sufficiently far away from the physically realizable domain space of the parameters that it wouldn’t have any undue influence on the interpretation of the results, as well as seemingly well-behaved models with complex-valued poles near the real axis (and much closer, in a Euclidean-distance sense, to the physically realizable domain space) where the issues raised by Nordhaus still exhibit themselves.

    Poles that reside close to the physically realizable domain space will dominate the results of the Monte Carlo. [*] If those poles are in the correct location (I think nobody disputes that Topt=1.6, the location of the pole in FUND 3.5, is a poor approximation), then this behavior is real. Otherwise it is an artifact.

    Ackerman appears to concede that he is performing Monte Carlo experiments that lie outside the domain space considered by Anthoff and Tol. So at least part of the criticism of Ackerman does seem valid. I’m also not sure how much of the rest of the concern is real versus manufactured: it’s pretty obvious that to qualify as an alarmist you must necessarily dismiss more optimistic outcomes.

    [*] This recognition leads to the “all poles approximation” used in engineering.
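
    A sketch of the complex-pole point above (the function g and the spread of the input are my own illustrative choices): g(x) = 1/((x − 1)² + ε²) has poles at x = 1 ± iε, so for real x there is never a literal divide-by-zero. Yet as ε shrinks, draws landing near x = 1 dominate the Monte Carlo spread just as a real-axis pole would.

```python
import random
random.seed(2)

def g(x, eps):
    # Poles at x = 1 +/- i*eps: off the real axis, so g is finite
    # for every real x, but it peaks sharply near x = 1.
    return 1.0 / ((x - 1.0) ** 2 + eps ** 2)

def sample_std(values):
    m = sum(values) / len(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

# Same normal draws, increasingly near-resonant integrand: the
# sample standard deviation explodes as the poles approach the axis.
xs = [random.gauss(0, 1) for _ in range(50_000)]
for eps in (1.0, 0.1, 0.01):
    print(eps, sample_std([g(x, eps) for x in xs]))
```
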

  28. Here’s a comment from Ackerman well before the 2012 paper.

    http://triplecrisis.com/for-whom-the-blog-tols/

    The issues with the divide by zero were discussed in comments.

    I can see why Tol thinks that he’s never been given permission to publish Ackerman’s emails.

    He asks:

    Mr Ackerman: In your email of March 17, 2011, 9:29 PM, you mention that you made more extensive changes to the model. Do I have your permission to release that email?

    There is currently no response published on that blog, nigh on 3 1/2 years later.

  29. Carrick:

    Is that a distinction that actually has any significance, assuming Tol owns the software/model?

    If your research is built around a particular work, which people are now erroneously claiming is defective, does this not cast you in a bad (and false) light?

    I think the distinction is quite important. There may be cases where a person’s reputation is so tied to a product that defaming the product would defame him (though I’m not convinced there are), but there are certainly tons of situations where that isn’t the case.

    Suppose a person publishes a hundred papers with a hundred different sets of code. Would I defame him by saying one set of code was flawed? I don’t think so. Even if I did somehow defame his code, I don’t see why that’d automatically mean I defame him. I don’t think I automatically defame Michael Mann when I say his 2008 CPS reconstruction is worthless because it depends upon the screening fallacy (amongst other things).

    I think the distinction between a person and a person’s work is important because I don’t see why criticizing one must be defaming the other. That’s especially true when more than one person is responsible for the work in question.

    Ackerman seems to be pretty open about the issues. All of the papers in question are linked to from here:

    I did link to that at the end of my post. (There should be an emoticon here to show I’m saying this in good humor, but those things are creepy.)

    Brandon, I will concede that Tol used legal pressure in the emails/physical mail between himself and SEI to try and resolve this dispute.

    However, the resolution he was seeking seems to have been arbitration and not capitulation.

    Suppose Michael Mann had demanded Mark Steyn go through an arbitration process or he’d sue. I doubt anyone would argue that wasn’t intimidation. It’s still requiring the “defendant” go through legal hurdles because he criticized the “plaintiff” (or their work).

    Imagine if John Cook made the same sort of remarks and demands of Richard Tol in response to Tol’s criticisms of the Skeptical Science consensus paper. Would that be an appropriate response? I don’t think so, and I think Tol would throw a fit.

  30. Carrick, the link in your last post amused me when I read it because of something Richard Tol said in it:

    The division by zero is a non-issue. This is done in a piece of model code that was never used by us, and therefore never properly tested.

    That’s a claim Tol seems to have rarely made and quickly dropped. As far as I can tell, it isn’t remotely close to true. The problem arises from an essential module in FUND’s calculations of the impacts of global warming on agriculture, one of the most important sectors FUND covers. I have no idea how it could be “a piece of model code that was never used” when its source is clearly listed as one of the key equations for the agriculture sector.

    But I hadn’t intended to get into technical details with this post. Even if I accepted everything Tol says about Frank Ackerman’s criticisms, I would still find his behavior inexcusable. I don’t think it’s appropriate to do things like write to a person’s new employer to say the employee has broken the law. The only purpose I can see for behavior like that is to try to get them fired. That’s not how people should handle disagreements.

    Plus, the article which triggered all this didn’t even discuss those technical issues.

  31. Brandon, I missed your link because the font shows up as about 2.5-mm in height on my monitor (it’s much smaller than the surrounding text so I just missed it). I agree the emoticon is creepy. Super creepy.

    In Mann’s case, going through (binding) arbitration would have been much healthier for all involved, and potentially much less damaging to free speech than the current “nobody will win” scenario we have. Personally, I think Steyn should have apologized, especially for the comparison with a sex offender.

    I didn’t realize how long-lived the dispute between Ackerman and Tol was until I started googling it. Politically, crafting the “request” for arbitration in the manner he did was not a successful strategy for Tol. So yeah, it was unwise and otherwise a failed idea.

    I can’t imagine SkS acceding to arbitration (esp. open arbitration). Groups that run super-secret forums aren’t going to be very interested in transparency.

    One thing we would have learned then is that Cook failed to store all of the necessary metadata needed to validate his study. Tol wouldn’t have to waste time trying to figure out how to get Cook to release data that Cook didn’t have.

    As to whatever else Cook did in relation to TCP, well, I can’t find much praiseworthy there. He raised my hackles, but for a different reason.

  32. Brandon:

    That’s a claim Tol seems to have rarely made and quickly dropped. As far as I can tell, it isn’t remotely close to true. The problem arises from an essential module in FUND’s calculations of the impacts of global warming on agriculture, one of the most important sectors FUND covers. I have no idea how it could be “a piece of model code that was never used” when its source is clearly listed as one of the key equations for the agriculture sector.

    The “piece of model code that was never used by us” I believe refers to the modifications made by Ackerman to Tol’s code.

    Ackerman actually admits in his publication there is not a problem with respect to the use of the code by Tol:

    This is not a problem in FUND’s best-guess mode; the regional values of Topt are never equal to 1.6. The closest is 1.51, and most are much farther away. In Monte Carlo mode, however, Topt is a normally distributed variable; the critical value of 1.6 is within 0.25 standard deviations of the mean for every region.

    It’s an issue for the Monte Carlo method in that it affects the uncertainty bounds. This is discussed by Ackerman, but IMO with a strong flavor of “no positive outcomes” bias.
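
    How often a Monte Carlo draw lands dangerously near Topt = 1.6 is easy to sketch. Ackerman only reports that 1.6 is within 0.25 standard deviations of the mean for every region, so the mean and standard deviation below are hypothetical numbers consistent with that bound, not FUND’s actual regional parameters:

```python
import random
random.seed(3)

# Hypothetical region: mean 1.5, sd 0.4, so the critical value 1.6
# sits 0.25 standard deviations above the mean (the bound Ackerman
# reports; the specific numbers are illustrative).
mean, sd = 1.5, 0.4
draws = [random.gauss(mean, sd) for _ in range(10_000)]

# Eq. (2)'s shared denominator, 10.24 - 6.4*Topt, vanishes at 1.6.
denoms = [abs(10.24 - 6.4 * t) for t in draws]
closest = min(denoms)
print("smallest |denominator| in 10,000 draws:", closest)
print("implied coefficient magnitude:", 1.0 / closest)
```

    With 10,000 iterations per region, near-misses of the critical value are routine rather than exotic, which is why the best-guess mode and the Monte Carlo mode behave so differently.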

  33. It would be nice to see the distribution of regional values of Topt for FUND. But this statement, “The closest is 1.51, and most are much farther away,” does rather smack of the real-axis pole being a non-problem.

    I wonder if Ackerman would release his original submission: I’d like to see if that statement got added before or after the reviewer comments.

    I should note that DICE (a competing economic model) was written by Nordhaus.

    http://www.econ.yale.edu/~nordhaus/homepage/documents/DICE_Manual_103113r2.pdf

    Possibly that explains the source of some of Nordhaus’s criticism of this model. It’s not random or unbiased that he would be critical of a package that competes with his own.

  34. Carrick, something I find interesting is the smaller font size is the same size as I use for quotes. A few people didn’t believe that at first. Apparently the quotes being in italics makes them stand out quite a bit more (at least to some people).

    in Mann’s case, going through (binding) arbitration would have been much healthier for all involved, and potentially much less damaging to free speech than the current “nobody will win” scenario we have.

    I agree, but being better than “possibly the worst solution imaginable” doesn’t mean much. (Insert creepy emoticon here.)

    One thing we would have learned then is that Cook failed to store all of the necessary metadata needed to validate his study. Tol wouldn’t have to waste time trying to figure out how to get Cook to release data that Cook didn’t have.

    What metadata do you think he didn’t store? I hadn’t heard that before. All I’ve heard anyone say is it wasn’t released.

    The “piece of model code that was never used by us” I believe refers to the modifications made by Ackerman to Tol’s code.

    Ackerman actually admits in his publication there is not a problem with respect to the use of the code by Tol:

    I think you’ve misunderstood something. The Monte Carlo wasn’t added by Frank Ackerman. It is part of the FUND model. I’ve seen references to it in several papers Richard Tol co-authored, each directing the reader to the FUND website to see details about it. I can offer links if necessary (the trick is just finding non-paywalled papers).

  35. Carrick,
    The divide by zero error is discussed in Ackerman’s paper thusly:

    4.1. Risk of division by zero

    The manner in which the optimum temperature effect is modeled in FUND 3.5 could cause division by zero for a plausible value of a Monte Carlo parameter. The equation for the optimum temperature impact, modeled as a percentage change in agricultural output, is (in slightly simplified notation):

    Impact = [-2·A·Topt / (10.24 - 6.4·Topt)]·T + [A / (10.24 - 6.4·Topt)]·T^2   (2)

    This is calculated for each time period and region. T is the average change in temperature, a global variable, and Topt is the optimum temperature for agriculture. Both A and Topt are Monte Carlo parameters, specified separately for each region.

    In Eq. (2), the denominators of both fractions would be zero if Topt = 1.6. This is not a problem in FUND’s best-guess mode; the regional values of Topt are never equal to 1.6. The closest is 1.51, and most are much farther away. In Monte Carlo mode, however, Topt is a normally distributed variable; the critical value of 1.6 is within 0.25 standard deviations of the mean for every region. This implies that it will be reasonably common to draw a value very close to 1.6, making the denominator very small and the impact very big. In such cases, the magnitude of the impact will depend primarily on how close to 1.6 the value of Topt turns out to be. Ironically, this problem could become more severe as the number of Monte Carlo iterations rises, since the likelihood of coming dangerously close to the critical value steadily increases. (In the Working Group analysis, there are 10,000 iterations, each involving selection of 16 values of Topt, one for each region.)

    The problem is generic to formulations such as (2). If X is a non-negative random variable with a probability density function f that is positive at zero (i.e., f(0) > 0), then Y = 1/X has a “fat tailed” probability of arbitrarily large values: for sufficiently large r, the probability p(Y > r) = p(X < 1/r) ≈ (1/r)·f(0). In formal mathematical terms, Y is regularly varying with tail index 1; that is, the tail of Y is decreasing at a polynomial rate of degree -1. Whether the mean of Y exists depends on the distribution of X, but in any case, the expected value E(Y^a) is infinite for a > 1. In particular, the variance of Y is infinite.

    The same problem arises, of course, for the function Y = 1/(X - c), if there is a positive probability of the value X = c. Consider a numerical example, where X has a standard normal distribution (mean 0, standard deviation 1), and c = 0.25. Using Excel, we drew repeated values of X, and calculated Y = 1/(X - 0.25). The standard deviation of Y, for sample sizes up to 40,000, is shown in Fig. 5. The standard deviation of Y quickly becomes orders of magnitude greater than the standard deviation of X, and continues to grow. We discontinued our numerical simulation when, after about 42,000 iterations, the Excel random number generator drew a value of X = 0.24999902, leading to Y greater than 1,000,000 in absolute value, and increasing the standard deviation of Y by another order of magnitude. That is exactly the problem: the larger the sample, the greater the danger of drawing values of X so close to c that Y becomes meaninglessly large (in absolute value).

    Both coefficients in Eq. (2) have structures comparable to Y in this example (after linear transformation of variables): the denominator is a normally distributed random variable, minus a constant that is within 0.25 standard deviation of the mean of the random variable. Thus the variance of each coefficient will increase without limit as the number of Monte Carlo iterations increases, and (2) will provide an increasingly unreliable estimate of agricultural impacts.
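
    Ackerman’s Excel experiment is straightforward to re-run; here is a Python sketch of it (the sample sizes match his description, the seed is mine): X ~ N(0, 1), Y = 1/(X − 0.25), with the sample standard deviation of Y ratcheting upward whenever a draw lands close to 0.25 rather than converging.

```python
import random
random.seed(4)

def sample_std_of_y(n):
    """Sample standard deviation of Y = 1/(X - 0.25) with X ~ N(0,1)."""
    ys = [1.0 / (random.gauss(0, 1) - 0.25) for _ in range(n)]
    m = sum(ys) / n
    return (sum((y - m) ** 2 for y in ys) / n) ** 0.5

# Larger samples do not stabilize the standard deviation; they raise
# the chance of a draw near 0.25 that dominates it.
for n in (1_000, 10_000, 40_000):
    print(n, sample_std_of_y(n))
```
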

    My tentative view: it’s legitimate (in fact necessary) to ask whether equation (2) is appropriate. Unfortunately, Ackerman seems to just want to discuss whether small values can appear in the denominator. But when investigating whether the Impact given by (2) is too high, I don’t think you can say it is merely by observing that, according to the equation, the absolute value of Impact is very high. Maybe that’s what can really happen. (Things like this happen in physical systems. Denominators can get small. The response to numericists who might have trouble dealing with that in Monte Carlo is: write your code to deal with it.)

    But Ackerman doesn’t seem to be focused merely on Impact being very high; after all, it will become very high when T gets large in his substitute equation too. He focuses on the variance of Impact being infinite in the Monte Carlo simulation and intimates this is somehow “wrong”. The difficulty is that if (2) in FUND is appropriate, then the variance of Impact is infinite, and a model for Impact should return infinite (or near-infinite) variance. So what’s happening is that as Ackerman adds more samples to his Monte Carlo model he is getting closer to the right answer, not the wrong one. That’s confused.

    But the next issue is this: Ackerman concludes “(2) will provide
    an increasingly unreliable estimate of agricultural impacts.”

    But this is wrong, or if not wrong, at least muddled. Although it is true that we often think of the variance as a measure of the uncertainty, it’s really just a substitute. It’s a decent one when the pdf is not fat-tailed! It’s especially convenient when the pdf looks like a normal distribution, which permits us to find 95% confidence intervals by looking up published distribution values and say things like the 95% confidence interval is at 1.96 standard deviations.

    But when the pdf is fat tailed with infinite variance, you can still get The fact that the variance in Y is infinite you don’t compute the uncertainties that way. You just find the shape of the pdf and get the cutoffs. The fact that the variance of Y is infinite doesn’t prevent one from finding the value of Y such that p(y<Y)=97.5%. It doesn't interfere with finding any confidence limit you want. You can find them, and there is little difficulty in doing so.
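    The same toy ratio (again a stand-in assumption, not FUND itself) also illustrates the quantile point: the empirical cutoffs for a 95% interval are reproducible across independent runs, even though the sample standard deviation is not.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    mu, sigma = 1.0, 1.0
    c = mu - 0.25 * sigma

    # Two independent Monte Carlo runs. The 2.5% and 97.5% empirical
    # quantiles agree closely between runs, because finding the value of Y
    # with p(y < Y) = 97.5% needs only the shape of the pdf, not its
    # (infinite) variance.
    for run in (1, 2):
        x = rng.normal(mu, sigma, 1_000_000)
        y = 1.0 / (x - c)
        lo, hi = np.percentile(y, [2.5, 97.5])
        print(f"run {run}: 95% interval ({lo:.2f}, {hi:.2f}), sample sd {np.std(y):,.1f}")
    ```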

    So, the issue here is the hypothetical ‘divide by zero’ in the denominator (which if it really occurred would crash a program and so be noticed) is being used to create an argument about getting ‘unreliable estimates’ of agricultural impacts. And that conclusion is wrong. The divide-by-zero potential in the denominator does not cause any such “problem”. As long as the mean exists, and you can compute the shape of the pdf, you can find the mean reliably and find uncertainty intervals.

    There is a “divide by zero” potential in the model equation, but it presents no “problem” in FUND results or in interpreting them. Or at least, if it presents a problem, Ackerman’s explanation doesn’t identify that problem.

    As for the issue of the substitute equation (3) Ackerman later proposed producing different values: perhaps. But all that means is that if equation (3) is more realistic than (2) on its face, then (3) should be used. But this has nothing to do with any “divide by zero” “problem”.

  36. Re. copyright infringement:
    This case is shut. Anthoff and I wrote the code. Ackerman and Munitz downloaded our code, changed it, and presented the new code as ours. They later acknowledged that they never should have done that.

  37. Nordhaus is an interesting story. He was giving a keynote address and wanted to warn the audience about numerical issues in large models. Instead of selecting the example of time-traveling carbon in his own DICE model, he parroted Ackerman on FUND. Nordhaus backtracked and apologized, and then wrote the piece copied above which rightly focuses on the stochastic properties but unfortunately overlooks tail correlation.

  38. http://www.law.cornell.edu/wex/libel

    recovery of presumed or punitive damages is not permitted without a showing of malice; that is, unless liability is based on a showing of knowledge of falsity or reckless disregard for the truth.

    In Dun & Bradstreet, Inc. v. Greenmoss Builders, Inc., the Court held that in actions for libel involving private individuals and matters of purely private concern, presumed and punitive damages may be awarded on a lesser showing than actual malice.

  39. lucia, I don’t think this is a fair response:

    Maybe that’s what can really happen. (Things like this happen in physical systems. Denominators can get small. The response to numericists who might have trouble dealing with that in Monte Carlo is: write your code to deal with that.)

    But Ackerman doesn’t seem to be focused merely on Impact being very high; after all, it will become very high when T gets large in his substitute equation. He focuses on the variance of Impact being infinite in the Monte Carlo simulation and intimates this is somehow “wrong”. The difficulty is that if (2) in FUND is appropriate, then the variance of Impact is infinite, and a model for Impact should return infinite (or near-infinite) variance. So what’s happening is that as Ackerman adds more samples to his Monte Carlo model, he is getting closer to the right answer, not the wrong one. So, that’s confused.

    A few paragraphs after your quote ends, Frank Ackerman says:

    A fix for the optimum temperature equation bug is planned for the next version of FUND

    That quote is citing one of the authors of the FUND model. I don’t think it’s fair to criticize Ackerman (or his writing) as confused for disregarding the possibility this could be a legitimate equation when the authors of the model had already agreed it wasn’t.

    Similarly:

    But this is wrong, or if not wrong, at least muddled. Although it is true that we often think of the variance as a measure of the uncertainty, it’s really just a substitute. It’s a decent one when the pdf is not fat-tailed! It’s especially convenient when the pdf looks like a normal distribution, which permits us to find 95% confidence intervals by looking up published distribution values and say things like the 95% confidence interval is at 1.96 standard deviations.

    But when the pdf is fat tailed with infinite variance, you can still get The fact that the variance in Y is infinite you don’t compute the uncertainties that way.

    The last sentence here is muddled, I believe due to an editing error, but the more important issue is you dismiss Ackerman’s claimed issue by saying there are ways of avoiding the issue. You are right that it’s possible, but that doesn’t mean it was done here. The fact that results from the FUND model could avoid this problem doesn’t mean they do. If the Monte Carlo results were used to calculate uncertainty levels, and if those calculations used variance to do so, Ackerman’s criticism is correct.

    It seems silly to call a person’s conclusions “wrong” because they don’t address a methodology that wasn’t used. I can see saying it is poorly worded, lacks the right caveats, is confusing/misleading, or any number of other things. I just don’t see how we can call Ackerman wrong for not addressing the possibility FUND did something it didn’t do. Or how we can criticize him for not considering the possibility a bug wasn’t a bug even though the creators of the model said it was a bug.

  40. By the way, Richard Tol’s comment about copyright infringement is hilarious:

    Re. copyright infringement:
    This case is shut. Anthoff and I wrote the code. Ackerman and Munitz downloaded our code, changed it, and presented the new code as ours. They later acknowledged that they never should have done that.

    Downloading and changing the FUND code is unquestionably okay. That was the purpose of sharing the code. That means the only unintended aspect of this is Frank Ackerman (supposedly) “presented the new code” as Richard Tol’s. In other words, they infringed upon Tol’s copyright by assigning him credit for intellectual property.

    Be careful everybody. If you ever modify somebody else’s code, make sure you take credit for all of your changes. If you don’t, you might be threatened with a copyright lawsuit!

    (And yes, I mean threatened. I don’t just mean somebody will say you infringed upon their copyright. I might still be a little annoyed about this.)

  41. Brandon

    Citing one of the authors of the FUND model. I don’t think it’s fair to criticize Ackerman (or his writing) as confused for disregarding the possibility this could be a legitimate equation when the authors of the model had already agreed it wasn’t.

    Why not? The evidence of confusion exists even if the equation is illegitimate!

    but the more important issue is you dismiss Ackerman’s claimed issue by saying there are ways of avoiding the issue.

    No. I am dismissing the specific issue Ackerman claims exists by saying it’s not an issue at all. There is no need to avoid the non-issue. Since there is no need to ‘avoid’ the non-issue, there will be no evidence that anything was done to avoid it.

    if those calculations used variance to do so, Ackerman’s criticism is correct.

    That’s a big “if”. Note that Ackerman does not say the uncertainty estimates in FUND are or were computed based on the variance. He just says an infinite variance exists and then… [a miracle occurs] … jumps to a conclusion about uncertainty estimates, saying absolutely nothing about how variances relate to uncertainty intervals.

    One of the advantages of Monte Carlo is that one generally need not use variance to compute uncertainty intervals; moreover, it’s easier to just obtain them directly from the Monte Carlo. There’s no reason to assume they aren’t done the easiest, most natural way in FUND, particularly since Ackerman says nothing about this step in his criticism.

    It seems silly to call a person’s conclusions “wrong” because they don’t address a methodology that wasn’t used.

    I don’t know what this is in regard to. What methodology do you think they (Ackerman?) don’t address, and that wasn’t used by whom (Tol?)? My contention is that when criticizing Tol, Ackerman doesn’t even fully explain the methodology Tol used. Specifically: he leaves a magic step between the ‘discovery’ that the model (correctly) determines variances are infinite and the conclusion that this somehow leads to incorrect uncertainty intervals. But infinite variances don’t necessarily lead to incorrect uncertainty intervals, and Ackerman doesn’t explain why they would in Tol’s methodology.

  42. Brandon, thanks for the clarification.

    By metadata, I was referring, for example, to the time-stamp information, or to the order in which abstracts were presented (including those that weren’t rated). (Depending on what you’re doing with it, this is either data or metadata.)

    Lucia:

    My contention is that when criticizing Tol, Ackerman doesn’t even fully explain the methodology Tol used.

    Nor does he adequately explain his modifications to the methodology so that people can delineate between his undocumented changes in the code and what was originally done.

    Richard:

    Instead of selecting the example of time-traveling carbon in his own DICE model, he parroted Ackerman on FUND.

    Did he parrot Ackerman, or spoon feed him? I was guessing the latter, but I’ve gotten a bit cynical in my old age.

  43. lucia:

    Why not? The evidence of confusion exists even if the equation is illegitimate!

    I don’t see any. The only evidence of confusion you’ve offered on the point is that his writing operated under the assumption the equation did not work as intended. He wrote that after speaking to the authors and confirming it wasn’t working as intended.

    You say, “The difficulty is that if (2) in FUND is appropriate,” but until today, nobody had suggested the equation was appropriate. Everyone agreed it wasn’t. Why do you think it is confused to take as granted a point everyone agrees upon? (Serious question.)

    That’s a big “if”. Note that Ackerman does not say the uncertainty estimates in FUND are or were computed based on the variance. He just says an infinite variance exists and then… [a miracle occurs] … jumps to a conclusion about uncertainty estimates, saying absolutely nothing about how variances relate to uncertainty intervals.

    He may have assumed people familiar with the subject already knew, and anyone who wasn’t would assume it was stipulated by all parties. That would explain why Richard Tol never said a word about the issue. If he didn’t use standard deviations to measure uncertainty, people could reasonably have expected him to say so. Instead, we can find papers by Tol that did things like display figure after figure examining standard deviations and saying:

    Figure 3 shows the standard deviation of climate change impacts, normalised with GDP, as a function of sample size. For sample sizes up to 1000, there may be a small upward trend in the standard deviation. However, it appears that it just takes a lot of observations to estimate the standard deviation with some reliability. For sample sizes between 8000 and 10,000, the standard deviation is constant.

    The conclusion of this section is not surprising. Although the uncertainties in FUND are large, they are finite. That is because the model was constructed that way.

    Given Tol explicitly equates his model uncertainties with standard deviations, I think it is reasonable for Ackerman to discuss issues with standard deviations under that representation. Interestingly, that same section also dismisses the idea of infinite variance in the FUND model, undercutting the previously discussed equation (2).

    The points you’ve raised against Ackerman’s claims all revolve around him not explaining things nobody disputed. I get that can cause problems for clarity, but if everyone involved in a disagreement agrees on a point, I don’t think a person should receive much criticism for not delving into that point.

  44. Carrick, we don’t have anything to show timestamps weren’t kept. It’s true only datestamps were included in the data file I came across, but there was more data in the database than was output to the data files.

    You’re right about the abstract order though. They didn’t keep track of who was presented what abstracts. Timestamps (if they exist) would help a bit because you could see which sets of ratings were submitted together, with an order corresponding to the order seen by the rater. It wouldn’t tell us anything about the abstracts left unrated though.

  45. Brandon:

    He may have assumed people familiar with the subject already knew, and anyone who wasn’t would assume it was stipulated by all parties

    That’s not the guidance that you’re given when writing papers for publication. You are to spell out the assumptions so that any intelligent but not necessarily knowledgeable reader can follow the gist of the paper.

    So even if this was the reason key details were omitted, it still makes it a poorly written paper.

    Carrick, to be fair, Frank Ackerman did say what his uncertainty measurement was, cite the FUND model, and use the same measurement as used for the FUND model. He wasn’t as clear as he could have been (and I suspect he may not have even considered this issue), but it is a relatively minor error. It would have only taken the addition of a single sentence to fix things.

    Brandon: Presenting your own work as somebody else’s is indeed an infringement of copyright in most jurisdictions, although some recognize the separate trespass of defacement. This law stops me from writing a book full of nonsense and publishing it under the name Brandon Shollenberger so that you would be ridiculed.

  48. Richard, I don’t see how “[p]resenting your own work as somebody else’s” is a copyright infringement.

    Possibly defamation of character, misrepresentation (assuming you didn’t self publish) and outright fraud (if you stood to materially gain from the misrepresentation).

    If I took a work of yours, modified it, and claimed it was mine, that would be copyright infringement.

    If I took a program of yours, modified it, and distributed it as an unmodified version of your program… that could be a violation of the licensing agreement (that’s a breach of contract, so it falls in a different area).

    People violating GPL-protected open-source code is the closest I could come up with.

  49. Carrick, it isn’t a copyright infringement. The nature of copyright is to protect people’s rights to their own work. It cannot protect them from work owned by other people.

    That’s why I scoffed at it from the beginning. Tol’s position isn’t just wrong; it’s contrary to the nature of the laws in question.

  50. Wow, I was out of town and off-line last week and had no idea this debate was going on.

    I have never “admitted” libeling anyone, because I have not libeled anyone. I have also not had to say that gremlins invaded my article so the wrong data was published, or to correct my data and then correct the correction, unlike some participants in this debate (see http://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.28.2.221).

    I have not infringed on anyone’s copyright – we downloaded the FUND software from a website that made it available for downloading, and indeed encouraged other researchers to download and use it. With the help of Tol’s coauthor David Anthoff (whom we acknowledged in our article), we ran FUND, reproducing the published results from it.

    We then made a series of very specific changes in the model, described in our article, to test its sensitivity to some small, simple changes in structure. Two of those changes were ad hoc, one-line code changes to replace the line of code that contained the potential divide-by-zero risk. We acknowledged Anthoff’s statement that those changes were not the ideal way to revise the model; we intended them simply as sensitivity analyses, studying the sensitivity of the model results to that specific line of code. As we reported, the model seemed highly sensitive to changes in that single line of code.

    The debate has, perhaps unexpectedly, left me feeling encouraged about the norms of academic discourse, which are shared by almost everyone involved. I would like to mention David Anthoff, someone with whom I have a number of important substantive differences; he has approached these differences in a reasonable manner, making suggestions that I have adopted about how to present my comments to avoid further confusion and dispute. The comments in our article, thanking him (and absolving him of responsibility for our findings) are well deserved. Needless to say, he has never accused me of libel or threatened to sue me. His coauthors could profitably learn from his example.

    More broadly, I have also been heartened by the support of the economics community for the legitimacy of our article about FUND, and the need to handle disagreements through normal scholarly debate and publication. William Nordhaus, Michael Hanemann, and dozens of other economists have signed a statement to that effect. Our article, always available for debate and disagreement of course, has taken its place in the peer-reviewed literature, beginning to be cited by others (including a discussion in the DICE 2013 Users Manual). Links to that and more on the debate can be found at http://frankackerman.com/tol-controversy/.

  51. Hey Frank Ackerman. Thanks for commenting. I considered trying to contact you to let you know I had written about you, but I figured you are probably tired of dealing with Richard Tol. Your comments about libel/copyright seem obviously true to me.

    I’m happy to hear this stuff hasn’t got you down. I got the impression from your writing you and David Anthoff had reasonable and useful exchanges, but I couldn’t tell how much of an outlier Tol’s behavior was. I’m happy to hear his abuses don’t represent a common view. This is especially true since he’s still at it, such as here:

    Which is expressed in more detail here. I know at least one other person has picked up on the same smear. I could understand how things like that could be disheartening.

    Anyway, the sort of sensitivity testing you did is something I wish we’d see more of. Whether or not a change is “right” isn’t that important. What’s important is understanding the effect of the change, and why the change happens. It’d be worth understanding why a large change in results happens with a small change in code, even if the original code was perfectly correct. The point of science is to learn, not just find the “right” answer.

    Out of curiosity, did you ever publish the code after you made changes? Leaving aside the points you guys disagree about, the changes to the code you made to be able to test the sensitivity of FUND’s results to different sectors seem like a useful addition to me. I think people using FUND should look at how the results depend on assumptions about agriculture, tourism or whatever.

    By the way, the link you provide is a great resource. I included it in my post because it was quite helpful to me. Maybe I should have given it more prominence than an offhand mention in the last sentence’s small font.

  52. Thanks for comments, Brandon. On the ongoing abuse of my name, see also my short comment on Tol’s own blog, posted a little while ago (http://richardtol.blogspot.com/2014/06/a-new-contribution-to-consensus-debate.html).

    Have you ever looked at the FUND code? Tens of thousands of lines of nearly undocumented C++ (my coauthor is fluent in that particular dialect). Our code is no secret, we’ll provide it to anyone who wants it. But we haven’t gotten requests for it so far. We have several versions, one for each of our various runs, each including tens of thousands of lines of code, differing from the original in only a single-digit number of locations – not everyone’s idea of a rollicking good read. But if you want it, send me your contact information and we’ll get it to you.

    Sadly, I don’t think your comment will have any effect. Getting rid of juvenile name-calling seems impossible once it starts.

    On the code, never mind. I looked at the FUND code when I tried to get it to run, and it’s already a pain. I can’t imagine using many versions of it. I had assumed you produced a single code base with switches you could turn on/off for testing purposes. That’s the sort of thing I’d build into a model if I were building one. It’s more work to set up, but it is helpful in the long run.

    Anyway, if you created a different set of code for each parameter you wanted to remove (for testing), that’s easy to replicate. There’s no need to share code for it. I hate working with the C language, but even I can make a change that small :P
