An Obvious Lie, Made More Obvious

Sometimes you come across something and wonder how you ever missed it. Hindsight makes things seem so obvious. Today’s post is about an example of this.

A couple of years ago, there was controversy over a highly publicized paper by Stephan Lewandowsky – NASA Faked the Moon Landing—Therefore, (Climate) Science Is a Hoax: An Anatomy of the Motivated Rejection of Science – which claimed skeptics are conspiracy theorists. The paper said its survey had been linked to on eight blogs, one of which was Skeptical Science. Nobody could find any evidence Skeptical Science had linked to the survey (though its proprietor did post a link on Twitter). That was problematic because the supplementary material of Lewandowsky’s paper relied upon an (unpublished) analysis of Skeptical Science traffic to claim the survey was seen by a sizable number of skeptics, saying:

Prevalence of “skeptics” among blog visitors

All of the blogs that carried the link to the survey broadly endorsed the scientific consensus on climate change (see Table S1). As evidenced by the comment streams, however, their readership was broad and encompassed a wide range of views on climate change. To illustrate, a content analysis of 1067 comments from unique visitors to http://www.skepticalscience.com, conducted by the proprietor of the blog, revealed that around 20% (N = 222) held clearly “skeptical” views, with the remainder (N = 845) endorsing the scientific consensus. At the time the research was conducted (September 2010), http://www.skepticalscience.com received 390,000 monthly visits. Extrapolating from the content analysis of the comments, this translates into up to 78,000 visits from “skeptics” at the time when the survey was open (although it cannot be ascertained how many of the visitors actually saw the link.)

For comparison, a survey of the U.S. public in June 2010 pegged the proportion of “skeptics” in the population at 18% (Leiserowitz, Maibach, Roser-Renouf, & Smith, 2011). Comparable surveys in other countries (e.g., Australia; Leviston & Walker, 2010) yielded similar estimates for the same time period. The proportion of “skeptics” who comment at http://www.skepticalscience.com is thus roughly commensurate with their proportion in the population at large.

If a link to the survey was never posted at Skeptical Science, this unpublished, unverifiable analysis of Skeptical Science visitors would be completely irrelevant. Without it, Lewandowsky and co-authors would have nothing on which to argue any meaningful number of skeptics had been exposed to their survey.

If skeptics hadn’t taken the survey, the survey obviously couldn’t show skeptics believe in conspiracy theories. To say skeptics believe in conspiracy theories, the survey needed skeptics claiming to believe in conspiracy theories. That wasn’t the case. Practically nobody taking the survey claimed both to be skeptical of global warming and to believe in conspiracy theories.

Instead, what happened is a lot of people who accept global warming said they don’t believe in conspiracy theories. Lewandowsky took this correlation between accepting global warming and not believing in conspiracy theories as proof people who don’t accept global warming do believe in conspiracy theories. That sort of “correlation” is meaningless, as I’ve shown before by coming up with graphs like this one:

[Figure 4]

That graph finds the exact same kind of fake “correlation” between being skeptical of global warming and believing in conspiracy theories as is found in this graph:

[Figure 2]

This second graph finds a “correlation” between believing global warming is a serious threat and thinking pedophilia is okay. Both of these “correlations” are fake, arising entirely because you barely ask anyone in the group you’re interested in what they believe. Instead, you ask many people from some other group what they believe and just assume the group you’re interested in believes the opposite.
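
To see how lopsided sampling manufactures a “correlation,” here is a minimal Python sketch with made-up numbers. None of this is Lewandowsky’s actual data; the group sizes and scores are invented purely for illustration:

    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(42)

    # Invented data: 1,090 respondents who accept global warming (3-4 on
    # a 1-4 scale). Among them, the strongest accepters also most
    # strongly reject conspiracy theories.
    accept_believers = rng.choice([3, 4], size=1090)
    conspiracy_believers = np.where(accept_believers == 4,
                                    rng.choice([1.0, 1.5], size=1090),
                                    rng.choice([1.5, 2.0], size=1090))

    # Only 10 skeptics, and none of them endorses conspiracies either.
    accept_skeptics = rng.choice([1, 2], size=10)
    conspiracy_skeptics = rng.choice([1.0, 1.5], size=10)

    x = np.concatenate([accept_believers, accept_skeptics])
    y = np.concatenate([conspiracy_believers, conspiracy_skeptics])

    fit = linregress(x, y)
    print(f"r = {fit.rvalue:.2f}, p = {fit.pvalue:.2g}")
    # Extrapolate the fitted line down to a strong skeptic (accept = 1):
    print(f"predicted conspiracy score at accept=1: "
          f"{fit.intercept + fit.slope * 1:.2f}")

The negative slope comes entirely from the 1,090 believers. Project it down to accept = 1 and it “predicts” skeptics believe in conspiracy theories, even though the ten actual skeptics in this made-up sample all rejected them.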

This actually violates the mathematical assumptions of the correlation testing the authors use. That testing requires a “normal” distribution to be valid. In other words, your data has to be a reasonably balanced data set; otherwise you get spurious results. If the authors didn’t get many skeptical responses (and a sizable number of people who believe in the various conspiracy theories), the correlation tests they used would be completely invalid.

The supplementary material for the paper would have us believe there were no data problems. We can check the data to see that’s false, but we can also examine the idea the survey was ever linked to at Skeptical Science. Others have shown that claim is almost unquestionably false by looking at archived versions of the Skeptical Science website (with the Wayback Machine). Those archived versions make it nearly impossible to believe a post was published on the site then deleted, as Stephan Lewandowsky has argued and John Cook (proprietor of Skeptical Science) has specifically claimed:

Skeptical Science did link to the Lewandowsky survey back in 2011 but now when I search the archives for the link, it’s no longer there so the link must’ve been taken down once the survey was over.

This overwhelming evidence, combined with ongoing correspondence with Cook and Lewandowsky, led people to accuse them of intentionally lying about this issue. It’s a pretty compelling case.
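
Anyone can repeat the archive check for themselves. Here is a rough Python sketch using the Wayback Machine’s public CDX API; the date range is my choice, bracketing the period in question:

    import json
    import urllib.request

    # Ask the Wayback Machine for captures of Skeptical Science from the
    # weeks around when the survey link was supposedly posted.
    url = ("http://web.archive.org/cdx/search/cdx"
           "?url=skepticalscience.com&from=20100820&to=20100910&output=json")

    with urllib.request.urlopen(url) as resp:
        rows = json.load(resp)

    # The first row is a header; every other row describes one capture.
    header, captures = rows[0], rows[1:]
    for capture in captures:
        record = dict(zip(header, capture))
        print(record["timestamp"], record["original"])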

It can be made more compelling though. Skeptical Science assigns every post on its website a number. That number goes up by one each time a new post is made. To find old posts, you can use a URL like:

http://skepticalscience.com/news.php?n=330

That URL takes you to a post titled, “Arctic Sea Ice: Why Do Skeptics Think in Only Two Dimensions?” That post was written on August 23, 2010. Change the number 330 to 331, and you get taken to a post titled, “Station drop-off: How many thermometers do you need to take a temperature?” That one was written on August 24, 2010.

According to Stephan Lewandowsky and John Cook, Skeptical Science posted a link to the survey on August 28, 2010. The Wayback Machine provides a convenient list of posts published around that time. The earliest post on that list is, “Station drop-off: How many thermometers do you need to take a temperature?” That was post 331. Here are the posts uploaded after it:

332: “Arctic sea ice… take 2” – August 25, 2010
333: “Climate Models: Learning From History Rather Than Repeating It” – August 26, 2010
334: “Can humans affect global climate?” – August 26, 2010
335: “Comparing volcanic CO2 to human CO2” – August 27, 2010
336: “Ocean acidification threatens entire marine food chains” – August 28, 2010
337: “Why we can trust the surface temperature record” – August 28, 2010
338: “Human CO2: Peddling Myths About The Carbon Cycle” – August 29, 2010
339: “Sea level rise: the broader picture” – August 30, 2010
340: “Carbon dioxide equivalents” – September 1, 2010

As this list shows, there are no deleted posts in this time period. If there were, the link with the deleted post’s number would be broken. There are no broken links. That means no post was deleted. That gives conclusive proof Skeptical Science did not create a post to direct people to the survey for Lewandowsky’s paper.
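
This check is easy to automate. Here is a rough Python sketch over the ID range discussed above; note a deleted post might serve an error or an empty page, so in practice you would also want to look at the page body:

    import urllib.request
    import urllib.error

    # Walk the sequential post IDs around the date in question and flag
    # any that come back dead, which is what a deleted post would leave.
    BASE = "http://skepticalscience.com/news.php?n={}"

    for post_id in range(330, 341):
        try:
            with urllib.request.urlopen(BASE.format(post_id), timeout=10) as resp:
                status = resp.status
        except urllib.error.HTTPError as err:
            status = err.code
        print(post_id, "OK" if status == 200 else f"dead ({status})")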

It’s worth noting there is a post in the Wayback Machine’s snapshot not present in that list. It’s titled, “Hansen etal hit a Climate Home Run — in 1981.” That post doesn’t show up in our list because it is post number 328. The reason it shows up out of order is likely that Skeptical Science sometimes uploads posts without formally publishing them, leaving time to review what they say. The number is assigned when the post is uploaded, not when it formally goes live.

17 comments

  1. MikeN, I don’t see why we’d need to give any serious consideration to that possibility. If the number assigned to a post changed, URLs directed to it would do worse than break – they’d go to the wrong page.

    But if you do want confirmation, just use 3 in the URL. You’ll find the link goes to a dead page. That’s what happens when a post gets deleted. You could probably find a number of other examples.

    For what it’s worth, the number assigned to each post is the number assigned to it in the database. There’s no way someone would delete a post then modify a bunch of database entries, unless they were specifically trying to cover up the existence of a post.

  2. For thoroughness, the next two questions to answer are:
    1. Was the survey link a completely separate post or just appended to another post?
    2. Is the August 28, 2010 date accurate or misremembered?

  3. Gary, the second question is a non-issue to me as it is easy to look through any number of URLs. I only posted ten or so, but I looked through far more.

    Your first question is one where we actually have to consider different types of evidence though. As far as I’ve seen, there were no relevant topics onto which a note for the survey could have been appended. Beyond that, there are no topics with any comments discussing the survey. When Tamino made a post for the survey, people taking it talked about it. It’s highly likely the same would have happened at Skeptical Science.

    Of course, it is always possible Skeptical Science intentionally covered up evidence of them posting a link to the survey. They might have appended a note somewhere, then deleted the note and any comments discussing it. They could have then changed the comment ID numbers to cover up the fact they had deleted those comments. And if necessary, they could have even coded their servers to not display any of this to the Wayback Machine crawler when it visited.

    I think that’s a completely ridiculous scenario though. Why would they put that much effort into hiding the fact they posted something they openly state they’ve posted? That’s a lot of effort to make themselves seem like liars.

  4. A good find. There was never really much doubt that it was never posted, but this makes it much clearer.

    By the way Brandon, with regard to Lewandowsky’s ‘meaningless correlation error’ that you illustrate here, have you seen this?

    https://theconversation.com/are-you-a-poor-logician-logically-you-might-never-know-33355

    An article by Lewandowsky about people who are poor logicians but don’t realise it
    LOL 🙂 🙂 ROTFL
    I have made a comment on the point that you and several others have pointed out.

  5. I was actually sent an e-mail directing me to that article. It’s really just more of the same. The only thing new to me in the article was a link to a paper by Stephan Lewandowsky I don’t remember having seen before (open access version here). I couldn’t find the data for that paper, and I doubt Lewandowsky would send me it, but it uses the same SEM his Moon Landing paper used. Given he did nothing to establish his data has a distribution SEM can work on (and I doubt his data is univariate, much less multivariate, normally distributed), it looks like just another paper making the same mistake.

    The whole thing is funny, in a darkly humorous way. Whenever you apply a form of analysis to data, you need to establish the data fits the assumptions of your analysis. In basic correlation testing, that means your data needs to have a normal distribution. If it doesn’t, it doesn’t fit the assumptions of the analysis being performed, meaning the math of the analysis no longer works.

    That’s incredibly basic. I don’t know how anyone could learn how to use SEM (which basically builds upon that sort of basic correlation testing) without understanding any of the basic checks you need to do to make sure your approach is valid.
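
    To show what such a check looks like in practice, here’s a minimal sketch with made-up, heavily skewed data (the numbers are invented; the point is only that the assumption check comes before the test):

        import numpy as np
        from scipy.stats import shapiro, pearsonr

        rng = np.random.default_rng(1)

        # Made-up survey-style data where nearly everyone clusters at
        # one end of the scale.
        x = rng.choice([4, 4, 4, 4, 3], size=200).astype(float)
        y = rng.choice([1, 1, 1, 1, 2], size=200).astype(float)

        # Shapiro-Wilk tests the normality assumption before leaning on
        # pearsonr's p-value; a tiny p-value here means decidedly
        # non-normal data.
        print("x normality p-value:", shapiro(x).pvalue)
        print("y normality p-value:", shapiro(y).pvalue)

        r, p = pearsonr(x, y)
        print(f"r = {r:.3f}, p = {p:.3g}")  # this p-value assumes normality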

  6. Another thing. You would have thought that if hundreds, or thousands even, of ‘sceptics’ had either taken part in the survey at SkS, or visited the site when the survey was posted, someone, somewhere, would remember having seen it. Yet no-one, that I’m aware of, has come forward and admitted having done so. Are we to believe that there’s been a conspiracy of silence amongst all these people?

  7. The Pearson correlation coefficient does not assume normality. It assumes linearity.

    The Pearson correlation coefficient is a sufficient statistic of the linear dependence between two variables if their distributions are normal.

    Some of the tests for the significance of the Pearson correlation coefficient assume normality. Much of our intuition of the interpretation of the Pearson correlation coefficient assumes normality. But the Pearson correlation coefficient itself does not assume normality.

  8. Richard S.J. Tol, I don’t understand how you can say these two things one after the other:

    The Pearson correlation coefficient does not assume normality. It assumes linearity.

    The Pearson correlation coefficient is a sufficient statistic of the linear dependence between two variables if their distributions are normal.

    The inverse of the latter statement is, “The Pearson correlation coefficient is not a sufficient statistic” if the distributions are non-normal. If it is not a sufficient statistic without normal distributions, it is perfectly reasonable to say it assumes normal distributions.

    The claim:

    Some of the tests for the significance of the Pearson correlation coefficient assume normality.

    Is misleading as there is no possible significance test for the Pearson correlation coefficient which works for non-normal distributions. We can come up with tests which will work for certain forms of non-normality, but what tests may work will depend upon what form that non-normality takes. So while this is a common claim:

    But the Pearson correlation coefficient itself does not assume normality.

    It is only true in the most technical of senses. That is, the calculation itself does not assume normality, but the interpretation of the results does. That means while the correlation calculation itself may not assume normality, the use of such to produce results as published by Stephan Lewandowsky does.

    On a personal note, I have no problem with people raising technical minutiae like this, but I wish people who do would give the context for their remarks. It’s easy to preface a comment with something like, “Technically, that’s not true,” and it lets people know the point being raised doesn’t change the idea being expressed.

  9. I want to make the point of my previous comment in a simpler manner.

    Richard S.J. Tol is correct to say we can calculate the Pearson correlation coefficient without the data having a normal distribution. He is right in the same sense we can fit a straight line to data which is clearly in a parabolic shape. It can be done. It just won’t produce meaningful results.

    Suppose you fit a straight line to a parabola. You can accurately say, “I drew a line on my graph,” but if anyone asks what the line means, you’re probably just going to stand there with a dumb look on your face.
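
    To put numbers on that, a quick sketch:

        import numpy as np
        from scipy.stats import linregress

        # Data in a perfect parabolic shape: y depends entirely on x,
        # just not linearly.
        x = np.linspace(-3, 3, 61)
        y = x ** 2

        fit = linregress(x, y)
        print(f"slope = {fit.slope:.3f}, r = {fit.rvalue:.3f}")
        # Both come out at essentially zero. Nothing stopped us from
        # drawing the line; it just tells us nothing about the data.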

  10. Probably right about the renumbering. However, I had something similar happen with my camera: a whole month of videos disappeared, but the numbers are all sequential, with no evidence of deletion. I’m still not sure what happened.

  11. There’s a huge difference here. In your case, you’re describing a change to “tables” only accessed by whatever is in control of the changes. That makes accounting for the changes a tractable problem.

    That’s not true once you start using public URLs. There is no practical way Skeptical Science could replace the numbers in all links on their site to account for a change if they renumbered posts, and there is absolutely no way it could replace those numbers in links posted on other web sites.

    As for your camera, did you try having someone restore the videos? It’s usually pretty easy to recover deleted files on cameras and phones if you know what you’re doing and you do it promptly.

  12. Brandon: The Pearson correlation coefficient has an exact interpretation for the linear dependence between two normally distributed variables. It has an approximate interpretation if either of those conditions is not met.

  13. I’ve tried a number of recovery programs; nothing can find any evidence of deletion. Usually there are gaps in the numbering, just as with the links. Perhaps this camera does renumber as you suggest.

  14. DaveS, I just fished your comment out of the spam folder. I don’t know why it landed there, but I’m sorry your comment didn’t appear sooner. I rarely have to fish comments out of the spam folder so sometimes I don’t get around to checking the folder for a week or longer. I might even completely miss some comments.

    Anyway, I agree completely, save that there was no way thousands of skeptics could have taken the survey. There weren’t even two thousand total respondents!

  15. Brandon,

    This post is very important because it clearly and concisely documents the lies by Cook and Lewandowsky. Unfortunately, warmists are quite often able to hide their dishonesty behind obfuscation. This post, in a simple manner, documents unambiguous dishonesty.

    JD
