Communication with Michael Wood concerning his Dead and Alive paper

A while back, I e-mailed Michael Wood about his paper Dead and Alive: Beliefs in Contradictory Conspiracy Theories.  He has now indicated he will no longer respond to me.  As no further progress can be made in our exchanges, and I am quite dissatisfied with them, I am willing to share them.

The initial communication was over a simple data request, and his response was cordial and helpful.  I have no complaints with how that was handled.  The problems began with an e-mail I sent, which said:

Dear Michael,

I’m afraid my examination of your data has turned up a serious problem.  Your paper argues people endorse contradictory conspiracy theories, and it uses correlation coefficients for support.  However, these correlation coefficients are calculated over groups that both reject and endorse conspiracy theories.  These are the possible pairings for any two conspiracy theories:

Endorse – Endorse
Endorse – Reject
Reject – Endorse
Reject – Reject

This could be represented as:

+ +
+ -
- +
- -

A positive correlation happens when signs are identical.  That covers the first pairing where both conspiracy theories are endorsed.  However, it also covers the fourth pairing where both conspiracy theories are rejected.

The effect of this is that responses which reject both of a pair of contradictory conspiracy theories will be treated, by your approach, as evidence people believe contradictory conspiracy theories.  That’s nonsensical.  If a person does not believe in any conspiracy theories, they obviously cannot believe in contradictory conspiracy theories.

Now then, this does not necessarily mean your conclusions are wrong.  It merely means your results could stem from a possibility you had not accounted for.  To examine whether or not they did, it’d be necessary to examine where the correlations originate.  I took the liberty of doing so for Study 1.

Attached to this e-mail is a zip file including several text documents.  One shows my replication of the results in your Table 1 (with minor differences as I filtered out two entries which had NULL values for simplicity).  There are three more, one for each pair of contradictory conspiracy theories highlighted in your paper.  These are of primary importance.

Each document contains a contingency table of responses for the pair of conspiracy theories.  The table also lists how much each pair of responses contributes to the calculated correlation coefficient.  (Each row’s contribution is the standard per-observation term: its frequency times (C1 − mean(C1)) × (C16 − mean(C16)), divided by the normalising denominator of Pearson’s r.)  Summing the Contribution column will give the correlation coefficient for the two conspiracy theories.  For convenience, the tables are sorted on this column.

These tables confirm your calculated correlations are heavily influenced by responses which reject multiple contradictory conspiracy theories.  To demonstrate, you’ll note the correlation between “Diana killed by rogue cell of British Intelligence” and “Diana faked her own death” was given in the paper as 0.15.  Here are results taken from the corresponding C1_C16.txt file:

C1 C16 Freq Contribution
2   1   21  0.058595088
1   1   26  0.145092599

This shows people who adamantly disagreed (1) with the idea Princess Diana faked her death and strongly disagreed (1-2) with the idea Princess Diana was killed by British Intelligence are responsible for the majority of the correlation you found between these two conspiracy theories (0.059 + 0.145 ≈ 0.20, against a reported total of 0.15).  In fact, they contribute more than the total correlation because their contribution is partially counterbalanced by results like:

C1 C16 Freq Contribution
6   1    6 -0.050224361
5   1    8 -0.044643877

These are responses from people who believe British Intelligence killed Princess Diana and adamantly reject the idea she faked her own death – which is internally consistent.

Now then, this doesn’t show there are no people who believe in contradictory conspiracy theories.  Your results show there are some.  For example, from the same file:

C1 C16 Freq Contribution
5   3    2  0.031282153
5   2    9  0.045272664

These contradictory conspiracy beliefs do contribute to your results.  They just aren’t the primary cause of your results.  The correlation testing you used was too simplistic for the analysis you wanted to do, and that led you to misinterpret your results.

I’m afraid your results are not supported by your analysis.  It is simply impossible to use a single correlation coefficient calculated over two views amongst two groups as indicating something about one view for one group.

Brandon Shollenberger

P.S.  I’m attaching a script file with code that should be turnkey if you have the statistical programming language R installed.  It only compares two conspiracy theories, but you can compare whichever you want by changing the column references.  If you don’t have R installed, the data and code are still readable so you can see exactly what I did.  If you have any questions, please feel free to ask.
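
For anyone reading along without the attachment, the heart of that script was along these lines (a condensed sketch rather than the attachment verbatim – the CSV file name is a stand-in, though the C1/C16 column labels match the tables quoted above):

# Condensed sketch: build the contingency table of responses for two
# conspiracy theories, with each response pair's contribution to the
# Pearson correlation. "study1.csv" is a stand-in for the actual data file.
dat <- read.csv("study1.csv", na.strings = c("NA", "NULL"))
dat <- dat[complete.cases(dat[, c("C1", "C16")]), ]  # drop the NULL entries

x <- dat$C1   # "Diana killed by rogue cell of British Intelligence"
y <- dat$C16  # "Diana faked her own death"

# Shared normalising denominator of Pearson's r
denom <- sqrt(sum((x - mean(x))^2) * sum((y - mean(y))^2))

# One row per distinct (C1, C16) response pair, with its frequency
tab <- as.data.frame(table(C1 = x, C16 = y))
tab <- tab[tab$Freq > 0, ]
tab$C1  <- as.numeric(as.character(tab$C1))
tab$C16 <- as.numeric(as.character(tab$C16))

# Each pair's total contribution to r; the Contribution column sums to cor(x, y)
tab$Contribution <- tab$Freq * (tab$C1 - mean(x)) * (tab$C16 - mean(y)) / denom

tab <- tab[order(tab$Contribution), ]  # sorted on the Contribution column
print(tab, row.names = FALSE)
cat("Sum of contributions:", sum(tab$Contribution), "  cor():", cor(x, y), "\n")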

Later that day, I found out Steve McIntyre of Climate Audit had written a post discussing much of the same material as I had.  As such, I wrote to Wood:

Dear Michael,

I just came across the post by Steve McIntyre I’m sure you’re aware of.  I know you told me someone else tied to climate blogging had requested your data, but I wanted to let you know I had no idea he was looking at your paper, much less that he had already engaged you in discussion.  If I had been aware of the discussion you two were having, I would have approached this matter differently (or have simply sat out and let you two deal with it).

I know things can get cluttered if too many people start talking, so please don’t feel obliged to respond to me separately.  My approach seems like it may be somewhat different than McIntyre’s, but the overall point should be the same.

Wood responded:

Dear Brandon,

I don’t mind responding to your email as well, though things have been getting a bit weird lately with the publicity from the climateaudit post. I’ll copy the full text of my response to Steve McIntyre at the end of this email since he left a few parts out that I think are relevant to your question as well.

In addition, there’s a more general point here that I probably should have addressed in my original response to Steve McIntyre. There seem to be two major objections here. One is that the scale is best understood if split into two distinct poles, agreement and disagreement, and the two do not necessarily have much to do with one another – that a lower level of disagreement is distinct from a higher level of disagreement, and vice versa. This isn’t really how we do it in social science; just as a general rule, we see the scale, though not continuous itself, as representing a more or less continuous attitude that doesn’t have a particular discontinuity at some point that flips a switch that changes the quality of the answer from “agree” to “disagree.” In everyday life we tend to think in more binary terms, so it might seem like common sense to apply the same thing to psychology, but unfortunately common sense is not usually the best approximation of how people think. That said, this is partly an empirical question – you could probably hypothesise that the strength of the correlation is moderated by the mean level of agreement or something like that, or that agreement and disagreement are statistically meaningful as separate dichotomous entities rather than just arbitrary delineations on a scale. I don’t know of any research to that effect, though, and dichotomising like that is generally frowned upon in the social sciences because it leads to a fairly serious loss of variance and thus power.

The second point is that it’s inaccurate to say that the people who agree with the idea that Diana faked her own death are also the ones who agree with the idea that she was killed. This does reflect the general pattern in the data by which an increased agreement with one implies increased agreement with the other, though as you pointed out the scores generally fall at the lower end of the scale. As I pointed out in the original response to Mr. McIntyre, the magnitude of this correlation suggests that the relationship is not attributable to a response bias in which people simply express their disagreement with everything to a greater or lesser extent. Whether it continues as a linear relationship to the top of the scale or, counterintuitively, the shape of the relationship changes at some point, is something that would be best answered with a more conspiracy-minded sample, and unfortunately those are quite hard to come by.

Best regards,
Michael Wood

Realizing a technical discussion could easily get bogged down and accomplish nothing, I responded by saying:

Dear Michael,

I could write a technical explanation for why your methodology is flawed.  In fact, I did.  Then I deleted it.  Rather than discuss technical details, I’m going to simplify things.  Forget everything that’s been said before now.  Consider instead the effects of your methodology:

Suppose I survey 100 liberals, asking their views on various issues as well as how conservative/liberal they are.  Naturally, I’d find correlations.  Under your methodology, I could take those correlations and declare they prove things about what conservatives believe.  The fact I surveyed no conservatives would be irrelevant to your methodology.  The fact my data has absolutely no information about conservative views wouldn’t matter – I’d still be able to conclude things about conservatives’ beliefs.

To demonstrate the full absurdity of this, I’ll take things to the extreme.  Suppose in my previous example one of the questions I asked was, “Do you think slavery is bad?”  Every liberal I surveyed would say yes.  That’d give me a high correlation between liberal views and opposition to slavery.  Via your methodology, I could then conclude, “Conservatives strongly support slavery.”

I could ask women, “Have you ever murdered somebody?” then publish a paper which says, “All men are murderers.”
I could ask global warming skeptics, “Are you from outer space?” then publish a paper which says, “Global warming movement acknowledged to be an alien conspiracy!”

There is no limit to this.  Your methodology would allow us to “prove” any negative characteristic exists for any group.  There is no way to justify it.  One doesn’t need a technical explanation.  It’s obvious a methodology is nonsensical if it can prove things about groups no information exists for.

That said, I can provide a technical explanation for why this happens if you’d like.

Brandon Shollenberger
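
(To put numbers behind that argument, here is a toy R simulation – everything in it is invented for illustration – in which a relationship that holds among the surveyed “liberals” reverses among the “conservatives” nobody surveyed:)

# Toy simulation, invented for illustration: a correlation computed over
# one side of a scale need not say anything about the other side.
set.seed(1)
lean <- runif(1000, 1, 7)  # self-rating: 1 = very liberal, 7 = very conservative
view <- 4 - abs(lean - 4) + rnorm(1000, sd = 0.5)  # relationship reverses at the midpoint
liberals <- lean <= 3

cor(lean[liberals], view[liberals])    # strongly positive among the surveyed liberals
cor(lean[!liberals], view[!liberals])  # negative among those never surveyed
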
Michael Wood’s final response to me was:

Dear Brandon,

Though I appreciate the effort to demonstrate your point, I’m not having trouble understanding what you’re saying. I just disagree with the premise. Moreover, the examples that you’ve provided here don’t make mathematical sense. If I survey a large group of liberals and they all give the same response regarding slavery, the correlation between political orientation and slavery would be impossible to compute because there is no variation in either variable. If we instead gave that group of liberals a left-right political orientation scale on which they placed themselves, and there was some variation in that variable but not in the slavery variable, we would get the same result. You can’t compute a correlation coefficient in the absence of variation – this is very basic statistical knowledge. I understand that you think the sample we used in the 2012 paper had no variation in terms of belief in the Diana conspiracies, but that only comes remotely close to being true (and not even then) if you dichotomise the scale, a process for which there is simply no statistical justification. Dichotomising data kills variability and causes a massive reduction in statistical power, and even when data are (rarely) dichotomised the accepted procedure is generally a median split rather than just picking a point on a scale.

A closer example to the Diana study would be a study that, despite having a mostly liberal sample, showed that people within that sample who scored higher on a measure of right-wing ideology tended to have more negative views on abortion. Given these results it would be quite justified to report a correlation between right-wing ideology and views on abortion, despite the restricted range of the sample. This is because we don’t have any reason to believe – either in the case of Diana or this hypothetical abortion study – that there is some discontinuity at the midpoint of the scale that changes the form of the relationship between the two variables (there’s no shortage of examples of people holding mutually contradictory beliefs at the same time), and in fact if we had a larger data set we would by default test for a linear effect anyway. There is no sharp change in political attitudes that occurs suddenly at the midpoint of the left/right scale, and agreement with a proposition is likewise not a simple thing that is well-captured by two response categories. While I understand that this is counterintuitive and does not mirror the way most people think about agreement and disagreement, our current psychological knowledge base indicates that the variation in degrees of agreement is best captured by ordinal or scale responses rather than a simple binary measure of agreement. Of course, clearly neither the abortion example nor the real study is conclusive on its own, and both would benefit from replication with a broader sample. We don’t consider anything “proven,” just that the evidence suggests a certain result, and this is of course amenable to confirmation or disconfirmation by future research.

While I appreciate your inquiries on this topic, this must be my last email to you, as I am quite busy with teaching these days and have limited time for research-related activities. However, I would encourage you to read more about issues of measurement in psychology if you find the topic interesting.

Best regards,
Michael Wood

I e-mailed him informing him of my dissatisfaction, and it appears it’ll be the final message between us:

Dear Michael,

I understand you say your last e-mail will be the last e-mail you send to me.  That’s your call.  However, on at least two occasions in your last e-mail you directly misrepresent what I’ve said or what I believe.  You do so while attaching comments to your e-mail like “the examples that you’ve provided here don’t make mathematical sense” and “this is very basic statistical knowledge.”  It is cheeky to misrepresent a person then use those misrepresentations to put him down.

I’ll limit myself to the two most blatant misrepresentations.  First, you say:

If we instead gave that group of liberals a left-right political orientation scale on which they placed themselves

Yet my example specifically said the respondents would be asked to rate “how conservative/liberal” they are.  You suggested we could instead use a scale that is almost identical to a scale listed in my example.  That is directly misrepresenting what I said.  You implied I lack statistical knowledge based partially upon your failure to read a simple sentence.

The remainder of your implication is based upon a lack of charity.  It is true I wrote several questions as yes/no questions instead of scaled questions.  That was a mistake.  It was a mistake, however, that could have been recognized by simply trying to understand what I might have been saying.  Had you not jumped to the conclusion that I lack “basic statistical knowledge,” you’d have readily seen the point I was making.  After all, it is the same point I had made in my previous e-mail to you.  This brings me to the second misrepresentation.  You say:

I understand that you think the sample we used in the 2012 paper had no variation in terms of belief in the Diana conspiracies

This is not true at all.  I have never said anything of the sort.  In fact, my earlier e-mail to you included program code used to produce (also included) results which showed there is “variation in terms of belief in the Diana conspiracies.”  I cannot fathom how I could show results which demonstrate variation exists yet have you claim I believe it does not.

You may believe you are “not having trouble understanding what [I'm] saying,” but given you’ve directly misrepresented things I’ve said and things I believe, I think you’re wrong.  It may be partially because I messed up by phrasing questions as yes/no in my last e-mail rather than agree/disagree, but that cannot explain things like you ignoring parts of my example and then offering those same parts as though they were your own idea.

Brandon Shollenberger
I also observed elsewhere:

By the way, I’m actually being generous to Michael Wood. It is not unheard of for yes/no questions to be asked with a scaled set of responses. You can have categories like, “Definitely,” “I think so,” “I don’t know,” “I don’t think so” and “Definitely not.” I’ve taken a number of surveys like that. As such, my questions weren’t even wrong.

But it’s like Steve McIntyre has observed elsewhere – people defending work will often take anything they can portray as error as proof your criticisms are wrong. It doesn’t matter if you make an error or not. It doesn’t matter if the (supposed) error affects your argument or not. Unless you write without including anything that could possibly be taken as mistaken, they can misunderstand and misrepresent your arguments.

Of course, if you did miraculously write your case perfectly, they could just ignore you, or as seen above, simply ignore what you write and respond to fabrications.

As of this moment, I don’t intend to pursue matters with Wood or his co-authors any further.  I should write a discussion of the technical components of Wood’s analysis, though.  It’d be good to have for people who want to know, and I may find a use for it at some point.  After all, Wood and his co-authors are not the only ones using this ridiculous methodology – far from it.

3 comments

  1. If I understand Wood correctly, he’s saying that we should view people’s responses to scaled questions in a certain way (as lying on a continuum rather than being binary yes/no responses) not because that’s the way people are, but because the results have more statistical “power” that way – i.e. we have more chance of having something significant to interpret.
    The net result is surely that we can no longer interpret anything in the data as referring to human beings and what they think. All we can talk about is the statistical variations of the variables we’re measuring with respect to each other. By going for statistical power, they’ve removed human beings from psychology. Have I got that right?
    The problem is that laymen like journalists aren’t aware of this, and interpret the results as being about real live people, and say things like “people who are x are more likely to be y”.
    Apologies for not expressing myself statistically correctly. I’m struggling with this idea. It would explain the fact that many papers in the social sciences give no raw figures or percentages of how many people think or believe this or that.

  2. He’s arguing something like that. Crudely put, Wood’s analysis assumes there is a specific relationship that’s consistent along the entirety of the scales. If that’s true, dichotomizing the data will reduce statistical power without providing any new information. This assumption is not laid out, much less justified, within the paper. That means a fundamental aspect of the paper is predicated upon vigorous hand-waving done behind the curtains.

    More importantly, Wood’s argument is a non sequitur. He addresses the value of dichotomizing the data for analyzing beliefs. That’s not why I did it. The reason I dichotomized the data was to show the relationship between the two scales. My results showed the relationship between scales was not continuous. The first e-mail criticizing the analysis quoted results showing Wood’s data with positive correlation at one end of a scale and negative correlation at the other end. That directly demonstrates the assumption Wood’s analysis relies on is false. It proves the data does not have a continuous, linear relationship, as he knows his analysis requires:

    Whether it continues as a linear relationship to the top of the scale or, counterintuitively, the shape of the relationship changes at some point, is something that would be best answered with a more conspiracy-minded sample, and unfortunately those are quite hard to come by.

    I directly showed it does not continue as a linear relationship by dichotomizing the data (a sketch of that check appears at the end of this comment). He ignored the purpose and results of the test, misrepresenting it as a different analysis rather than a way of checking the validity of his analysis.

    In case it isn’t clear what I mean, I’ll use an example from climate blogging. When criticizing MBH’s temperature reconstruction, Steve McIntyre showed what results would look like if one made certain changes. Some critics misrepresented this as McIntyre promoting those results as an alternative reconstruction. They then criticized those results, saying the lack of statistical skill for them means McIntyre’s criticisms must be wrong.

    That’s effectively the same as what Wood did. He took my sensitivity testing (what happens when you dichotomize data) as a new analysis then said it’s a bad one. In doing so, he completely ignored the point of my testing, that his data clearly invalidates his assumptions.
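
    For anyone who wants to reproduce that check, it amounts to something like this (a minimal sketch reusing the x and y variables from the script sketch quoted after the first e-mail; cutting the scale at its midpoint is an assumption made for illustration, not the only possible split):

    # Sensitivity check sketch: a relationship that is genuinely linear from
    # end to end should relate the two scales in the same direction in both
    # halves (attenuated by range restriction, but with the same sign).
    low <- x <= 3  # the "reject" half of the C1 scale; midpoint cut is an assumption
    cor(x[low],  y[low])   # sign of the relationship among those rejecting C1
    cor(x[!low], y[!low])  # sign among the rest - opposite signs indicate the
                           # shape of the relationship changes along the scale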
