Mann’s Screw Up #3.1 – Cherry Picking

The last post in this series showed Michael Mann’s hockey stick depends on a tiny amount of data. Previous posts showed Michael Mann knew this. Today’s post is going to show how he managed to pick the data that gave the results he wanted. That’s right, I’m finally going to discuss principal component analysis.

But before we look at how Mann’s faulty implementation of PCA allowed him to cherry-pick his results, let’s remember he had two different proxies with a hockey stick shape. One (Gaspe) wasn’t created via PCA. As I said of it before:

That series was an arbitrarily extended (no other series was extended like it) version of a series already included in his data set. The extension of that series was originally undisclosed, acknowledged only after Mann was forced by his critics to publish a corrigendum saying:

For one of the 12 ‘Northern Treeline’ records of Jacoby et al. used in ref. 1 (the ‘St Anne River’ series), the values used for AD 1400–03 were equal to the value for the first available year (AD 1404).

Even after it was acknowledged, no explanation was provided for the extension, and there has never been an acknowledgment of the fact the series was used twice.

That is simple cherry-picking. He took a data series already in his data set, duplicated it, arbitrarily extended it back in time, then set it aside so PCA wouldn’t be applied to it. Unfortunately, PCA is not so simple.

Principal component analysis is a way of trying to extract signals from data. You take many series, compare them and see what is similar. The problem is Michael Mann did not look at entire series. What he did is related to something known as the “screening fallacy.” lucia at The Blackboard has a great post demonstrating it.

Basically, we know temperatures in the 1900s went up, so it makes sense to look for proxy data which shows rising temperatures in the 1900s. We take whatever data shows that rise, average it together, and find modern temperatures are higher than they’ve ever been.

That makes sense until we realize the process guarantees the results. If you only use data which rises in the 1900s, your data is guaranteed to show rising temperatures in the 1900s. Imagine you looked at a bunch of data and found these three series within it:

[Feb19_fig1: three series that all rise in the final period but otherwise disagree]

All three agree temperatures have gone up in the last period. They don’t agree about anything else though. So what would happen if you averaged them together?

[Feb19_fig2: the average of the three series]

You got yourself a hockey stick. You could throw a bunch of noisy series in your data set, and you’d still get a hockey stick. The reason is everything before 1900 is basically random, but everything after 1900 is guaranteed to go up.
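
If you want to see this for yourself, here is a minimal sketch of the screening fallacy in Python. Everything in it is made up for illustration (the years, the number of series, the screening rule); it is not Mann’s data or procedure, just noise being screened the way lucia describes:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1400, 2001)            # hypothetical proxy period
n_series = 1000

# Each "proxy" is just a random walk: pure noise, no temperature signal.
proxies = rng.normal(size=(n_series, years.size)).cumsum(axis=1)

# "Screening": keep only the series that trend upward after 1900.
post_1900 = years >= 1900
kept = [p for p in proxies
        if np.polyfit(years[post_1900], p[post_1900], 1)[0] > 0]

# Average the screened series. The pre-1900 "handle" largely cancels out,
# while the post-1900 "blade" is guaranteed to rise, because rising after
# 1900 was the selection criterion.
reconstruction = np.mean(kept, axis=0)
print(f"kept {len(kept)} of {n_series} series")
print("pre-1900 trend: ", np.polyfit(years[~post_1900], reconstruction[~post_1900], 1)[0])
print("post-1900 trend:", np.polyfit(years[post_1900], reconstruction[post_1900], 1)[0])
```

Plot `reconstruction` against `years` and you get a flattish handle with a rising blade, built entirely from noise.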

Now then, that’s not quite what Michael Mann did. PCA doesn’t look for specific patterns. Instead, the more a series varies from the norm, the more weight PCA gives that series. The trick is in how you define “the norm.” Conventional PCA takes the norm to be the average over the entire series. Michael Mann decided otherwise. What he did was define the “norm” as the average over the 1900s.

Of course, if you set your norm to a small period in which series are rising, you’ll find series tend to deviate a lot from that norm. You can see this by comparing the black line (the series centered on its average over the entire period) and the red line (the series centered on its average over the last period) in this figure:

[Feb19_fig3: a series centered on its full-period average (black) and on its last-period average (red)]

The black line stays fairly close to 0. As such, it’d receive little weight. The red line stays far from 0. As such, it’d receive more weight. There’s more to it than I’m describing (including two rescaling steps), but it’s all rooted in the same basic idea:

Under Mann’s approach, if a series changes after 1900 instead of before 1900, it gets more weight. And because his approach favors changes post-1900, it favors changes in the form of a hockey stick.
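
To put rough numbers on that, here is a minimal sketch mirroring the figure above. The two series are synthetic and the baseline period is simply “everything from 1900 on”; nothing here is Mann’s data or his full algorithm (which also involves the rescaling steps mentioned above). The point is only that spread about the centering baseline is roughly what sets a series’ weight in a covariance-based PCA, and de-centering inflates that spread for a hockey-stick-shaped series while barely touching a flat one:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1400, 2001)

# A flat, noisy series and a "hockey stick" that only rises after 1900.
flat = rng.normal(scale=0.25, size=years.size)
stick = (np.where(years >= 1900, (years - 1900) / 50.0, 0.0)
         + rng.normal(scale=0.25, size=years.size))

def spread_about(series, baseline_mask):
    """Sum of squared deviations from the mean taken over `baseline_mask`,
    which is (roughly) what determines a series' weight in covariance PCA."""
    return np.sum((series - series[baseline_mask].mean()) ** 2)

full_baseline = np.ones(years.size, dtype=bool)   # conventional centering
short_baseline = years >= 1900                    # Mann-style de-centered baseline

for name, series in [("flat series", flat), ("hockey stick", stick)]:
    print(f"{name:12s}  full centering: {spread_about(series, full_baseline):7.1f}"
          f"   short centering: {spread_about(series, short_baseline):7.1f}")
```

The flat series should come out nearly the same under both centerings, while the hockey stick’s spread should grow several times larger once the baseline shifts to the 1900s. That is the extra weight described above.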


There is no statistical reason to use “de-centered” baselines like Mann did. Ian Jolliffe, one of the people Mann used as a reference on statistics, flat-out said, “I don’t know how to interpret the results when such a strange centring is used.” The reality is nobody knows how to interpret those results because the methodology is nonsensical.

And that’s not really a matter of dispute. Mann still claims his methodology is okay, but he doesn’t focus on that. What he focuses on is the fact you can get a hockey stick if you do PCA correctly. He claims:

Curiously undisclosed by MM in their criticism is the fact that precisely the same ‘hockey stick’ pattern that appears using the MBH98 convention (as PC series #1) also appears using the MM convention, albeit slightly lower down in rank (PC series #4) (Figure 1).

That fact was not undisclosed by Steve McIntyre and Ross McKitrick (MM), but otherwise, what Mann says is true. PCA of any style will find a hockey stick. The only difference is that PC1 is the most important signal while PC4 is only the fourth most important. That’s a pretty significant difference.

But really, who cares? Michael Mann only got a hockey stick because of two series. One (Gaspe) was cherry-picked in a straightforward manner. The other (NOAMER PC1) can be cherry-picked via PCA or whatever method you prefer.

It doesn’t matter. He’s still just cherry-picking a tiny amount of data.


13 comments

  1. Nice explanation Brandon! The methods appear “sciencey”, yet you (and Lucia and JeffID and others) have shown that the method creates the handle as well as the blade. The sciencey appearing methods do NOT follow the conventions/norms in each field (dendros generally use samples when 5 or more cores are used in a series…Gaspe at the early stages only had one core, and that core was arbitrarily extended back in time to meet a specific need; statisticians hoping to find signals from volumes of data don’t use “short centered” PCA, etc.). Torturing data to make it confess seems like an appropriate description of what happened with MBH98/99+.

    Bruce

  2. Boris, no. Users don’t get to make demands here, certainly not when they’ve contributed nothing to the discussion. If they do, I’ll just laugh at them.

    If you want something, you can ask for it. You can say it ought to be provided. You can criticize people if they don’t provide it.

    But on my blog, only I can demand anything from anyone.

  3. Brandon, scientists (and people in general) define the terms they use. If you don’t want to define what you mean, that’s fine, but it’s impossible to evaluate what you’re saying without knowing what “hockey stick” means to you.

    I’m not sure why you think what I said was a “demand.”

  4. Boris, if you can’t figure out why telling someone to do something is giving them a demand, I’m not sure anyone can help you.

    Similarly, if you can’t figure out that tweaking your nose for rudeness in no way implies I refuse to provide definitions, I don’t think anyone can help you.

  5. Better yet, make the point you wish to make. You have one definition in mind and want to say a hockey stick can be made. So do it.

  6. Bruce said, “dendros generally use samples when 5 or more cores are used in a series…Gaspe at the early stages only had one core, and that core was arbitrarily extended back in time”

    If that’s not “torturing the data”, then I don’t know what is!

  7. Has anyone used Mann’s method (short centered PCA) to calculate a result without Gaspe? In other words, use Mann’s exact group of series except omit Gaspe. Put them through the short centered PCA and compare the results to MBH 98. Is it just Gaspe that corrupts the result or is it Gaspe + the other HS-shaped series (whose name escapes me)?

    My understanding has always been that you need just one HS-shaped series to be present in the input and you will be guaranteed that the Mannomatic will produce a hockey stick as an output. If this is true, then Mann’s padding of Gaspe added insult to injury, but the true culprit remains Mann’s short-centered PCA (the Mannomatic). All Mann really needs to do to perform the magic trick is to hunt around and find one naturally occurring series that has a HS shape and add it into the Mannomatic. Upside down Tiljander was an attempt to do that; I just suspect that our Nobel Prize winner didn’t realize he was using it upside down.

  8. “My understanding has always been that you need just one HS-shaped series to be present in the input and you will be guaranteed that the Mannomatic will produce a hockey stick as an output. If this is true…”

    No, that isn’t true.

  9. Anto: it’s actually worse than JUST torturing the data… it’s “creating” data where none existed! Why? We don’t know for sure because the methods don’t explain why… one can reasonably assume that it was done to meet a specific purpose. It’s not explained. It wasn’t even documented initially.

    Again, imagine what a regulatory authority would say if this type of approach were taken for a pharmaceutical study. Would anyone agree that it would be acceptable to select from your overall population only those patients (trees) that responded to treatment? This appears to be a major issue with the dendro community in general, one brought to light by these papers.

    How about arbitrarily adding patient data to the study in order to improve your drug study results? Or using data that other peer researchers don’t use because of known quality issues??? That this work was published is shocking. That peers still defend it (and that those who don’t outwardly defend it don’t demand retraction) is simply mind-boggling.

    Bruce

  10. It’s worth remembering that Dr. Ababneh (Mann’s grad student) -resampled- the bristlecones of NOAMER PC1.

    And discovered that post 1960 (or was it 80?) they aren’t any sort of hockey stick -either-.

    The whole idea of using trees as proxies in the first place requires them to be ‘stressed’ by being part of the treeline.
    The whole idea of climate change is that treelines move.

    The entire idea that just because a series happens to agree ok -now- implies it has a similar competence outside of the observed period is similar to saying “This one works, and it will work in the past (or in the future) provided the climate didn’t change.” How on earth can you -expect- to find climate change with that?

    It’s hardly surprising that averaging -non- climate proxies together makes a flattish line.

  11. mpaul, the output of PCA is not a temperature reconstruction. PCA is just a way of combining a large number of series into a much smaller number of series (named principal components, or PCs). Michael Mann used PCA to combine ~300 series into only ~30, but he also had another ~80 series he didn’t apply PCA to at all.

    Most of that data didn’t go back to 1400 AD. Of the data that did, Mann combined ~100 series into three PCs via PCA. There were another 19 series he used without applying PCA. The resulting 22 series can be seen in this post. Once those 22 series were created, PCA was not used again.

    If you look at those 22 series, you’ll see two have hockey stick shapes. One is NOAMER PC1, created via Mann’s de-centered PCA. The other is Gaspe, a series which was simply cherry-picked. Both were cherry-picked via different methods, and as long as you include either, you get a hockey stick.
