The Stupidity, It Burns

It may be petty of me, but I enjoy highlighting the stupid things my critics say. Today’s critic goes by the name WebHubTelescope. If you’ve spent any time at Judith Curry’s blog, you know who he is. If you don’t, that’s fine. Suffice to say he’s egotistical, obnoxious and doesn’t like me. Well, also, he’s an idiot.

Over at Curry’s blog, WebHubTelescope condemned my recent post highlighting a problem with the Berkeley Earth temperature record (BEST) in very strident terms, such as:

it is looking more and more like you are intent in turning your analysis goof into a fabrication of results and use that to smear the BEST team.

And:

Listen carefully Brandon. This is not that hard. Just take the data and make a graph correctly. That was your obvious mistake. You did not do it correctly.

To justify his criticism, WebHubTelescope offers this graph:

[WebHubTelescope’s graph: GISS vs. BEST for Springfield, Illinois]

As an alternative to the comparison I made in my previous post:

[My graph from the previous post: BEST vs. GISS gridded trends for my home area]

You’ll note, his graph specifies the location for which the temperatures were measured: Springfield, Illinois. Mine doesn’t. Mine doesn’t because my graphs don’t show temperatures measured for Springfield, Illinois. As I said in that post:

I recently looked at the gridded data BEST published showing its estimates of average, monthly temperatures across the globe (available on this page). After some playing around, I decided to extract the values given for the area I live. I then did the same thing with another temperature record, NASA’s GISS (available on this page).

Note, this clearly says “gridded data.” There are also links to the pages where that gridded data could be found. There can be no question I was talking about gridded data. That is, there can be no question I was talking about temperature estimates for latitude-longitude grids on the world map.

One data set used 1º x 1º grids; the other used 2º x 2º. As I said in my post, they “aren’t completely comparable as… GISS also uses larger grid sizes.” Again, there can be no mistaking what kind of data I used: data provided by two groups (GISS and BEST) representing their estimates for temperatures in my area.

To make the problem more clear, look at this map of Illinois:

[Map of Illinois overlaid with 1º x 1º latitude-longitude boxes]
See those boxes? Each one of those is a 1º x 1º square. Make a square out of four of them, and you have a 2º x 2º square. You’ll note, whether you use a 1º x 1º or 2º x 2º square, there are still a number of cities listed in it. Those squares would also cover many more towns not listed on the map.

WebHubTelescope used the temperatures measured in a single city (Springfield) for his GISS data. I used the estimated temperatures for a 2º x 2º grid, which covers 20+ cities, for my GISS data. There is no way anyone who sought to understand my post could fail to note the difference.

In other words, I made one comparison. WebHubTelescope made a very different comparison, and when it didn’t match the one I showed, he started suggesting I’m an idiot, a fraud, an “F-student” and more.

In reality, I just know how to read simple sentences.


21 comments

  1. Nick Stokes, I didn’t post the exact grid cells because I instinctively shied away from saying where I live. Sorry about that. I should have specified which grid cell the data came from.

    Anyway, I live south of Springfield, near where it says Nashville. It should be easy to convert the lat/lon to the coordinates in the files. If not, I can pull up my code and copy the extraction commands.

  2. Brandon, you are a smart guy. I am sure your time could be spent far more productively. Whilst I agree 100% with what you say, spending time on a response to an idiot like WHT is still a waste of your time.

  3. Nick, see this comment for another comparison:

    You can get gridded averages from the Climate Explorer:

    http://climexp.knmi.nl/selectfield_obs2.cgi?id=someone@somewhere

    Here are the trends for 1900-2010, for the region 82.5-100W, 30-35N:

    berkeley 0.045
    giss (1200km) 0.004
    giss (250km) -0.013
    hadcrut4 -0.016
    ncdc -0.007

    Berkeley looks to be a real outlier.

    Note we’re interested in gridded data, not individual sites, because that is the quantity that is supposed to most closely compare to T_s(\vec r, t).

    bit chilly, when there is confusion, I think it does help to provide more information. Given how poorly WHT behaved on that thread, this was kid-glove treatment IMO.

  4. The region 82.5-100W, 30-35N is the US SE, one of just a few regions that has seen cooling over 1900-2010 (another is the Atlantic ocean just south of Greenland).

    Nick, any chance you can give us the estimated trend from your code for this case?

  5. bit chilly, Carrick, while responding to WebHubTelescope might be a waste of time, it was also entertaining. And once I did that, it was impossible not to write this post. How could I not share a mistake so obvious, yet so completely missed?

    By the way, after I posted a comment linking to this and summarizing my post, he commented:

    Springfield for GISS matches Springfield for BEST.
    I have no idea how Brandon Shollenberger messed it up, but he did.

    Anybody can do the comparison, except for him.

    It’s priceless!

  6. WebHubTelescope is now taking his complaints on this topic to other sites. I just saw this comment of his:

    Willard, I have a recent example of ClimateBall for you that involves Climate Etc. It is based on a comment by a skeptic trying to prove why the BEST team should not be trusted:
    http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-606998
    This was Brandon Shollenberger’s analysis of climate in Springfield, Illinois, where he tried to compare BEST results against GISS results, and asserted that BEST was exaggerating the warming.

    I started the process by saying that the technical analysis casting aspersions on BEST was completely messed up. This was very obvious, but I was not completely playing the ball. With characters such as Chewie inhabiting the blogosphere, how can you concentrate on just the ball?

    BTW, on Climate Etc the first rule of ClimateBall is never to mention ClimateBall, as apparently this seems to be a phrase that will raise the moderation flag.

    That was at Anders’s blog. I don’t know how to explain his insistence on sticking with this mistake, but it’s hilarious he’s now going around making it at other sites.

    Also, you should look at the topic he posted that on. It’s an announcement from willard that he’s been made a moderator at AndThenTheresPhysics.

  7. I see absolutely no problem with documenting WHT’s error on this blog.

    I’ve decided that Anders’ blog is a bit too ripe for me, so I’m avoiding it these days.

  8. I don’t visit Anders’s blog anymore, but I do have an RSS feed running for its comments. I set that up back when Skeptical Science members were accusing me of hacking over there. I’d delete it, but skimming the comments has provided some nice highlights, such as WebHubTelescope’s and the announcement that willard is a moderator now.

  9. I don’t normally upload scripts when creating simple plots like these, but since WebHubTelescope has made such a fuss over this, I’ve uploaded a script showing how I extracted the data I plotted.

    It’s not turnkey. You’ll have to download the data files and place them in your working directory yourself (the links to the files are included though). You may also have to manually install one of the packages (ncdf4) before being able to run the code. I had to because the CRAN mirrors didn’t have the binaries for it yet. If you need help figuring out how to do that, let me know.

    (The script doesn’t produce the exact graphs I made. It doesn’t relabel one of the axes, and it doesn’t create the linear regression lines. I was too lazy to go through my command history to extract the lines for those parts.)
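
    For anyone who just wants the gist without running the script, the extraction boils down to something like the sketch below. To be clear, this is a sketch, not the uploaded script: the file name, the variable names ("temperature", "latitude", "longitude") and the dimension order are my assumptions about the BEST gridded netCDF layout.

    library(ncdf4)

    # Open the gridded file (name assumed; use whichever file you downloaded).
    nc <- nc_open("Complete_TAVG_LatLong1.nc")

    # Find the indices of the 1º cell nearest a target lat/lon
    # (coordinates here are illustrative).
    lats <- ncvar_get(nc, "latitude")
    lons <- ncvar_get(nc, "longitude")
    lat_idx <- which.min(abs(lats - 38.5))
    lon_idx <- which.min(abs(lons - (-89.5)))

    # Read only that cell's time series; count = -1 means "to the end".
    # Dimension order (longitude, latitude, time) is assumed.
    series <- ncvar_get(nc, "temperature",
                        start = c(lon_idx, lat_idx, 1),
                        count = c(1, 1, -1))
    nc_close(nc)

    Reading one cell at a time like this also avoids pulling the entire global array into RAM.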

  10. Carrick July 12, 2014 at 8:26 am
    “Nick, any chance you can give us the estimated trend from your code for this case?”

    My code doesn’t do gridding as such. It uses (optionally) a grid to estimate station density, for weighting, but that won’t help here.

  11. How about the trend from 1900 to 2010 at the geographic center of 82.5-100W, 30-35N?

    That would be 91.25W, 32.5N.

  12. Carrick,
    Unless there is a basis for differentially weighting the stations in the cell, it’s just going to come out as an average. I do have fancy schemes for weighting like triangular mesh, which I use on the globe, but they aren’t adapted for non-mesh boundaries.
    I could (no time at the moment) calculate it, but TempLS wouldn’t add value.

  13. Brandon, thanks for the script! I’ve never played with that data or ncdf… I took on USHCN with awk, Fortran and R, so this script might be a bite-sized intro… regards

    Very well played over at CE. Very strange.

  14. mwgrant, no problem. The biggest issue I had was manually installing the ncdf4 package. It’s not hard to do, but you have to know the package exists. Other than that, it’s all pretty easy. I just had to watch my RAM consumption because the BEST file is large, and the ncdf4 package will consume all your RAM if you let it (which caused it to crash for me).

    Beyond that, it’s just a matter of figuring out the coordinate system. The numbering works like a graph: the X-axis runs left to right, the Y-axis bottom to top. It should only take a minute to convert lat/lon coordinates to that.
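
    For a 1º grid whose cells are centered on the half degree, the conversion is just a fixed offset. Here is a hypothetical pair of helpers (my names, not anything from the script), consistent with the index/value pairs mwgrant prints in a later comment:

    # Longitude (X) runs left to right, latitude (Y) bottom to top,
    # with cell centers at -179.5..179.5 and -89.5..89.5.
    lon_to_x <- function(lon) round(lon + 180.5)  # -179.5 -> 1, 179.5 -> 360
    lat_to_y <- function(lat) round(lat + 90.5)   #  -89.5 -> 1,  89.5 -> 180

    lat_to_y(37.5)   # 128
    lon_to_x(-90.5)  # 90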

  15. As I mentioned on Judith Curry’s blog, I computed the trend (1880-2010) for various time series, using Climate Explorer to extract the data.

    I chose the latitude range 39 to 40N and the longitude range 90W to 89W (i.e., −90 to −89 degrees east).

    Anyway here are the results:

    series slope (°C/decade)
    berkeley 0.071
    giss_r1200 0.049
    giss_r250 0.040
    ncdc 0.035
    crutemp 0.002
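
    If anyone wants to check these slopes locally, each one is just a least-squares fit to the series Climate Explorer hands back. A minimal sketch (the helper is hypothetical, not part of any of these groups’ tools):

    # Slope of a least-squares fit for a monthly series, with dates
    # given as decimal years (e.g. 1880.042); returns °C per decade.
    # lm() drops NA months by default.
    trend_per_decade <- function(series, years) {
      fit <- lm(series ~ years)
      10 * coef(fit)[["years"]]
    }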

  16. Brandon, I have enjoyed your comments for some years. Congratulations on your site; your perspective is always interesting and I will read it daily!

  17. Brandon and Lucia

    “… The BEST one I used is one degree south.”

    > ncvar_get(nc, "latitude", start = c(128), count = c(1))
    [1] 37.5
    > ncvar_get(nc, "longitude", start = c(90), count = c(1))
    [1] -90.5

    Hmmm, a cell centered at lat 37.5 and lon -90.5 is 2 degrees south and 1 degree west, i.e., in the 1×1 cell west of Cairo and Mound City. Am I missing something? (Wouldn’t be the first time.)

    BTW, the annotated short script did prove to be a nice bite-sized netCDF intro. Thanks again.

    (I should add my congrats on the blog too. Congratulations, looks like you are getting traction.)

  18. GarryD, mwgrant, thanks! I didn’t intend to make this into a blog for discussion, but I will admit it is nice to have a place where I can flesh out ideas.

    And mwgrant, you’re right about the coordinate issue. I made a mistake in my formula for converting coordinate systems that wound up shifting everything. The alignment was kept correct, so all the comparisons are accurate. They’re just accurate for a slightly different location than I intended. I’ll need to add “+1” to my equations for converting latitude/longitude to the coordinate system used in these files. Thanks for catching that.

    I’ll try to redo all the comparisons with the right locations before too long.
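
    For the record, once the “+1” is in, the console check mwgrant ran should land one cell north and one east of where it did (assuming the grid ascends in 1º steps):

    > ncvar_get(nc, "latitude", start = c(129), count = c(1))   # should give 38.5
    > ncvar_get(nc, "longitude", start = c(91), count = c(1))   # should give -89.5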
