The models of climate change

I just returned from the MS4 conference. It is the fourth year that a group of philosophers of science have gathered to try to tease apart the implications of computer simulation in science. My interest in computer simulation is in its uses in ecology (see the abstract for my paper if you are interested), but for me, some of the most captivating work of this kind is being done on climate models, in which simulation is used to try to sort out the implications of our warming planet. Philosophers try to pick out what science is doing: they examine its assumptions and attempt to draw the lines of demarcation between good and bad science. Science studies the world; philosophers study the science. Sort of like judicial review in law (don’t take this too far; scientists hardly ever pay attention to what philosophers are saying).

Several presentations offered harsh criticism of climate science models. Bayesian tools (a statistical technique) came in for especially harsh criticism. Everyone agreed the models were problematic in one sense or another: the results were subject to all kinds of errors and suspicions, and there were genuinely difficult problems to sort out. Several presentations tackled and dissected these problems, including a nice historical exploration of the use of climate models from the early 70s until the early 90s, when the consensus was gathering that climate change was real (a few remember the claim of global cooling, which was made because researchers were only looking at shading from volcanic ash; science quickly made the correction).

These are the people best equipped to assess the adequacy of this modeling effort. Despite the criticisms, everyone concurs the models are robust (a technical term in philosophy of science meaning that multiple independent models, constructed by independent investigators or teams of investigators, converge on the same story). No one disagreed that the planet is warming, that we are in for major changes if action is not taken, and that we had best prepare for those changes because they will be substantial.

This entry was posted in Ethics, The Environment.

25 Responses to The models of climate change

  1. Joseph Smidt says:

    Interesting report.

I will say (and this seemed to be implied by the post) that it is so easy to criticize a tool like Bayesian statistics without doing the hard thing: finding a better tool to use in its place.

Cosmologists love Bayesian statistics because with it we can (and have) decoded volumes of physics that has gone on since the beginning of the universe. It is largely Bayesian statistics that lets us know how old the universe is, how fast it is expanding, what its fundamental shape is (flat), how much matter/dark energy/etc. it contains.

And it’s true, we are aware of philosophical issues with the use of Bayesian statistics, but at the end of the day we have a tool that is useful and the critics have… philosophy.
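As an illustration of the kind of inference being described here, below is a minimal sketch of Bayesian parameter estimation on a grid. The data, noise level, and parameter range are all invented for the example (this is not actual cosmology code): a flat prior is updated by the likelihood of the data, and the posterior peaks near the best-supported parameter value.

```python
import math

# Hypothetical data: 10 noisy measurements of a quantity whose true value is ~5.0.
data = [5.1, 4.8, 5.3, 4.9, 5.0, 5.2, 4.7, 5.1, 4.95, 5.05]
sigma = 0.2  # assumed known measurement noise (invented for this sketch)

# Grid of candidate parameter values, with a flat prior over the grid.
grid = [i / 100 for i in range(300, 701)]  # candidates 3.00 .. 7.00

def log_likelihood(theta):
    # Gaussian likelihood of the data given parameter value theta.
    return sum(-((x - theta) ** 2) / (2 * sigma ** 2) for x in data)

# Posterior ∝ prior × likelihood; with a flat prior it is just the
# normalized likelihood (computed in log space for numerical stability).
logs = [log_likelihood(t) for t in grid]
m = max(logs)
post = [math.exp(l - m) for l in logs]
z = sum(post)
post = [p / z for p in post]

best = grid[post.index(max(post))]
print(f"posterior peak at {best:.2f}")  # lands at the sample mean, 5.01
```

With a flat prior and Gaussian noise, the posterior peak coincides with the sample mean; with an informative prior it would be pulled toward the prior's center, which is where the philosophical debates about Bayesian methods usually begin.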

Don’t get me wrong, I love philosophy. I’m just saying the really helpful thing would be not to declare that Bayesian statistics is wrong but to come up with a better tool to replace it.

  2. Jack says:

It would be nice to come up with a better tool — especially when the price for not doing so could run into untold trillions of taxpayers’ dollars.

  3. Joseph Smidt says:

Jack, I don’t think you have to worry about “trillions” being wasted. Science seems to come out on top in the end.

    It is true (and sometimes scientists aren’t willing to admit this) there are some skeletons in the “data analysis” closet. (Whether it is philosophical issues with Bayesian statistics or yes… even the use of the occasional fudge factor, etc…)

However, science is not a conspiracy, and I think history demonstrates that at the end of the day scientists ultimately arrive at the correct answer.

  4. SteveP says:


    Sorry, I could have been clearer there. He wasn’t criticizing Bayesian stats as such or when properly used. He was criticizing the way they had been employed.

  5. Joseph Smidt says:

    SteveP, I see. You do a great job here!

  6. PaulM says:

    I am a modeler by profession though I model markets and economies rather than climate. I would not categorize my work as scientific. The work I do borrows methods and approaches from science but is more art than science. I happen to be pretty good (meaning accurate) at what I do but that doesn’t make me any more scientific than the next guy.

I have a healthy respect for Bayesian theory and its utility, but I also firmly comprehend its limitations. The same theory that would not reject the hypothesis of anthropogenic causes of global warming also would not reject the hypothesis that ice cream consumption causes drowning deaths. Everything about Bayesian theory is expressed in terms of probabilities.
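The ice cream example is easy to demonstrate concretely. In the sketch below (all numbers invented for illustration), both series are driven by a common confounder — summer temperature — so they correlate strongly even though neither causes the other:

```python
import random
random.seed(42)

# A common confounder: daily temperature over a year (°C, invented).
temps = [random.uniform(0, 35) for _ in range(365)]

# Both outcomes respond to heat, not to each other:
ice_cream = [10 + 3 * t + random.gauss(0, 5) for t in temps]   # sales rise with heat
drownings = [1 + 0.2 * t + random.gauss(0, 1) for t in temps]  # swimming rises with heat

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(ice_cream, drownings)
print(f"correlation = {r:.2f}")  # strong positive, yet no causal link exists
```

A purely correlational method sees only the strong association; distinguishing it from causation requires conditioning on the confounder or intervening on one of the variables.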

    When considering the philosophical merits of statistical modeling in the anthropogenic driven climate change debate the philosopher most relevant is Pascal (and his wager).

  7. “…not reject the hypothesis that ice cream consumption causes drowning deaths or that honey causes SIDS.”

    Which is how they found the connection between raw honey, botulism spores and deaths in infants less than a year old.

  8. Sir Thinksalot says:

    SteveP: Did the models also model the expected warming due to the fact that we have been coming out of an ice age for the last 10,000 years and correct for that natural tendency to see how much was caused by human activity — and how much humans could really control the global weather?

  9. Jared* says:

    I admit that I don’t deal with mathematical models, but perhaps too many people look at them as attempts to divine the future. It seems to me that a better way to look at them is as tools that provide a range of plausible outcomes.

    Question: do retrospective models have the same weaknesses as prospective? As I understand it, climate modelers are unable to produce the observed warming unless they factor in anthropogenic CO2. At a gut level, that seems to have more epistemic strength, and should therefore receive more attention.

  10. SteveP says:

    Jared*, that’s exactly right. These are strong simulations, as good as the kind that fly planes and manage air traffic control (which publicly seem to be trusted a great deal more).

    Sir Thinksalot, of course. These models are critiqued far more deeply and with more rigor than the denier crowd, with their little toy pick-apart exercises and the things they bring up on their websites thinking they’ve discovered something the scientists are missing (or hiding), can possibly imagine. The modelers work hard at trying to break the models themselves, using all available data, and Jared* is right: only humans can make the models do what we see happening.

    These models are very good. But most convincing, as I’ve written here many times before, is the on-the-ground ecological science documenting changes that are happening too fast for ecosystems to respond, something we’ve seen in Earth’s history only a few times (the major extinctions, like those at the Permian and Cretaceous boundaries).

  11. Mark D. says:

    Which models correctly predicted the non-rise in global temperatures over the past decade?

  12. Owen says:

    Sir Thinksalot and Mark D:

    Everything you can come up with that the climate scientists have “missed” are ideas that actually come from those climate scientists. Fox News doesn’t have any special new sources of information. The only reason you know of any complications with the models is because climate science is rigorous in its self-criticism. The reason so many people disbelieve climate science is the same reason so many people fail to wash their hands after peeing or to wear a seat belt when driving.

    Betting against science is a losing proposition for individuals, societies, and religions.

  13. Sir Thinksalot says:

    Owen & SteveP: Why so defensive about an honest question? I suggest that none of these models predicted the last 12 years of dropping temperatures. It would be good if our theories actually matched the observed temperatures. I’m still undecided whether man-made CO2 explains the data — but the notion that ecosystems are changing faster than they have in the past is not part of a climate model and requires a lot more data about ecosystems than these climate models address.

    So once again — honest questions — given the failure of these modeling attempts in the past, is there reason to believe that these models accurately predict future climate behavior? Are there any models that were accurate enough to predict the various temperature and climate ranges over the last decade? What makes us think we have a better handle on this data now than in the past?

  14. Jared* says:

    12 years of dropping temperatures? I’m sorry, but no.

  15. SteveP says:

    Thanks, Jared* (whose site, by the way, is one of the most informative science blogs on the internet and has wonderful summaries of this issue).

    And Owen, nice call. It is interesting how the deniers use (and misuse) the data gathered and analyzed by the climate scientists, but are under no obligation to go through any rigorous peer review or to use all of the data; rather, they pick and choose what they want to highlight and badly misrepresent the realities.

    I’m not sure what modeling failures you are referring to, Sir T. Mark D. is looking at non-peer-reviewed reporting. There are websites that make this claim; I suppose if it’s on the internet we can trust it :). True, 2008 was cool, but we know exactly why. No one was surprised.

    The data on ecosystem change is not coming from models; it’s coming from on-the-ground ecological studies worldwide. One of the greatest concerns is the falling pH of the oceans, which are losing their ability to buffer CO2 as they warm. Here’s the abstract of a recent article that lays out the worries nicely:

  16. David H Bailey says:

    Where do rumors like “12 years of dropping temperatures” come from? I honestly don’t know.

    The facts say the complete opposite: Careful NASA satellite measurements confirm that the decade ending 31 Dec 2009 was the warmest decade since accurate record-keeping began:

    March 2010 was the warmest March on record (combined sea and land temperatures), and the Jan-Mar 2010 quarter was the fourth-warmest Jan-Mar period on record:

    Come to the party! It’s getting warm.

  17. PaulM says:

    Jared* & SteveP:

    The fact that existing climate models don’t reconcile without factoring human carbon production into the equation is not evidence that the models are adequate, even if the data used to develop the models is valid and trustworthy. First, that fact does not avoid the logical fallacy of cum hoc ergo propter hoc: human carbon production may merely correlate with some other variable that serves as the real catalyst for climate change. Second, the real value of a model is proven through its ability to accurately predict future states over a sustained period of time. The fact that I can build a model with specious variables that still reconciles with historical data does not mean the model will provide any value when applied to events that have yet to occur. But even a model that reconciles with history and accurately predicts the future still does not escape the logical fallacy above. To do that one has to be able to experiment with the variables and determine whether controlled fluctuations in those variables result in predictable outcomes (this is finally where actual science is employed).
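The point that a model can reconcile with history yet have no predictive value can be made with a toy example. In the sketch below (all data simulated; this has nothing to do with any real climate model), a lookup-table “model” fits history perfectly but fails completely out of sample, while a simple structural model generalizes:

```python
import random
random.seed(0)

# Simulated "history" and "future": a linear trend plus noise (invented data).
history = [(yr, 0.02 * yr + random.gauss(0, 0.05)) for yr in range(50)]
future  = [(yr, 0.02 * yr + random.gauss(0, 0.05)) for yr in range(50, 60)]

# Model A: a structural model -- here, a linear trend fit by least squares.
n = len(history)
sx  = sum(yr for yr, _ in history)
sy  = sum(t for _, t in history)
sxx = sum(yr * yr for yr, _ in history)
sxy = sum(yr * t for yr, t in history)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def model_a(yr):
    return intercept + slope * yr

# Model B: "specious variables" taken to the extreme -- a lookup table that
# reconciles with history perfectly but knows nothing about unseen years.
table = dict(history)

def model_b(yr):
    return table.get(yr, 0.0)

def rmse(model, data):
    return (sum((model(yr) - t) ** 2 for yr, t in data) / len(data)) ** 0.5

print("in-sample RMSE:    A =", rmse(model_a, history), " B =", rmse(model_b, history))
print("out-of-sample RMSE: A =", rmse(model_a, future),  " B =", rmse(model_b, future))
```

In-sample fit alone cannot distinguish the two; only out-of-sample evaluation does, which is why climate modelers test hindcasts against data that was held out of model construction.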

    The problems with climate models are many, but I’ll address two here. First, the time horizon required to test their predictive value is so long that no one working on those models today will be alive when that value is determined; in fact, the individuals who will measure that value won’t be born for another four or five generations. Second, there seems to me to be an ethical problem with the science behind the models. As mentioned above, the science component of modeling involves the controlled manipulation of the variables that drive the model. In climate models there are “natural” and “human” inputs. The natural inputs are by definition uncontrollable, which leaves the human inputs as the only avenue for model testing. A basic ethical tenet of science is that human subjects should voluntarily submit to participation in any controlled environment and should be fully informed about the experiment and its potential impact on them. Climate science violates both of these ethical requirements.

    As I said before, I make my living in the modeling biz. I think they are extremely useful when properly employed. The climate models are not mature enough to provide any real value.

  18. SteveP says:

    Paul, interesting take. Let me address some things.

    “Human carbon production may merely correlate with some other variable that serves as the real catalyst for climate change.”

    No, we have a really good handle on the greenhouse effect of carbon dioxide. The science here is solid and we need no ‘common cause’ explanation. In fact, if a group of warm-blooded aliens arrived who wanted a slightly warmer Earth, one of the quickest ways would be to take the hydrocarbons under the surface, in the form of coal and petroleum, and put them in the air. We have lots of good science on the relationship between carbon dioxide and the greenhouse effect; it has been known for over 100 years. If there is another correlate, then someone should propose it. There is nothing, though. People making up such possibilities are offering nothing more than a distraction from what we know with some clarity. And those distractions are not good science.

    Of course one can and should be skeptical of models, but as a modeler you should be familiar with the concept of robustness. Lots of independent models, from many labs and many countries, are saying the same thing. Such models should be paid attention to.

    These are not weather models; their long-term projections are well understood to be conditional on things not changing. For example, the speaker referred to in the post pointed out that if the Amazon basin vegetation disappeared (which is happening), the models would not capture such contingencies and calamities. They just show what happens on an Earth that remains as it now is.

    Your ethical claim has no bearing on this. No one is proposing an experiment on humans. A better ethical example is a doctor who says, “All the laboratory tests, from many independent labs, say you have cancer. We have 50 independent models (representing the best modeling practices we know) that say this cancer will kill you if we don’t act now. But hey, your lifestyle will likely suffer (maybe, maybe not) if we are wrong, and there’s no way to be 100% sure, this is science after all, so let’s just put off treatment until we are sure. OK?”

    That’s the proper analogy.

  19. David H Bailey says:

    “It is largely Bayesian statistics that lets us know: how old the universe is, how fast is it expanding, what’s its fundamental shape (flat), how much matter/dark energy/etc.”

    I’m not sure I agree that the latest techniques used in cosmology can be described as “Bayesian statistics.” You might look at the “design a universe” tool available at:

    As you will see, there really isn’t much wiggle room in the basic parameters that can fit the data well. Our knowledge of the age of the universe, Hubble’s constant, etc. is remarkably strong now.

    I discuss some of this in my article on the big bang and cosmology, available at:

  20. SteveP says:

    David thanks (sorry about the delay, for some reason you were sent to the mod queue and I just discovered it).

    And David runs another stellar science website. Be sure to check it out here.

  21. Jared* says:

    Wait, why is it that pumping all those tons of carbon into the atmosphere is business as usual, but trying to reduce that output constitutes an unethical experiment?

    That’s like saying dumping chemical X, which is known to have nasty effects on lab rats, into the river is fine, but trying to reduce the amount dumped is unethical since we don’t know exactly what concentration will cause birth defects, cancer, etc. in humans.

  22. Owen says:

    It’s unethical because the people suggesting it are willing to adjust their world views and possibly even their behavior based on new information. This is clearly sub-human behavior, since it is akin to, for example, my dog, who doesn’t defecate in his own sleeping area, unlike humans.

    Kidding aside though, there isn’t much difference between my dog and the natural man: both operate on such a short-sighted time horizon that destroying the continual availability of one’s preferred goods is perfectly normal behavior. The difference is my dog would eat every chicken in the yard because he is cognitively incapable of stewardship, whereas with humans this behavior is a result of capacity unused rather than capacity not possessed. It is just neurochemically more comfortable to ignore challenging information, especially if it involves evaluating probabilities and risk assessment.

    BTW, I’m glad that the scientific establishment doesn’t hold to this “if you can’t do a controlled experiment it isn’t science” bunk. If this were the standard (i.e., no quasi-experimentation, no qualitative research), we wouldn’t know a whole lot about much of anything. This tack also belies a certain naivete about exactly how bulletproof controlled experiments are. Yes, they are the most powerful single tool we have for making causal arguments, but they cannot operate independent of theory, and they do not preclude the use of other methods for establishing time precedence and nonspuriousness.

  23. Mark D. says:

    It is well documented that GISS data has been systematically distorted, and not just because of poor site selection and the urban heat island effect. The maintainers appear to be in the habit of editing out data they don’t like, discarding rural temperature stations, etc, leading to massive upward adjustments relative to previous versions of their published datasets.

    Much more reliable satellite data does show tropospheric temperature increase averaging ~0.15 C / decade over the past thirty years, but no net rise over the past ten.

  24. peckhive says:

    Mark, you may not be aware that it is a famous denier site which publishes nothing in the peer-reviewed literature and badly misinterprets what’s there. It looks legitimate and tries to come off as authoritative, but it’s just another ‘alien abduction site.’ If they have a story, let them publish it with the scientists. Of course, that would require analysis, rigor, and accurate reporting of all the data, rather than these piecemeal pieces where they point to some peer-reviewed articles but are highly selective and mask the real story. They use a bunch of jargon, mixed with selective data, to give the air of legitimacy, but there is nothing there that would pass peer review.

    That’s why I say stick to the science. Follow the peer review. It’s notoriously hard to evaluate the information reported on these kinds of ‘straight to internet’ sites. If there are things wrong, the first ones to spot them will be the scientists; they are under strong incentives to find the mistakes, not hide the truth, despite the stories of conspiracy.

  25. mfranti says:

    so so so sad this thread is dead.

    such a good conversation!

    steve, thanks for the great blog (and if this goes into mod first, you can delete it)
