Cause Versus Effect In Feedback Diagnosis

by William Yeatman on December 31, 2007

Climate Science blog

On August 8, 2007, I posted here a guest blog entry on the possibility that our observational estimates of feedbacks might be biased in the positive direction. Danny Braswell and I built a simple time-dependent energy balance model to demonstrate the effect and its possible magnitude, and submitted a paper to the Journal of Climate for publication.

The two reviewers of the manuscript (rather uncharacteristically) signed their names to their reviews. To my surprise, both of them (Isaac Held and Piers Forster) agreed that we had raised a legitimate issue. Both reviewers suggested changes to the (conditionally accepted) manuscript, and they even took the time to develop their own simple models to demonstrate the effect to themselves.

Of special note is the intellectual honesty shown by Piers Forster. Our paper directly challenges an assumption made by Forster in his 2005 J. Climate paper, which provided a nice theoretical treatment of feedback diagnosis from observational data. Forster admitted in his review that this part of the analysis was in error, and he encouraged us to get the paper published so that others could be made aware of the issue, too.

And the fundamental issue can be demonstrated with this simple example: When we analyze interannual variations in, say, surface temperature and clouds, and we diagnose what we believe to be a positive feedback (say, low cloud coverage decreasing with increasing surface temperature), we are implicitly assuming that the surface temperature change caused the cloud change — and not the other way around.
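To make that assumption explicit, here is a minimal sketch (in Python, with hypothetical variable names and made-up numbers; this is not code from our paper) of the conventional diagnostic, which simply regresses a radiative flux anomaly on a temperature anomaly and reads the slope as a feedback parameter:

```python
# Minimal sketch of the conventional feedback diagnosis (hypothetical names,
# illustrative numbers only -- not code from the paper discussed above).
import numpy as np

def diagnose_feedback(temp_anom, flux_anom):
    """Regression slope of flux anomaly on temperature anomaly [W m-2 K-1]."""
    slope, _intercept = np.polyfit(temp_anom, flux_anom, 1)
    # Interpreting 'slope' as a feedback implicitly assumes the causality runs
    # temp_anom -> flux_anom.  If part of flux_anom instead *caused* temp_anom
    # (non-feedback cloud variations, for example), the slope is biased.
    return slope

# Example use with made-up anomalies (purely illustrative numbers):
t = np.array([0.1, -0.2, 0.3, 0.0, -0.1, 0.2])   # temperature anomalies [K]
r = np.array([0.5, -1.1, 1.6, 0.1, -0.4, 1.0])   # flux anomalies [W m-2]
print(diagnose_feedback(t, r), "W m-2 K-1")
```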

This issue is critical because, to the extent that non-feedback sources of cloud variability cause surface temperature change, the resulting cloud-temperature relationship will always look like a positive feedback under the conventional diagnostic approach. It is even possible to diagnose a positive feedback when, in fact, a negative feedback really exists.
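Here is a minimal sketch of the kind of simple time-dependent energy balance model Danny and I used, written in Python. It is not our actual code, and every parameter value (mixed-layer depth, noise amplitude and redness, true feedback strength) is an illustrative assumption. Random non-feedback cloud forcing drives the temperature, a known temperature-restoring (negative) feedback is built in, and the conventional regression diagnosis is then applied to the resulting "observed" fluxes:

```python
# Illustrative sketch of a simple time-dependent energy balance model with
# random non-feedback cloud forcing.  All parameter values are assumptions
# chosen for demonstration, not values from the paper.
import numpy as np

DT = 86400.0                      # time step: one day [s]
N_DAYS = 20 * 365                 # 20 years of daily values
CP = 1025.0 * 4186.0 * 50.0       # 50 m ocean mixed layer heat capacity [J m-2 K-1]
LAM_TRUE = 6.0                    # true feedback parameter [W m-2 K-1]; a large value
                                  # means strongly temperature-restoring (negative feedback)

def run_once(rng):
    """Integrate Cp*dT/dt = F(t) - lam*T(t), then diagnose lam by regression."""
    forcing = np.zeros(N_DAYS)    # non-feedback cloud forcing noise [W m-2]
    temp = np.zeros(N_DAYS)       # surface temperature anomaly [K]
    for i in range(1, N_DAYS):
        forcing[i] = 0.9 * forcing[i - 1] + rng.normal(0.0, 2.0)   # red noise
        temp[i] = temp[i - 1] + DT / CP * (forcing[i - 1] - LAM_TRUE * temp[i - 1])
    flux_out = LAM_TRUE * temp - forcing   # net outgoing flux anomaly a satellite would see
    slope, _ = np.polyfit(temp, flux_out, 1)
    return slope                  # conventionally interpreted as the feedback parameter

rng = np.random.default_rng(0)
diagnosed = [run_once(rng) for _ in range(10)]
print(f"true feedback parameter:   {LAM_TRUE:.1f} W m-2 K-1")
print(f"diagnosed (mean, 10 runs): {np.mean(diagnosed):.1f} W m-2 K-1")
# The diagnosed slopes cluster far below the feedback actually built into the
# model, because the cloud noise that helped create the temperature variability
# looks, in the regression, like a feedback response to temperature.  A smaller
# (or negative) slope is exactly what a positive feedback would produce.
```

Averaging the diagnosis over several noise realizations simply makes the point less dependent on any single random sequence; the bias is in one direction only.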

I hope you can see from this that the separation of cause from effect in the climate system is absolutely critical. The widespread use of seasonally-averaged or yearly-averaged quantities for climate model validation is NOT sufficient to validate model feedbacks! This is because the time averaging actually destroys most, if not all, evidence (e.g. time lags) of what caused the observed relationship in the first place. Since both feedbacks and non-feedback forcings will typically be intermingled in real climate data, it is not a trivial effort to determine the relative sizes of each.
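As a small follow-on to the sketch above (same illustrative model, same assumed parameter values), here is what time averaging does to the evidence. With daily output you can at least ask how the flux anomalies relate to temperature at leads and lags of days to weeks, the kind of signature that helps distinguish forcing from feedback; once the series are reduced to seasonal means, the finest lag you can even examine is one season:

```python
# Follow-on sketch (same illustrative model and assumed parameters as above)
# showing how time averaging removes lead/lag information.
import numpy as np

DT, CP, LAM = 86400.0, 1025.0 * 4186.0 * 50.0, 6.0
N_DAYS = 20 * 365
rng = np.random.default_rng(1)

forcing = np.zeros(N_DAYS)
temp = np.zeros(N_DAYS)
for i in range(1, N_DAYS):
    forcing[i] = 0.9 * forcing[i - 1] + rng.normal(0.0, 2.0)
    temp[i] = temp[i - 1] + DT / CP * (forcing[i - 1] - LAM * temp[i - 1])
flux_out = LAM * temp - forcing           # what a satellite would observe

def lead_corr(x, y, lag_days):
    """Correlation of x(t) with y(t + lag_days): how x relates to later y."""
    return np.corrcoef(x[:len(x) - lag_days], y[lag_days:])[0, 1]

# Daily resolution: the lead/lag structure of flux versus temperature can be examined
for lag in (0, 10, 30, 60):
    print(f"corr(flux(t), T(t+{lag:2d} d)) = {lead_corr(flux_out, temp, lag):+.2f}")

# Seasonal (90-day) means: only ~80 values remain, and lags are resolvable only
# in 90-day jumps, so the daily lead/lag information above is averaged away.
n_seas = N_DAYS // 90
t_seas = temp[:n_seas * 90].reshape(n_seas, 90).mean(axis=1)
f_seas = flux_out[:n_seas * 90].reshape(n_seas, 90).mean(axis=1)
print("seasonal-mean points available:", n_seas)
```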

While we used the example of random daily low cloud variations over the ocean in our simple model (which were then combined with specified negative or positive cloud feedbacks), the same issue can be raised about any kind of feedback.

Notice that the potential positive bias in model feedbacks can, in some sense, be attributed to a lack of model “complexity” compared to the real climate system. By “complexity” here I mean cloud variability which is not simply the result of a cloud feedback on surface temperature. This lack of complexity in the model then requires the model to have positive feedback built into it (explicitly or implicitly) in order for the model to agree with what looks like positive feedback in the observations.

Also note that the non-feedback cloud variability can even be caused by…(gasp)…the cloud feedback itself!

Let’s say there is a weak negative cloud feedback in nature: warm SST pulses cause corresponding increases in low cloud coverage. But superimposed upon those feedback-driven cloud pulses is random cloud noise. That cloud noise will then cause some amount of SST variability of its own, which looks like positive cloud feedback even though the real cloud feedback is negative.

I don’t think I can over-emphasize the potential importance of this issue. It has been largely ignored, although Bill Rossow has been preaching about this same issue for years, phrasing it in terms of the potential nonlinearity of, and interactions between, feedbacks. Similarly, Stephens’ 2005 J. Climate review paper on cloud feedbacks spent quite a bit of time emphasizing the problems with conventional cloud feedback diagnosis.

I don’t have an answer to the question of how to separate out cause and effect quantitatively from observations. But I do know that any progress will depend on high time resolution data, rather than monthly, seasonal, or annual averaging. (For instance, our August 9, 2007 GRL paper on tropical intraseasonal cloud variability showed a very strong negative cloud “feedback” signal.)

Until that progress is made, I consider the existence of positive cloud feedback in nature to be more a matter of faith than of science.

