Look at this piece of research by CarbonPlan, a registered non-profit public benefit corporation in California. Their new research shows that California’s climate policy created up to 39 million carbon credits that aren’t achieving real carbon savings. Rather than improving forest management to store additional carbon, California’s offsets program creates incentives to generate offset credits that do not reflect real climate benefits.
Carbon offsets are widely used by individuals, corporations, and governments to mitigate their greenhouse gas emissions on the assumption that offsets reflect equivalent climate benefits achieved elsewhere. These climate-equivalence claims depend on offsets providing “additional” climate benefits beyond what would have happened, counterfactually, without the offsets project. Here, we evaluate the design of California’s prominent forest carbon offsets program and demonstrate that its climate-equivalence claims fall far short on the basis of directly observable evidence. By design, California’s program awards large volumes of offset credits to forest projects with carbon stocks that exceed regional averages. This paradigm allows for adverse selection, which could occur if project developers preferentially select forests that are ecologically distinct from unrepresentative regional averages. By digitizing and analyzing comprehensive offset project records alongside detailed forest inventory data, we provide direct evidence that comparing projects against coarse regional carbon averages has led to systematic over-crediting of 30.0 million tCO2e (90% CI: 20.5 to 38.6 million tCO2e) or 29.4% of the credits we analyzed (90% CI: 20.1 to 37.8%). These excess credits are worth an estimated $410 million (90% CI: $280 to $528 million) at recent market prices. Rather than improve forest management to store additional carbon, California’s offsets program creates incentives to generate offset credits that do not reflect real climate benefits.

Badgley, G., Freeman, J., Hamman, J.J., Haya, B., Trugman, A.T., Anderegg, W.R.L., and Cullenward, D. (2021). Systematic over-crediting in California’s forest carbon offsets program. bioRxiv 2021.04.28.441870.
The article on the CarbonPlan website provides an overview of how they identified crediting errors in California’s offsets program. But if you want a deeper dive into their methods and analysis, you can read their preprint (abstract quoted above). To better understand the context and implications, you can also read a story by Lisa Song and James Temple published by ProPublica, an independent, nonprofit newsroom that produces investigative journalism with moral force. (The ProPublica story is open and can be republished.) You can also browse an interactive online map (featured image) of the projects they analyzed, or download the open source data and code that underlie their analysis.
Covid has forced us to reconsider how the world does science. To be sure, nothing in this approach is new, but it is nice to see another initiative that puts it all together in a very conscious way and commits to this (new) way of doing science.
In order to bring this discussion into the open right away, we are making all of our materials fully public and reproducible: the digitized project database, all additional data used throughout our analysis, code to generate figures from those data, and the complete code base used for our analysis. At any point now or in the future, the entire community is welcome to review our work on its merits. We look forward to further improving our analysis based on the criticisms and collaborations that come from open science. If you have feedback, please open an issue on our GitHub repository.
Research journals and peer review have been the pillars of a very successful model, but they must now adapt or transform: science is, to a large extent, stuck in a way of doing things that does not scale up and that, like many other successful models, has become a myth.
Am I against peer review? By no means; I am fully in favour. Does that mean we have to accept peer review as a barrier, or as an excuse for the bureaucratization of science? No, it doesn’t. Can we have the best of both worlds, open science (and self-publishing) alongside rigorous monitoring and debate? Yes, we can. Of course! This is the same debate we had about open source. There is no better way to fight pollution, intellectual pollution included, than more eyeballs!
Let me finish by recalling an often-told and tender story.
How many of Einstein’s 300-plus papers were peer reviewed? Very likely only one: a paper authored with Nathan Rosen and submitted to the journal Physical Review in 1936. The journal had only recently introduced its peer review system, and the paper came back with a (correct, as it turned out) negative report. As Michael Nielsen writes, Einstein’s reply to the editor is amusing to modern scientific sensibilities:
We (Mr. Rosen and I) had sent you our manuscript for publication and had not authorized you to show it to specialists before it is printed. I see no reason to address the in any case erroneous comments of your anonymous expert. On the basis of this incident I prefer to publish the paper elsewhere.
P.S. Mr. Rosen, who has left for the Soviet Union, has authorized me to represent him in this matter.
The times were “a-changin’” then, and they are changing now. Again.
In case you doubt it, you may ask Mr. Einstein!
You seem to have forgotten that the main reason for publishing is not the “advancement of science” but rather “the advancement of an academic career”. For that purpose, some type of control is needed.
Well, ok, publish or perish may be there, as the parasitic byproduct of a complex system (science!), but to say that the main reason for publishing is the academic career is a bit far-fetched, isn’t it? On the other hand, from the point of view of scholars, more options and more openness are on their side… Peer review and other means of quality control can be maintained and improved, but what matters to me is that they can coexist with open science. The problem, as always, is business models. In this case, mainly journals.
Perhaps my comment was too cynical. A lot of people do it for altruistic reasons, others for political reasons, or even self-publicity. Certainly, nobody (?) wants to write nonsensical contributions, and most have a reputation to care for (Einstein).
Still, the alternative to peer review could be the number of “likes” on an article, or perhaps the number of stars given by mass reviewers, as if a scientific article were a hotel or a shop. Maybe this is better, but I am not sure.
I’m not against peer review, and I think it will continue to be essential. In fact, some of the initiatives around open science (open data sets, etc.) aim to facilitate replication and revision. It’s only about decoupling publication from peer review.
I enjoyed reading your post.