By: A.J. (Ashok) Kumar, VP, Sustainability Sciences at Indigo Ag
Joanie Abbott, Associate Manager, Sustainability Solutions at Indigo Ag
I’m back! Sorry it took a bit longer than usual between my last post and this one; I’ve been on the road talking about soil health, regen ag, and all kinds of fun things. I want to take a minute to introduce my colleague and accountability buddy, Joanie Abbott, who is working with me to keep this series buzzing. For simplicity, we’ll continue to write in the first person singular, but know that Joanie is due credit here too! In my first post, I discussed how we CAN measure the impacts of regenerative agriculture on soil organic carbon at scale.
But what does it mean to measure impacts? It’s going to vary a bit by what kind of impact you care about. In the voluntary carbon market, we’re looking to measure the impact of what was done in the project compared to what would have happened without the project.
This leads me to our second misconception: that measuring the change in soil carbon over time is the same as measuring impact.
Why? Because the difference between two points in time is a measure of change, not necessarily impact.
This approach—measure a field now, measure it again in five years, and take the difference as the farmer’s impact—has a fatal flaw: You cannot distinguish the farm management impact from the weather.
Practices (like cover cropping or reduced tillage) influence soil carbon over time. But so does weather.
If a field sees a huge carbon increase, is that due to the farmer's new practices, or a run of unusually wet, cool years? If it drops, is that bad farming, or a severe drought?
You can’t separate those effects when you only have two data points. This focus on temporal accounting has already proven to be a recipe for disaster and over-crediting in the forestry world.
We don't have to guess whether this works in the agricultural soil carbon world. It doesn't.
Take the Australian carbon market, which uses a temporal approach. Research shows that this methodology has led to over-crediting (1). Why? “Namely, carbon credits are being awarded for changes associated with seasonal conditions (changes that would have happened anyway) rather than human actions” (2). Nor were farmers penalized for the negative swings. As a result, much of the credited carbon wasn't due to management, which drags down the integrity of the whole market.
Measuring changes over time is incredibly important for inventory accounting, but it’s not what carbon credits are trying to measure. Buyers of carbon credits want to know that their financing has helped cause the change.
What we truly care about is the Difference in Differences.
A difference in differences compares the change over time in the treated field (the farmer's new practice) against the change over time in a counterfactual scenario: what would have happened in that same year, with that same weather, on the same soil type, without the new practice. It’s impossible to measure the thing that didn’t happen, but we can measure proxies.
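To make the arithmetic concrete, here is a toy sketch (all field values and effect sizes are invented for illustration) showing why a two-point change over time credits the weather, while a difference in differences cancels it out:

```python
# Toy difference-in-differences sketch. All numbers are made up for illustration;
# soil organic carbon stocks are in tonnes C per hectare.

def difference_in_differences(treated_before, treated_after,
                              counterfactual_before, counterfactual_after):
    """Impact = change in the treated field minus change in the counterfactual."""
    treated_change = treated_after - treated_before
    counterfactual_change = counterfactual_after - counterfactual_before
    return treated_change - counterfactual_change

# Suppose a run of wet, cool years adds 1.0 t C/ha to every field in the region,
# and the new practice adds a further 0.5 t C/ha on the treated field.
weather_effect = 1.0
practice_effect = 0.5

treated_before = 60.0
treated_after = treated_before + weather_effect + practice_effect   # 61.5
counterfactual_before = 60.0
counterfactual_after = counterfactual_before + weather_effect       # 61.0

# A two-point "change over time" credits the weather too:
print(treated_after - treated_before)  # 1.5 -- overstated

# The difference in differences isolates the practice effect:
print(difference_in_differences(treated_before, treated_after,
                                counterfactual_before, counterfactual_after))  # 0.5
```

Because the weather term appears in both changes, it subtracts out, and only the practice effect survives. The hard part, of course, is not the subtraction; it's building a credible proxy for the counterfactual.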
There are at least three ways to do that today, among them modeled counterfactuals and comparative baselines.
Each of these approaches has strengths and weaknesses. Where there is strong existing literature, a modeled counterfactual can give high-quality answers at scale. When a practice or product is new, or a cropping system is understudied, a comparative baseline approach can provide a way forward while waiting for controlled experiments to complete and be published. In any case, it’s a good idea to look for ways to test your assumptions and the validity of the difference-in-differences approach you are using.
The soil sample is not the truth; it's a data point. In the soil space, folks often talk about a soil sample as “ground truth,” but that depends on what you think you are measuring. Just like a single glucose reading doesn't diagnose diabetes, a single point-in-time measurement at a single location doesn't prove management impact (after a decade working in the health space, I can’t resist the health analogies!). Even two measurements over time don’t prove much. How a soil sample is actually measured also raises many questions (in situ measurement vs. extracted cores, soil preparation, analysis method, etc.), but perhaps I’ll save that for another day.
The point is that no measure is perfect, but imperfect measures can still be useful when combined in the right way, with their strengths and weaknesses (including their uncertainty) accounted for. We need the full context: the practice history, the validated practice change, and the difference in differences to help us make a causal link between a change and an impact.
(1) Mitchell, E., Takeda, N., Grace, L., Grace, P., Day, K., Ahmadi, S., … Rowlings, D. (2024). Making soil carbon credits work for climate change mitigation. Carbon Management, 15(1). https://doi.org/10.1080/17583004.2024.2430780
(2) “Here’s how to fix Australia’s approach to soil carbon credits so they really count towards our climate goals.” University of Melbourne. https://findanexpert.unimelb.edu.au/news/67192-here's-how-to-fix-australia's-approach-to-soil-carbon-credits-so-they-really-count-towards-our-climate-goals