
A new agricultural electricity use forecast method holds promise for water use management

Agricultural electricity demand is highly sensitive to water availability. Under “normal” conditions, the State Water Project (SWP) and Central Valley Project (CVP), as well as other surface water supplies, are key sources of irrigation water for many California farmers. Under dry conditions, these water sources can be sharply curtailed or even eliminated at the same time that irrigation requirements are heightened. Farmers then must rely more heavily on groundwater, which takes more energy than surface water because it must be lifted from considerable depth.

Over extended droughts, like that of 2012 to 2016, groundwater levels decline, and water must be pumped from ever greater depths, requiring even more energy to meet crops’ needs. As a result, even as land is fallowed in response to water scarcity, significantly more energy is required to water remaining crops and livestock. Much less pumping is necessary in years with ample surface water supply, as rivers rise, soils become saturated, and aquifers recharge, raising groundwater levels.
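To make the lift-energy relationship concrete, here is a rough sketch of the pumping arithmetic (the lift-energy constant is standard engineering practice; the lift depths and pump efficiency below are illustrative, not drawn from PG&E data):

```python
# Lifting one acre-foot of water one foot takes roughly 1.024 kWh at 100%
# efficiency; real pumps are far less efficient. Values below are illustrative.
def pumping_energy_kwh_per_af(lift_ft: float, pump_efficiency: float = 0.7) -> float:
    """Energy (kWh) needed to pump one acre-foot from a given lift depth."""
    return 1.024 * lift_ft / pump_efficiency

print(pumping_energy_kwh_per_af(100))  # ~146 kWh/AF
print(pumping_energy_kwh_per_af(200))  # ~293 kWh/AF: twice the lift, twice the energy
```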

The surface-groundwater dynamic results in significant variations in year-to-year agricultural electricity sales. Yet, PG&E has assigned the agricultural customer class a revenue responsibility based on the assumption that “normal” water conditions will prevail every year, without accounting for how inevitable variations from these circumstances will affect rates and revenues for agricultural and other customers.

This assumption results in an imbalance in revenue collection from the agricultural class that does not correct itself even over long time periods, harming agricultural customers most in drought years, when they can least afford it. Analysis presented by M.Cubed on behalf of the Agricultural Energy Consumers Association (AECA) in the 2017 PG&E General Rate Case (GRC) demonstrated that overcollections can be expected to exceed $170 million over two years of typical drought conditions, with an expected overcollection of $34 million in any two-year period. This collection imbalance also increases rate instability for other customer classes.

Figure-1 compares the difference between the forecasted loads used to set rates in the annual ERRA Forecast proceedings (and in GRC Phase 2 every three years) and actual recorded sales, for both the agricultural class and the system as a whole, from 1995 to 2019. Notably, the largest system-wide forecasting errors were a sales overestimate of 4.5% in 2000 and a shortfall of 3.7% in 2019, while agricultural misforecasts ranged from a 39.2% under-forecast in 2013, in the midst of an extended drought, to an 18.2% over-forecast in 1998, one of the wettest years on record. Load volatility in the agricultural sector is extreme in comparison to other customer classes.

Figure-2 shows the cumulative error caused by inadequate treatment of agricultural load volatility over the last 25 years. An unbiased forecasting approach would reflect a cumulative error of zero over time. The error in PG&E’s system-wide forecast has largely balanced out, even though the utility’s load pattern has shifted from significant growth over the first 10 years to stagnation and even decline. PG&E apparently has been able to adapt its forecasting methods for other classes relatively well over time.

The accumulated error for agricultural sales forecasting tells a different story. Over a quarter century the cumulative error reached 182%, nearly twice the annual sales for the Agricultural class. This cumulative error has consequences for the relative share of revenue collected from agricultural customers compared to other customers, with growers significantly overpaying during the period.
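For readers who want the mechanics, here is a minimal sketch of the cumulative-error diagnostic behind Figure-2, using invented numbers rather than the recorded PG&E series:

```python
import numpy as np

# Invented sales data for illustration; the recorded series runs 1995-2019.
actual   = np.array([8.0, 9.5, 7.2, 10.1, 8.8])  # GWh, what was recorded
forecast = np.array([8.5, 8.0, 8.5, 8.3, 8.4])   # "normal year" assumption

annual_error_pct = (actual - forecast) / forecast * 100
cumulative_error_pct = annual_error_pct.cumsum()

# An unbiased forecast should hover near zero cumulatively; a forecast that
# assumes normal water conditions every year accumulates one-sided error.
print(cumulative_error_pct)
```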

Agricultural load forecasting can be revised to better address how variations in water supply availability drive agricultural load. Most importantly, the final forecast should be constructed from a weighted average of forecasted loads under normal, wet and dry conditions. The forecast of agricultural accounts also must be revamped to include these elements. In addition, the load forecast should include the influence of rates and a publicly available data source on agricultural income such as that provided by the USDA’s Economic Research Service.

The Forecast Model Can Use An Additional Drought Indicator and Forecasted Agricultural Rates to Improve Its Forecast Accuracy

The more direct relationship determining agricultural energy needs is between surface water allocations from the state and federal water projects and the need to pump groundwater when adequate SWP and CVP supplies are unavailable. The SWP and CVP are critical to California agriculture because little precipitation falls during the state’s Mediterranean-climate summer, and snowmelt runoff must be stored and delivered via aqueducts and canals. Surface water availability, therefore, is the primary determinant of agricultural energy use, while precipitation and related factors, such as drought, are secondary causes in that they are only partially responsible for surface water availability. Other factors, such as state and federal fishery protections, substantially restrict water availability and project pumping operations, greatly limiting surface water deliveries to San Joaquin Valley farms.

We found that the Palmer Drought Severity Index (PDSI) is highly correlated with contract allocations for deliveries through the SWP and CVP, reaching 0.78 for both, as shown in Figure-3. (Note that the correlation between the current and lagged PDSI is only 0.34, which indicates that both variables can be included in the regression model.) Of even greater interest and relevance to PG&E’s forecasting approach, the correlation between the previous year’s PDSI and project water deliveries is almost as strong, 0.56 for the SWP and 0.53 for the CVP. This relationship can also be seen in Figure-3, as the PDSI line appears to lead changes in project water deliveries. The strength of this lagged indicator is not surprising, as both the California Department of Water Resources and the U.S. Bureau of Reclamation account for remaining storage and for streamflow that is a function of soil moisture and Sierra aquifers.
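As a sketch of the regression structure this implies (the variable and file names are mine for illustration; AECA’s actual specification appears in its testimony):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical input: year, sales_gwh, pdsi, ag_rate, farm_income columns.
df = pd.read_csv("ag_sales.csv")
df["pdsi_lag1"] = df["pdsi"].shift(1)   # PDSI of the just-concluded water year
df = df.dropna()

# Current and lagged PDSI are only weakly correlated (0.34), so both can
# enter the model without serious collinearity.
X = sm.add_constant(df[["pdsi", "pdsi_lag1", "ag_rate", "farm_income"]])
model = sm.OLS(df["sales_gwh"], X).fit()
print(model.summary())  # expect negative PDSI coefficients: wetter years, less pumping
```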

Further, comparing the inverse of water delivery allocations (i.e., the undelivered contract shares) to annual agricultural sales, we can see in Figure-4 how agricultural load has risen since 1995 as the contract allocations delivered have fallen (i.e., the undelivered amount has risen). The decline in contract allocations is only partially related to the amount of precipitation and runoff available. In 2017, among the wettest years on record, SWP contractors received only 85% of their allocations, while the SWP delivered 100% every year from 1996 to 1999. The CVP has reached a 100% allocation only once since 2006, while it regularly delivered above 90% prior to 2000. Changes in contract allocations dictated by regulatory actions are clearly a strong driver of the growth in agricultural pumping loads, though the ongoing drought also appears to be key. The combination of the forecasted PDSI and the lagged PDSI of the just-concluded water year can be used to capture this relationship.

Finally, a “normal” water year rarely occurs; it has occurred in only 20% of the last 40 years. Over time, the best representation of both surface water availability and the electrical load dependent on it is a weighted average across the probabilities of different water year conditions.
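A minimal sketch of that weighted-average construction, with illustrative probabilities and loads (only the 20% figure for “normal” years comes from the analysis above):

```python
# Water-year probabilities and class loads are invented for illustration.
scenarios = {
    "wet":    {"prob": 0.40, "load_gwh": 3800},
    "normal": {"prob": 0.20, "load_gwh": 4300},  # "normal" holds only ~20% of years
    "dry":    {"prob": 0.40, "load_gwh": 5100},
}

expected_load = sum(s["prob"] * s["load_gwh"] for s in scenarios.values())
print(f"Weighted forecast: {expected_load:.0f} GWh")
# 4,420 GWh here, versus 4,300 GWh if "normal" conditions are simply assumed.
```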

Proposed Revised Agricultural Forecast

We prepared a new agricultural load forecast for 2021 implementing the changes recommended herein. In addition, the forecasted average agricultural rate was added as an explanatory variable, and it proved statistically significant. The account forecast was developed using most of the same variables as the sales forecast, reflecting the similar drivers of sales and accounts.

Figure-5 compares the performance of AECA’s proposed model to PG&E’s model filed in its 2021 General Rate Case. The backcasted values from the AECA model have a correlation coefficient of 0.973 with recorded values,[1] while PG&E’s sales forecast methodology has a correlation of only 0.742.[2] Unlike PG&E’s model, almost all of the parameter estimates are statistically significant at the 99% confidence level, with only summer and fall rainfall being insignificant.[3]

AECA’s accounts forecast model reflects similar performance, with a correlation of 0.976. The backcast and recorded data are compared in Figure-6. For water managers, this chart shows how new groundwater wells are driven by a combination of factors such as water conditions and electricity prices.

Advanced power system modeling need not mean more complex modeling

A recent article by E3 and Form Energy in Utility Dive calls for more granular temporal modeling of the electric power system to better capture the constraints of a fully renewable portfolio and the requirements for supporting technologies such as storage. The authors have identified the correct problem: most current models use a “typical week” of loads that is an average of historic conditions and probabilistic representations of unit availability. This approach fails to capture the “tail” conditions where renewables and currently available storage are unlikely to be sufficient.

But the answer is not a full-blown, hour-by-hour model of the entire year spanning the many permutations of possible conditions. These production simulation models already take too long to run a single scenario due to the complexity of this giant “transmission machine.” Adding the required uncertainty would cause these models to run “in real time,” as some modelers describe it.

Instead, a separate analysis should first identify the conditions under which renewables plus current-technology storage are unlikely to meet demand: droughts that limit hydropower, extreme weather events, and extended weather patterns that suppress renewable production. These conditions can then be input into the current models to assess how the system responds.
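A sketch of that screening step, assuming hourly load and renewable production data are at hand (the column names, file name, and storage threshold are placeholders, not a real dataset):

```python
import pandas as pd

# Hypothetical hourly file with timestamp, load_mw, wind_mw, solar_mw columns.
hourly = pd.read_csv("hourly_system_data.csv", parse_dates=["timestamp"])
hourly["net_load"] = hourly["load_mw"] - hourly["wind_mw"] - hourly["solar_mw"]

# Flag days whose average net load exceeds what storage could plausibly carry;
# runs of consecutive flagged days (hydro droughts, wind lulls) matter most.
daily = hourly.set_index("timestamp")["net_load"].resample("D").mean()
storage_capability_mw = 5000  # placeholder for deliverable storage capability
tail_days = daily[daily > storage_capability_mw]
print(tail_days)  # feed these periods, not all 8,760 hours, into the big model
```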

The two most important fixes, which have always been problems in these models, concern energy-limited resources and unit commitment algorithms. Both are complex problems, and these models have not done well at scheduling seasonal hydropower pondage storage or at deciding which units to commit several days ahead of high demand. (These problems are also why relying solely on hourly bulk power pricing doesn’t give an accurate measure of the true market value of a resource.) But focusing on these two problems is much easier than trying to incorporate the full range of uncertainty across all 8,760 hours of each year for at least a decade into the future.
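To illustrate the energy-limited-resource problem, a toy example: a hydro unit with a fixed weekly energy budget should run in the highest-value hours, a decision hourly production models often approximate poorly (all numbers below are invented):

```python
import numpy as np

prices = np.random.default_rng(0).uniform(20, 120, size=168)  # $/MWh for one week
capacity_mw = 100
weekly_energy_mwh = 4000  # pondage limit: only 40 hours at full output

# Greedy schedule: discharge in the 40 priciest hours of the week.
best_hours = np.argsort(prices)[::-1][: int(weekly_energy_mwh / capacity_mw)]
schedule = np.zeros(168)
schedule[best_hours] = capacity_mw
print(f"Energy value captured: ${(schedule * prices).sum():,.0f}")
```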

We should not confuse precision with accuracy. The current models can be quite precise on specific metrics, such as unit efficiency at different load points, but they can be inaccurate because they don’t capture the effect of load and fuel price variations. We should not be trying to achieve spurious precision through more granular modeling; we should be focusing on accuracy in the narrow situations that matter.

Calculating the risk reduction benefits of closing Germany’s nuclear plants

Max Auffhammer at the Energy Institute at Haas posted a discussion of this recent paper reviewing the benefits and costs of the closure of much of the German nuclear fleet after the Fukushima accident in 2011.

Reading the paper quickly, I don’t see how the risk of a nuclear accident is computed; it appears the value per MWh was taken from a different paper. So I did a quick back-of-the-envelope calculation of the benefit of avoiding the consequences of an accident. This paper estimates the risk of an accident at once every 3,704 reactor-operating years (which is very close to a calculation I made a few years ago), and there are other estimates showing significant risk as well. For 10 German reactors, this translates to 0.27% per year.

However, this is not a one-off risk, but rather a cumulative risk over time, as noted in the referenced study. This is akin to the seismic risk on the Hayward Fault that threatens the Delta levees, estimated at 62% over the next 30 years. For the German plants, the cumulative probability over 30 years is 8.4%. Using the Fukushima damages noted in the paper, this represents $25 to $63 billion. Assuming an average annual output of 7,884 GWh, the benefit from risk reduction ranges from $11 to $27 per MWh.
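Spelling out the arithmetic (my reconstruction of the steps, with the compounding assumption made explicit; small differences from the figures above come down to rounding):

```python
p_annual = 10 / 3704               # 10 reactors x 1 accident per 3,704 reactor-years
p_30yr = 1 - (1 - p_annual) ** 30  # cumulative risk, assuming independent years
print(f"{p_annual:.2%}/yr -> {p_30yr:.1%} over 30 years")  # ~0.27%/yr -> ~8%

total_mwh = 10 * 7_884_000 * 30    # fleet output: 10 reactors x 7,884 GWh/yr x 30 yr
for damages in (25e9, 63e9):       # avoided damages, low and high cases
    print(f"${damages / total_mwh:.0f}/MWh")  # ~$11 and ~$27 per MWh
```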

The paper appears to make a further error in using only the short-run nuclear fuel cost of $10 per MWh as the avoided cost created by closing the plants. Additional avoided costs include capital additions that accrue with refueling, plant labor, and O&M. For Diablo Canyon, I calculated in PG&E’s 2019 ERRA proceeding that these costs were close to an additional $20 per MWh. I don’t know the values for the German plants, but clearly they should be significant.

Nuclear vs. storage: which is in our future?

Two articles with contrasting views of the future showed up in Utility Dive this week. The first was an opinion piece by an MIT professor, referencing a study he coauthored, that compares the costs of an electricity network in which renewables supply more than 40% of generation with the costs of relying on advanced nuclear power. However, the report’s analysis relied on two key assumptions:

  1. Current battery storage costs are about $300/kWh and will remain static into the future.
  2. Current nuclear technology costs about $76 per MWh and advanced nuclear technology can achieve costs of $50 per MWh.

The second article immediately refuted the first assumption in the MIT study. A report from BloombergNEF found that average battery storage prices fell to $156/kWh in 2019, and projected further decreases to $100/kWh by 2024.

The reason this price drop is so important is that, as the MIT study pointed out, renewables will produce excess power at certain times and underproduce during other peak periods. MIT assumes that system operators will have to curtail renewable generation during low-load periods and run gas plants to fill in at the peaks. (MIT pointed to California curtailing about 190 GWh in April. However, that added only 0.1% to the CAISO’s total generation cost.) But if storage is so cheap, then along with inexpensive solar and wind, additional renewable capacity can be built to charge storage for the early evening peaks. This could free us from having to plan around system peak periods and let us focus largely on energy production.
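Rough arithmetic on why the price decline matters, levelizing the battery’s capital cost over the energy it shifts (the cycle life, efficiency, and solar cost are my assumptions, not figures from either article):

```python
capital_per_kwh = 156      # 2019 BloombergNEF average cited above, $/kWh
cycles = 365 * 10          # assume daily cycling over a ten-year life
round_trip_eff = 0.85      # assumed round-trip efficiency

cost_per_mwh_shifted = capital_per_kwh * 1000 / (cycles * round_trip_eff)
print(f"~${cost_per_mwh_shifted:.0f}/MWh shifted")  # ~$50/MWh

# Pair that with cheap solar (assume ~$30/MWh) and stored evening energy
# lands near $80/MWh, in the range of the actual nuclear costs discussed next.
```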

MIT’s second assumption is not validated by recent experience. As I posted earlier, the soon-to-be-completed Vogtle nuclear plant will cost Georgia ratepayers about $100 per MWh, more than 30% above the assumption used by MIT. PG&E withdrew its relicensing request for Diablo Canyon because the utility projected the cost to be $100 to $120 per MWh. Another recent study found that nuclear costs worldwide exceed $100/MWh and that plants take an average of a decade to finish.

Another group at MIT issued an earlier report intended to revive interest in nuclear power. I’m not sure why MIT is so focused on this issue while continuing to rely on data and projections that are clearly outdated or wrong, but it does have one of the leading departments in nuclear science and engineering. It’s sad to see such a prestigious institution allowing its economic self-interest to cloud its vision of the future.

What do you see in the future of relying on renewables? Is it economically feasible to build excess renewable capacity that can supply enough storage to run the system the rest of the day? How would the costs of this system compare to nuclear power at actual current costs? Will advanced nuclear power drop costs by 50%? Let us know your thoughts and add any useful references.

Our responsibility to our children


Greta Thunberg’s speech at the UN has sparked a discussion about our deeper responsibilities to future generations. When we made the huge effort to fight World War II, did we ask “how much will this cost?” We face the same existential threat and should make the same commitment. We can do this cost-effectively, avoiding the most wasteful choices, but whether the effort is worth making should no longer be in question. We will have to consider how to compensate those who have invested their money or their livelihoods in activities that we now recognize as damaging to the climate, and that will be an added cost to the rest of us. (And we may see this as unfair.) But we really have no choice.

J. Frank Bullit posted on “Fox and Hounds” a sentiment that reflects the core of opposition to such actions:

“What if the alarmists are wrong, yet there is no counter to the demands of enacting economic and energy policies we might regret?”

So our energy costs might be a bit higher than they would have been otherwise, but we get a cleaner environment in exchange. And even now, renewable energy sources are competing well on a dollar-for-dollar basis.

On the other hand, if the “alarmists” are correct, the consequences have a significant probability of being catastrophic to our civilization, as well as our environment. We all carry insurance on our houses for events that we see as highly unlikely. We pay that extra cost to gain assurance that we will recover our investments if such unlikely events occur. We accept these costs because we know the “alarmists” have a point about the risks of house fires. We should take the same attitude toward climate change assessments. It’s not possible to prove that there is no risk, or even that the risk is tiny. And the data trends to date are sufficiently consistent with the forecasts that the projected outcomes look more likely than not.

Unless opponents can show that the consequences of the alarmists being wrong are worse than the climate change threat, we have to act to mitigate that risk in much the same way as we do when we buy house insurance. (And by the way, we don’t have another “house” to move to…)

Upfront solar subsidy more cost effective than per-kilowatt-hour payments


This paper from the American Economic Review found that consumers use a discount rate in excess of 15% in valuing residential solar power credits, compared to a society-wide discount rate of 3%. The implication is that a government can induce the same amount of solar investment through an upfront credit at as little as half the cost of an ongoing per-kilowatt-hour subsidy.
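A quick sketch of the implication, comparing the present value of the same subsidy stream at the two discount rates (the $100/year payment and 20-year term are illustrative; only the 15% and 3% rates come from the paper):

```python
def annuity_pv(payment: float, rate: float, years: int) -> float:
    """Present value of a level annual payment stream."""
    return payment * (1 - (1 + rate) ** -years) / rate

print(annuity_pv(100, 0.15, 20))  # ~$626: what the consumer perceives
print(annuity_pv(100, 0.03, 20))  # ~$1,488: what the stream costs society

# An upfront rebate of ~$626 delivers the same perceived incentive at ~42%
# of the cost of the 20-year stream -- "as little as half."
```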

The California Solar Initiative had two different incentive methods: the Performance Based Incentive (PBI), which was paid out over 5 years, and the Expected Performance-Based Buydown (EPBB), which was paid upfront. The former was preferred by policymakers, but the latter was more popular with homeowners. Now we know the degree of difference in those preferences.

U. of Chicago misses mark on evaluating RPS costs


The U. of Chicago just released a working paper “Do Renewable Portfolio Standards Deliver?” that purports to assess the added costs of renewable portfolio standards adopted by states. The paper has two obvious problems that make the results largely useless for policy development purposes.

First, it’s entirely retrospective yet tries to draw conclusions about future actions. The paper ignores that the high initial costs of renewables were driven down by a combination of RPS and other policies (e.g., net energy metering, or NEM), and that on a going-forward basis renewables are now cost-competitive with conventional resources. As a result, the going-forward cost of GHG reductions is much smaller than the historic cost. In fact, the much more interesting question is: what would be the average cost of GHG reductions in moving from the current low penetration of renewables to substantially higher levels across the entire U.S., e.g., 50%, 60%, on up to 100%? The high initial investment costs would then be heavily diluted by now-cost-effective renewables.
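A stylized version of that dilution arithmetic (all figures invented for illustration):

```python
legacy_mwh, legacy_cost = 100, 120  # early high-cost RPS vintages, $/MWh
new_mwh, new_cost = 400, 35         # expansion to much higher penetration

avg_cost = (legacy_mwh * legacy_cost + new_mwh * new_cost) / (legacy_mwh + new_mwh)
print(f"Blended average: ${avg_cost:.0f}/MWh")  # $52/MWh, far below the legacy $120
```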

Second, the abstract makes this bizarre statement: “(t)hese cost estimates significantly exceed the marginal operational costs of renewables and likely reflect costs that renewables impose on the generation system…” Um, the marginal “operational” costs of renewables are generally pretty damn close to zero! Are the authors making the bizarre claim (which I’ve addressed previously) that renewables should be priced at their “marginal operational costs”? This reflects a remarkable naivete on the part of the authors. Based on this incorrect attribution, the authors cannot make any assumptions about what might be causing the rate difference.

Further, the authors appear to attribute the entire difference in rates to imposing an RPS. In fact, these 29 states have generally also been much more active in other efforts to promote renewables, including customer-side programs such as NEM and DER rates, and in efforts to reduce demand. All of these efforts reduce load, which means that fixed costs are spread over fewer kilowatt-hours, which in turn causes rates to rise. The real comparison should be the differences in annual customer bills after accounting for changes in annual demand.
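A toy example of the rate-versus-bill distinction (numbers invented):

```python
fixed_costs = 60.0    # $/customer of fixed costs recovered volumetrically
variable_cost = 0.05  # $/kWh

for kwh in (1000, 800):  # before and after NEM/efficiency shrink sales
    rate = (fixed_costs + variable_cost * kwh) / kwh
    print(f"{kwh} kWh -> rate ${rate:.3f}/kWh, bill ${rate * kwh:.0f}")

# Sales fall 20%: the rate rises ($0.110 -> $0.125/kWh) while the bill
# falls ($110 -> $100), which is why bills are the right comparison.
```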

The authors also try to assign stranded cost recovery as a cost of GHG reduction. This is a questionable assignment, since these are sunk costs, which economists typically ignore. If we are to account for lost investment due to the obsolescence of older technologies, economists are going to have to go back and redo a whole lot of benefit-cost analyses! The authors would need to explain their special treatment of these costs.

Why do economists keep producing these papers in which they assume the world is static and that the future will be just like the past, even when the evidence of a rapidly changing scene is embedded in the data they are using?

Moving beyond the easy stuff: Mandates or pricing carbon?


Meredith Fowlie at the Energy Institute at Haas posted a thought-provoking (for economists) blog post on whether economists should continue promoting the pricing of carbon emissions.

My view, however, is that this question should be answered in the context of an evolving regulatory and technological process.

Originally, I argued for a broader role for cap and trade in the 2008 CARB AB 32 Scoping Plan on behalf of EDF. Since then, I’ve come to believe that, for administrative reasons, a carbon tax is probably preferable to cap and trade when we turn to economy-wide strategies. (California’s cap-and-trade program is burdensome and loophole-ridden.) That said, one of my prime objections to the Scoping Plan at the time was the high expense of its mandated measures, and that it left the most expensive tasks to be solved by “the market” without giving the market the opportunity to capture the more efficient reductions first.

Fast forward to today, and we face an interesting situation because the costs of renewables and supporting technologies have plummeted. It is possible that within the next five years solar, wind and storage will be less expensive than new fossil generation. (The rest of the nation is benefiting from California’s initial, if mismanaged, investment.) That makes the effective carbon price negative in the electricity sector. In this situation, I view RPS mandates as correcting a market failure in which short-term and long-term prices do not and cannot converge due to a combination of capital investment requirements and regulatory interventions. The mandates will accelerate the retirement of fossil generation that is not being retired currently due to mispricing in the market. As it is, many areas of the country are on their way to nearly 100% renewable (or GHG-free) power by 2040 or earlier.

But this and other mandates to date have not been consumer-facing. Renewables are filtered through the electric utility. Building and vehicle efficiency standards are imposed only on new products and the price changes get lost in all of the other features. Other measures are focused on industry-specific technologies and practices. The direct costs are all well hidden and consumers generally haven’t yet been asked to change their behavior or substantially change what they buy.

But that all would seem to change if we are to take the next step of gaining the much deeper GHG reductions that are required to achieve the more ambitious goals. Consumers will be asked to get out of their gas-fueled cars and choose either EVs or other transportation alternatives. And even more importantly, the heating, cooling, water heating and cooking in the existing building stock will have to be changed out and electrified. (Even the most optimistic forecasts for biogas supplies are only 40% of current fossil gas use.) Consumers will be presented more directly with the costs for those measures. Will they prefer to be told to take specific actions, to receive subsidies in return for higher taxes, or to be given more choice in return for higher direct energy use prices?

The two problems to be addressed head on by nuclear power advocates


Nuclear power advocates bring up the technology as a supposedly necessary part of a zero-GHG portfolio to address climate change. They insist that the “next generation” technology will be a winner if it is allowed to be developed.

Nevertheless, nuclear has two significant problems beyond whatever is in the next generation technology:

  1. Construction cost overruns are the single biggest liability that has been killing the technology. While most large engineering projects carry contingencies for 25-30% overruns, almost all nuclear plants have overruns that are multiples of the original cost estimates. This has been driving the most experienced engineering and construction firms into bankruptcy. Until that problem is resolved, all energy providers should be very leery of committing to a technology that takes at least seven years to build.
  2. We still haven’t addressed waste disposal and storage over the course of decades, much less millennia. No other energy technology presents such potential for catastrophic failure from a single source. Again, this liability needs to be addressed head on, not ignored or dismissed, if the technology is to be pursued.

The Business Roundtable takes the wrong lesson from California’s energy costs


The California Business Roundtable authored an article in the San Francisco Chronicle claiming that we need only look to California’s energy prices to see what would happen under the “Green New Deal” proposed by Congressional Democrats.

That article contains several errors and is misleading in other respects. First, California’s electricity rates are high because of renewable contracts signed nearly a decade ago, when renewables were just evolving and much more expensive. California’s investment is part of the reason that solar and wind costs are now lower than those of existing coal plants (a new study shows 75% of coal plants are uneconomic) and competitive with natural gas. Batteries that extend renewable operations have nearly become cost-effective. The article also claims that reliability has “gone down,” when in fact we still have a large reserve margin: the California Independent System Operator found a 23% reserve margin against a target of only 17%. We also have the ability to install batteries quickly to address any shortfall; PG&E is installing over 500 MW of batteries right now to replace a large natural gas plant.

For the rest of the U.S., consumers will benefit from these lower costs today. Californians have paid too much for their power to date, due to mismanagement by PG&E and the other utilities, but consumers elsewhere will be able to avoid those foibles.
