I was interviewed by a Los Angeles Times reporter about the recent power outages in Northern California as a result of the wave of storms. Our power went out for 48 hours beginning New Year’s Eve and again for 12 hours the next weekend:
After three days without power during this latest storm series, Davis resident Richard McCann said he’s seriously considering implementing his own microgrid so he doesn’t have to rely on PG&E.
“I’ve been thinking about it,” he said. McCann, whose work focuses on power sector analysis, said his home lost power for about 48 hours beginning New Year’s Eve, then lost it again after Saturday for about 12 hours.
While the storms were severe across the state, McCann said Davis did not see unprecedented winds or flooding, adding to his concerns about the grid’s reliability.
He said he would like to see California’s utilities “distributing the system, so people can be more independent.”
“I think that’s probably a better solution rather than trying to build up stronger and stronger walls around a centralized grid,” McCann said.
Several others were quoted in the article offering microgrids as a solution to the ongoing challenge.
Widespread outages also occurred in Woodland and Stockton, even though the winds were not exceptionally strong by recent experience. Given the widespread outages two years ago and the three “blue sky” multi-hour outages we had in 2022 (and none during the September heat storm when 5,000 Davis customers lost power), I’m doubtful that PG&E is ready for what’s coming with climate change.
PG&E instead is proposing to invest up to $40 billion over the next eight years to protect service reliability for 4% of its customers by undergrounding wires in the foothills, which will raise our rates by up to 70% by 2030! An alternative, cost-effective solution that would cost 80% to 95% less is sitting before the Public Utilities Commission but is unlikely to be approved. Another opportunity to head off PG&E and redirect some of that money toward fixing our local grid comes up this summer under a new state law.
While winds have been strong, they have not been in the 99%+ range of experience that should produce multiple catastrophic outcomes in short order. And two major events within a week, plus the outage in December 2020, show that such storms are not statistically unusual; we have experienced similarly fierce winds without such extended outages. Prior to 2020, Davis experienced only two extended outages in two decades, in 1998 and 2007. Clearly the lack of maintenance on an aging system has caught up with PG&E. PG&E should reimagine its rural undergrounding program to mitigate wildfire risk with microgrids instead. That would free up most of the billions it plans to spend on less than 4% of its customer base to instead harden its urban grid.
More calls for keeping Diablo Canyon open have come out in the last month, along with a proposal to pair the plant with a desalination project that would deliver water to some as-yet-unspecified destination. (And there has been pushback from opponents.) There are better solutions, as I have written previously. Unfortunately, those now raising this issue missed the details and nuances of the debate in 2016 when the decision was made, and they are not well informed about Diablo’s situation.
One important fact is that it is not clear whether continued operation of Diablo is safe. Unit No. 1 has one of the most embrittled reactor vessels in the U.S., which is at risk during a sudden shutdown event.
Another is that continued operation would require overriding a State Water Resources Control Board decision that required ending the use of once-through cooling with ocean water. That compliance cost is what drove the closure decision: it pushed Diablo’s going-forward cost to 10 cents per kilowatt-hour at current operational levels and in excess of 12 cents under more likely operations.
As for the proposal to build a desalination plant, pairing one with Diablo would be both overkill and a logistical puzzle. The Carlsbad plant produces 56,000 acre-feet annually for the San Diego County Water Authority. The Central Coast, where Diablo is located, has a State Water Project allocation of 45,000 acre-feet that is not even fully used now. The Carlsbad plant uses 35 MW, or 1.6% of Diablo’s output. A plant built to use all of Diablo’s output could produce 3.5 million acre-feet, but the State Water Project would need to be significantly modified to move the water either back to the Central Valley or beyond Santa Barbara to Ventura. All of that adds a large cost on top of what is already costly water, at $2,500 to $2,800 per acre-foot.
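The desalination scaling above can be checked with a quick calculation using only the figures quoted in the post:

```python
# Scaling the Carlsbad desalination plant to Diablo Canyon's full output.
# Figures from the post: Carlsbad produces 56,000 acre-feet/year using 35 MW,
# which the post puts at 1.6% of Diablo's output.
carlsbad_af_per_year = 56_000
carlsbad_mw = 35
diablo_share = 0.016           # Carlsbad's 35 MW as a fraction of Diablo's output

# Implied Diablo capacity and full-output desalination yield
diablo_mw = carlsbad_mw / diablo_share
full_output_af = carlsbad_af_per_year / diablo_share

print(f"Implied Diablo capacity: {diablo_mw:,.0f} MW")
print(f"Acre-feet at full Diablo output: {full_output_af / 1e6:.1f} million")
```

The implied capacity of roughly 2,200 MW and the 3.5 million acre-foot yield both line up with the numbers in the text.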
PG&E made exciting announcements last week about partnerships with GM and Ford to test using electric vehicles (EVs) as backup power for residential customers. (Ford also announced an initiative to create an open source charging standard.) PG&E also announced an initiative to install circuit breakers that facilitate use of onsite backup power. PG&E is to be commended for stepping forward to align its corporate strategy with the impending technology wave that could increase consumer energy independence.
I wrote about the promise of EVs in this role (“Electric vehicles as the next smartphone”) when I was struck by Ford’s F-150 Lightning ads last summer and how the consumer segment that buys pickups isn’t what we usually think of as the “EV crowd.” These initiatives could be game changers.
That said, several questions arise about PG&E’s game plan and whether the utility is still planning to hold customers captive:
How does PG&E plan to recover the costs of what are “beyond the meter” devices, which typically fall outside of what’s allowed? And how are the risks in these investments to be shared between shareholders and ratepayers? Will PG&E get an “authorized” rate of return with default assurances that costs will be approved for recovery from ratepayers? How will PG&E be given appropriate incentives to make timely investments with appropriate risk, especially given the utility’s poor track record in acquiring renewable resources?
What will be the relationships between PG&E and the participating auto manufacturers? Will the manufacturers be required to partner with PG&E going forward? Will the manufacturers be foreclosed from offering products and services that would allow customers to exit PG&E’s system through self generation? Will PG&E close out other manufacturers from participating or set up other access barriers that prevent them from offering alternatives?
Delivering PG&E’s “personal microgrid backup power transfer meter device” is a good first step, but it requires disconnecting the solar panels to use it, which means it only supports fossil-fueled generators and grid-connected batteries. This device needs a switch for the solar panels as well. Further, it appears the device will only be available to customers who participate in PG&E’s Residential Generator and Battery Rebate Program. Can PG&E continue to offer this feature to vendors who offer only fossil-fueled generators? How will PG&E mitigate the local air pollution impacts from using fossil-fueled backup generators (BUGs) for extended periods? (California already has 8,000 megawatts of BUGs.)
How will these measures be integrated with the planned system reinforcements in PG&E’s 2022 Wildfire Mitigation Plan Update to reduce the costs of undergrounding lines? Will PG&E allow these back up sources and devices for customers who are interested in extended energy independence, particularly those who want to ride out a PSPS event?
How will community choice aggregators (CCAs) or other local governments participate? Will communities be able to independently push these options to achieve their climate action and adaptation plan (CAAP) goals?
In the 1990s, California’s industrial customers threatened to build their own self-generation plants and leave the utilities entirely. Escalating generation costs due to nuclear plant cost overruns and too-generous qualifying facilities (QF) contracts had driven up rates, and the technology that made QFs possible also allowed large customers to consider self generating. In response California “restructured” its utility sector to introduce competition in the generation segment and to get the utilities out of that part of the business. Unfortunately the initiative failed, in a big way, and we were left with a hybrid system which some blame for rising rates today.
Those rising rates may be introducing another threat to the utilities’ business model, but this time it may be more existential. A previous blog post described how Pacific Gas & Electric’s 2022 Wildfire Mitigation Plan Update combined with the 2023 General Rate Application could lead to a 50% rate increase from 2020 to 2026. For standard-rate residential customers, the average rate could reach 41.9 cents per kilowatt-hour.
For an average customer that translates to $2,200 per year per kilowatt of peak demand. Using PG&E’s cost of capital, that implies that an independent self-sufficient microgrid costing $15,250 per kilowatt could be funded from avoiding paying PG&E bills.
The National Renewable Energy Laboratory (NREL) study referenced in this blog estimates that a stand-alone residential microgrid with 7 kilowatts of solar paired with a 5 kilowatt / 20 kilowatt-hour battery would cost between $35,000 and $40,000. The savings from avoiding PG&E rates could justify spending $75,000 to $105,000 on such a system, so a residential customer could save up to $70,000 by defecting from the grid. Even if NREL has underpriced and undersized this example system, that is a substantial margin.
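The grid-defection arithmetic above can be sketched as a capital recovery calculation. The 7.34% rate and 10-year horizon below are illustrative assumptions chosen to reproduce the post's ~$15,250/kW figure, not PG&E's actual financing terms:

```python
# Back-of-the-envelope grid-defection economics: $2,200/year per kW of peak
# demand in avoided PG&E bills, capitalized at an assumed cost of capital.

def capital_recovery_factor(rate: float, years: int) -> float:
    """Annual payment per dollar of up-front capital."""
    return rate / (1 - (1 + rate) ** -years)

annual_bill_per_kw = 2_200                    # avoided PG&E payments, $/kW-year
crf = capital_recovery_factor(0.0734, 10)     # assumed rate and horizon
breakeven_capital_per_kw = annual_bill_per_kw / crf

print(f"Fundable microgrid capital: ${breakeven_capital_per_kw:,.0f}/kW")

# A 5-7 kW system spans roughly the post's $75,000-$105,000 justifiable spend;
# subtracting NREL's $35,000-$40,000 cost leaves about $70,000 of headroom.
for kw in (5, 7):
    print(f"{kw} kW system: ${kw * breakeven_capital_per_kw:,.0f} justifiable")
```

Different rate and horizon assumptions shift the breakeven, but the margin over NREL's cost estimate is wide enough to survive substantial variation.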
This time it’s not just a few large customers with sizable thermal and electricity demands—this would be a large swath of PG&E’s residential customer class. It would be the customers who are most affluent and most able to pay PG&E’s extraordinary costs. If many of these customers view this opportunity to exit favorably, the utility could truly face a death spiral that encourages even more customers to leave. Those who are left behind will demand more relief in some fashion, but those customers who already defected will not be willing to bail out the company.
In this scenario, what is PG&E’s (or Southern California Edison’s and San Diego Gas & Electric’s) exit strategy? Trying to squeeze current NEM customers likely will only accelerate exit, not stifle it. The recent two-day workshop on affordability at the CPUC avoided discussing how utility investors should share in solving this problem, treating their cost streams as inviolable. The more likely solution requires substantial restructuring of PG&E to lower its revenue requirements, including by reducing income to shareholders.
PG&E released its 2022 Wildfire Mitigation Plan Update (2022 WMPU). That plan calls for $6 billion of capital investment to move 3,600 miles of line underground by 2026. This is just over a third of the initially proposed target of 10,000 miles. Based on PG&E’s proposed ramp-up, the utility would reach its target by 2030.
One alternative that could better control costs would be to install community and individual microgrids. Microgrids are likely a more cost-effective and faster means of reducing wildfire risk and saving lives. I wrote about how to evaluate this choice for relative cost effectiveness based on density of load and customers per mile of line.
Microgrids can mitigate wildfire risk by allowing the utility to turn off overhead wire service for extended periods, perhaps weeks at a time, during the highest fire-risk periods. The advantages of a periodically islanded microgrid are 1) that the highest fire risk coincides with the most solar generation, so providing enough energy is not a problem, and 2) that the microgrids can also be used during winter storms to better support the local grid and to ride out shorter outages. Customers’ reliability may degrade because they would not have the grid support, but such systems generally have been quite reliable. In fact, reliability may increase because distribution grid outages are about 15 times more likely than system or regional outages.
The important question is whether microgrids can be built much more quickly than undergrounding lines and, in particular, whether PG&E has the capacity to manage such a buildout at a faster rate. PG&E has the Community Microgrid Enablement Program, and the utility was recently authorized to build several isolated microgrids as an alternative to rebuilding fire-damaged distribution lines to isolated communities. Turning to local governments to manage many different construction projects likely would improve this schedule, much as Caltrans delegates road construction to counties and cities.
Controlling the costs of wildfire mitigation
Based on the current cost of capital, this initial undergrounding phase will add $1.6 billion to annual revenue requirements, or an additional 8% above today’s level. This would be on top of PG&E’s request in its 2023 General Rate Case for a 48% increase in distribution rates by 2023 and 78% by 2026, and a 31% increase in overall bundled rates by 2023 and 43% by 2026. The 2022 WMPU would take the increase to over 50% by 2026 (and that doesn’t include the higher maintenance costs). That means residential rates would increase from 28.7 cents per kilowatt-hour today (already 21% higher than in December 2020) to 36.4 cents in 2026. Building out the full 10,000 miles could lead to another 15% increase on top of all of this.
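The cumulative increase cited here can be reconstructed from the rate figures in the paragraph:

```python
# Reconstructing the cumulative 2020-2026 rate increase from the post's figures:
# 28.7 c/kWh today is already 21% above December 2020, and the 2022 WMPU plus
# the GRC take residential rates to 36.4 c/kWh by 2026.
rate_today = 28.7            # cents/kWh
rate_2026 = 36.4
increase_since_2020 = 0.21

rate_dec_2020 = rate_today / (1 + increase_since_2020)
cumulative_increase = rate_2026 / rate_dec_2020 - 1

print(f"December 2020 rate: {rate_dec_2020:.1f} c/kWh")
print(f"Cumulative 2020-2026 increase: {cumulative_increase:.0%}")
```

The result comes to roughly 53%, consistent with the "over 50% by 2026" claim.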
Turning to the comparison of undergrounding costs to microgrids, these two charts illustrate how to evaluate the opportunities for microgrids to lower these costs. PG&E states the initial cost per mile for undergrounding is $3.75 million, dropping to $2.5 million, or an average of $2.9 million. The first figure looks at community scale microgrids, using National Renewable Energy Laboratory (NREL) estimates. It shows how the cost effectiveness of installing microgrids changes with density of peak loads on a circuit on the vertical axis, cost per kilowatt for a microgrid on the horizontal axis, and each line showing the division where undergrounding is less expensive (above) or microgrids are less expensive (below) based on the cost of undergrounding. As a benchmark, the dotted line shows the average load density in the PG&E system, combined rural and urban. So in average conditions, community microgrids are cheaper regardless of the costs of microgrids or undergrounding.
The second figure looks at individual residential scale microgrids, again using NREL estimates. It shows how the cost effectiveness of installing microgrids changes with customer density on a circuit on the vertical axis, cost per kilowatt for a microgrid on the horizontal axis, and each line showing the division where undergrounding is less expensive (above) or microgrids are less expensive (below). As a benchmark, the dotted line shows the average customer density in the PG&E system, combined rural and urban. Again, residential microgrids are less expensive in most situations, especially as density falls below 75 customers per mile.
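The breakeven logic behind both figures can be sketched as a simple ratio: a mile of undergrounding and the microgrids that would replace it serve the same load, so (assuming comparable asset lives, which lets the annualization factors roughly cancel) undergrounding wins only where the load served per mile exceeds the cost ratio:

```python
# Breakeven load density between undergrounding and microgrids (a sketch;
# assumes comparable asset lives so annualization factors roughly cancel).
def breakeven_density_kw_per_mile(underground_cost_per_mile: float,
                                  microgrid_cost_per_kw: float) -> float:
    """Peak load density above which undergrounding is the cheaper option."""
    return underground_cost_per_mile / microgrid_cost_per_kw

# $2.9M/mile is PG&E's average undergrounding cost from the post; the
# $3,500/kW microgrid cost is an illustrative value, not a quoted figure.
threshold = breakeven_density_kw_per_mile(2_900_000, 3_500)
print(f"Breakeven density: {threshold:,.0f} kW of peak load per mile")
```

With these inputs the breakeven falls above 800 kW per mile, far higher than typical load densities on rural circuits, which is why the figures show microgrids winning in most conditions.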
Why wasn’t there a similar cry against bailing out PG&E in not one but TWO bankruptcies? Both PG&E and SCE have clearly relied on the belief that they deserve subsidies to continue staying in business. (SCE has ridden along behind PG&E in both cases to gain the spoils.) The focus needs to be on ALL players here if these types of subsidies are to be called out.
“(T)he reactions have largely been about how much subsidy rooftop solar companies in California need in order to stay in business.”
We are monitoring two very different sets of media then. I see much more about the ability of consumers to gain a modicum of energy independence from large monopolies that compel those consumers to buy their service with no viable escape. I also see reactions about how this will directly undermine our ability to reduce GHG emissions. This directly conflicts with the CEC’s Title 24 building standards that use rooftop solar to achieve net zero energy and electrification in new homes.
Yes, there are problems with the current compensation model for NEM customers, but we also need to recognize our commitments to customers who made investments believing they were doing the right thing. We need to acknowledge the savings that they created for all of us and the push they gave to lower technology costs. We need to recognize the full set of values that these customers provide and how the current electric market structure is too broken to properly compensate what we want customers to do next–to add more storage. Yet, the real first step is to start at the source of the problem–out of control utility costs that ratepayers are forced to bear entirely.
Last month the California Public Utilities Commission (CPUC) issued a decision in Phase II of the PG&E 2020 General Rate Case that endorsed all but one of my proposals on behalf of the Agricultural Energy Consumers Association (AECA) to better align revenue allocation with a rational approach to using marginal costs. Most importantly the CPUC agreed with my observation that the energy system is changing too rapidly to adopt a permanent set of rate setting principles as PG&E had advocated for. For now, we will continue to explore options as relationships among customers, utilities and other providers evolve.
At the heart of the matter is the economic principle that prices are set most efficiently when they adhere to marginal cost, that is, the cost of producing the last unit of a good or service. In a “standard” market, marginal costs are usually higher than average costs, so a producing firm generates a profit with each sale. For utilities this is often not true–average costs are higher than marginal costs–so we need a means of allocating those additional costs to ensure that the utilities remain viable entities. California uses a “second-best” economic method called “Ramsey pricing” that allocates revenue responsibility based on the relative marginal costs of serving different customers.
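One common implementation of this allocation scales each class's marginal-cost revenue by a single factor so the totals meet the utility's revenue requirement. The class names and dollar figures below are purely illustrative:

```python
# Allocating a revenue requirement in proportion to marginal costs
# (an illustrative sketch; the classes and numbers are made up).
def allocate_revenue(rev_requirement: float,
                     marginal_cost_revenue: dict[str, float]) -> dict[str, float]:
    """Scale each class's marginal-cost revenue by a common factor so the
    allocations sum to the total revenue requirement."""
    scaler = rev_requirement / sum(marginal_cost_revenue.values())
    return {cls: mc * scaler for cls, mc in marginal_cost_revenue.items()}

# Hypothetical marginal-cost revenues by class, in $millions
mc_revenue = {"residential": 400.0, "agricultural": 150.0, "industrial": 250.0}
allocation = allocate_revenue(1_000.0, mc_revenue)   # $1,000M total to recover

for cls, amount in allocation.items():
    print(f"{cls}: ${amount:.0f}M")
```

Because every class is scaled by the same factor, the relative marginal costs are preserved while the gap between average and marginal cost is recovered.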
I made four key proposals on how to apply marginal cost principles for rate setting purposes:
Adopt an updated agricultural load forecasting method that incorporates only public data and currently known variables to predict next year’s load more accurately.
Use PCIA exit fee market price benchmarks (MPBs) to give consistent revenue allocation across rate classes and bundled vs departed customers.
Include renewable energy credits (REC) in the marginal energy costs (MEC) to reflect incremental RPS acquisition and consistency with the PCIA MPB.
Use the resource adequacy (RA) MPB for setting the marginal generation capacity cost (MGCC) due to uncertainty about resource type for capacity and for consistency with the PCIA MPB.
Calculate marginal customer access costs (MCAC) using the depreciated replacement cost of existing services (RCNLD), with new-service costs applied to customers added through growth.
PG&E settled with AECA on the first, agreeing to change its agricultural load forecasting methodology in upcoming proceedings. The CPUC agreed with AECA’s positions on two of the other three (RECs in the MEC, and the MCAC). And on the third, related to the MGCC, the adopted position was not materially different.
The most surprising outcome was the choice to use RCNLD costs for existing customer connections. The debate over how to calculate the MCAC has raged for three decades. Industrial customers preferred valuing all connections, new and existing, at the cost of a new connection using the “real economic carrying cost” (RECC) method. This is most consistent with a simple reading of marginal cost pricing principles. On the other side, residential customer advocates claimed that existing connections were sunk costs and have a value of zero for determining marginal costs, inventing the “new customer only” (NCO) method. I explained in my testimony that the RECC method fails to account for the reduced value of aging connections, but that those connections have value in the marketplace through house prices, just as a swimming pool or a bathroom remodel adds value. The diminished value of those connections can be approximated using the depreciation schedules that PG&E applies to determine its capital-related revenue requirements. The CPUC has used the RCNLD method to set the value for the sale of PG&E assets to municipal utilities.
The CPUC agreed with this approach which essentially is a compromise between the RECC and NCO method. The RCNLD acknowledges the fundamental points of both methods–that existing customer connections represent an opportunity value for customers but those connections do not have the same value as new ones.
The saying goes “No good deed goes unpunished.” The California Public Utilities Commission seems to have taken that motto to heart recently, and stands ready to penalize yet another group of customers who answered the clarion call to help solve the state’s problems by radically altering the rules for solar rooftops. Here’s three case studies of recent CPUC actions that undermine incentives for customers to act in the future in response to state initiatives: (1) farmers who invested in response to price incentives, (2) communities that pursued renewables more assertively, and (3) customers who installed solar panels.
Agriculture: Farmers have responded to past time-of-use (TOU) rate incentives more consistently and enthusiastically than any other customer class. Instead of being rewarded for that consistency, they saw their peak price periods shifted from the afternoon to the early evening, and growers face much more difficulty avoiding pumping during that latter period.
Since TOU rates were introduced to agricultural customers in the late 1970s, growers have made significant operational changes in response to TOU differentials between peak and off-peak energy prices to minimize their on-peak consumption. These include significant investments in irrigation equipment, storage and conveyance infrastructure and labor deployment rescheduling. The results of these expenditures are illustrated in the figure below, which shows how agricultural loads compare with system-wide load on a peak summer weekday in 2015, contrasting hourly loads to the load at the coincident peak hour. Both the smaller and larger agricultural accounts perform better than a range of representative rate schedules. Most notably agriculture’s aggregate load shape on a summer weekday is inverted relative to system peak, i.e., the highest agricultural loads occur during the lowest system load periods, in contrast with other rate classes.
All other rate schedules shown in the graphic hit their annual peak on the same peak day within the then-applicable peak hours of noon to 6 p.m. In contrast, agriculture electricity demand is less than 80% of its annual peak during those high-load hours, with its daily peak falling outside the peak period. Agriculture’s avoidance of peak hours occurred during the summer agricultural growing season, which coincided with peak system demand—just as the Commission asked customers to do. The Commission could not ask for a better aggregate response to system needs; in contrast to the profiles for all of the other customer groups, agriculture has significantly contributed to shifting the peak to a lower cost evening period.
The significant changes in the peak period price timing and differential that the CPUC adopted increase uncertainty over whether large investments in high water-use efficiency microdrip systems – which typically cost $2,000 per acre – will be financially viable. Microdrip systems have been adopted widely by growers over the last several years—one recent study of tomato irrigation rates in Fresno County could not find any significant quantity of other types of irrigation systems. Such systems can be subject to blockages and leaks that are only detectable at start up in daylight. Growers were able to start overnight irrigation at 6 p.m. under the legacy TOU periods and avoid peak energy use. In addition, workers are able to end their day shortly after 6 p.m. and avoid nighttime accidents. Shifting that load out of the peak period will be much more difficult to do with the peak period ending after sunset.
Contrary to strong Commission direction to incent customers to avoid peak power usage, the shift in TOU periods has served to penalize, and reverse, the great strides the agricultural class has made benefiting the utility system over the last four decades.
Community choice aggregators: CCAs were created, among other reasons, to develop more renewable or “green” power. The state achieved its 2020 target of 33% in large part because of the efforts of CCAs fostered through offerings of 50% and 100% green power to retail customers. CCAs also have offered a range of innovative programs that go beyond the offerings of PG&E, SCE and SDG&E.
Nevertheless, the difficulty of reaching clean energy goals is created by the current structure of the PCIA. The PCIA varies inversely with market prices: as market prices rise, the PCIA charged to CCAs and direct access (DA) customers decreases. For these customers, their overall retail rate is largely hedged against variation and risk through this inverse relationship.
The portfolios of the incumbent utilities are dominated by long-term contracts with renewables and capital-intensive utility-owned generation. For example, PG&E is paying a risk premium of nearly 2 cents per kilowatt-hour for its investment in these resources. These portfolios are largely impervious to market price swings now, but at a significant cost. The utilities pass along this hedge through the PCIA to CCAs and DA customers, which discourages those customers from making their own long-term investments. (I wrote earlier about how this mechanism discouraged investment in new capacity for reliability purposes to provide resource adequacy.)
The legacy utilities are not in a position to acquire new renewables–they are forecasting falling loads and decreasing customers as CCAs grow. So the state cannot look to those utilities to meet California’s ambitious goals–it must incentivize CCAs with that task. The CCAs are already game, with many of them offering much more aggressive “green power” options to their customers than PG&E, SCE or SDG&E.
But CCAs place themselves at greater financial risk under the current rules if they sign more long-term contracts. If market prices fall, they must bear the risk of overpaying for both the legacy utility’s portfolio and their own.
Solar net energy metered customers: Distributed solar generation installed under California’s net energy metering (NEM/NEMA) programs has mitigated and even eliminated load and demand growth in areas with established customers. This benefit supports protecting the investments that have been made by existing NEM/NEMA customers. Similarly, NEM/NEMA customers can displace investment in distribution assets. That distribution planners are not considering this impact appropriately is not an excuse for failing to value this benefit. For example, PG&E’s sales fell by 5% from 2010 to 2018, and other utilities had similar declines. Peak loads in the CAISO balancing authority reached their highest point in 2006, and the peak in August 2020 was 6% below that level.
Much of that decrease appears to have been driven by the installation of rooftop solar. The figure above illustrates the trends in CAISO peak loads in the set of top lines and the relationship to added NEM/NEMA installations in the lower corner. It also shows the CEC’s forecast from its 2005 Integrated Energy Policy Report as the top line. Prior to 2006, the CAISO peak was growing at an annual rate of 0.97%; after 2006, peak loads declined on a 0.28% trend. Over the same period, solar NEM capacity grew by over 9,200 megawatts. The correlation factor or “R-squared” between the decline in peak load after 2006 and the incremental NEM additions is 0.93, with 1.0 being perfect correlation. Based on these calculations, NEM capacity has deferred 6,500 megawatts of capacity additions over this period. Comparing the “extreme” 2020 peak to the average-conditions load forecast from 2005, the load reduction is over 11,500 megawatts. The obvious conclusion is that these investments by NEM customers have saved all ratepayers both reliability and energy costs while delivering zero-carbon energy.
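The R-squared statistic cited here comes from a least-squares fit of peak-load change against cumulative NEM additions. The mechanics can be sketched in pure Python with synthetic data (the points below are made up to show the calculation, not the actual CAISO/NEM series):

```python
# Computing R-squared for a least-squares line fit (pure-Python sketch;
# the data points are synthetic, not the series analyzed in the post).
def r_squared(xs: list, ys: list) -> float:
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Closed-form least-squares slope and intercept
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    # Fraction of variance in y explained by the fitted line
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Cumulative NEM MW (x) vs. peak-load reduction in MW (y), synthetic
nem_mw = [0, 2000, 4000, 6000, 8000]
peak_reduction = [0, 1800, 2200, 4900, 5100]
r2 = r_squared(nem_mw, peak_reduction)
print(f"R-squared: {r2:.2f}")
```

An R-squared near 1 means the fitted line explains almost all of the variation, which is why the post treats 0.93 as strong evidence that NEM additions track the peak-load decline.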
The CPUC now has before it a rulemaking in which the utilities and some ratepayer advocates are proposing to not only radically reduce the compensation to new NEM/NEMA customers but also to change the terms of the agreements for existing ones.
One of the key principles of providing financial stability is setting prices and rates for long-lived assets such as solar panels and generation plants at the economic value when the investment decision was made to reflect the full value of the assets that would have been acquired otherwise. If that new resource had not been built, either a ratebased generation asset would have been constructed by the utility at a cost that would have been recovered over a standard 30-year period or more likely, additional PPAs would have been signed. Additionally, the utilities’ investments and procurement costs are not subject to retroactive ratemaking under the rule prohibiting such ratemaking and Public Utilities Code Section 728, thus protecting shareholders from any risk of future changes in state or Commission policies.
Utility customers who similarly invest in generation should be afforded at least the same assurances as the utilities with respect to protection from future Commission decisions that may diminish the value of those investments. Moreover, customers do not have the additional assurances of achieving a certain net income so they already face higher risks than utility shareholders for their investments.
Generators are almost universally afforded the ability to recover capital investments based on prices set for multiple years, and often the economic life of their assets. Utilities are able to put investments in ratebase to be recovered at a fixed rate of return plus depreciation over several decades. Third-party generators are able to sign fixed price contracts for 10, 20, and even 40 years. Some merchant generators may choose to sell only into the short-term “hourly” market, but those plants are not committed to selling whenever the CAISO demands so. Generators are only required to do so when they sign a PPA with an assured payment toward investment recovery.
Ratepayers who make investments that benefit all ratepayers over the long term should be offered tariffs that provide a reasonable assurance of recovery of those investments, similar to the PPAs offered to generators. Ratepayers should be able to gain the same assurances as generators who sign long-term PPAs, or even utilities that ratebase their generation assets, that they will not be forced to bear all of the risk of investing of clean self-generation. These ratepayers should have some assurance over the 20-plus year expected life of their generation investment.
The debate over whether to close Diablo Canyon has resurfaced. The California Public Utilities Commission, with support from the Legislature, decided in 2018 to close Diablo by 2025 rather than proceed to relicensing. PG&E applied in 2016 to retire the plant rather than relicense it due to high costs that would make the energy uneconomic. (I advised the Joint CCAs in this proceeding.)
IT’S OK TO CLOSE DIABLO CANYON NUCLEAR PLANT A previous column (by John Mott-Smith) asked whether shutting down the Diablo Canyon nuclear plant is risky business if we don’t know what will replace the electricity it produces. John’s friend Richard McCann offered to answer his question. This is a guest column, written by Richard, a universally respected expert on energy, water and environmental economics.
John Mott-Smith asked several questions about the future of nuclear power and the upcoming closure of PG&E’s Diablo Canyon Power Plant in 2025. His main question is how we are going to produce enough reliable power for our economy’s shift to electricity for cars and heating. The answers are apparent, but they have been obscured for a variety of reasons. I’ve worked on electricity and transportation issues for more than three decades. I began my career evaluating whether to close the Sacramento Municipal Utility District’s Rancho Seco Nuclear Generating Station, and I recently assessed the cost to relicense and continue operating Diablo after 2025.

Looking first at Diablo Canyon, the question turns almost entirely on economics and cost. When the San Onofre Nuclear Generating Station closed suddenly in 2012, greenhouse gas emissions rose statewide the next year, but then resumed a steady downward trend. We will again have time to replace Diablo with renewables. Some groups focus on the risk of radiation contamination, but that was not a consideration in Diablo’s closure. Instead, it was the cost of complying with water quality regulations. The plant currently uses ocean water for cooling, and state regulations required changing to a less impactful method that would have cost several billion dollars to install and would have increased operating costs.

PG&E’s application to retire the plant showed that its going-forward costs would be at least 10 to 12 cents per kilowatt-hour. In contrast, solar and wind power can be purchased for 2 to 10 cents per kilowatt-hour, depending on configuration and transmission. Even if new transmission costs 4 cents per kilowatt-hour and energy storage adds another 3 cents, solar and wind units themselves cost about 3 cents, for a total of about 10 cents, which is at the low end of the cost range for Diablo Canyon.
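That cost stack can be tallied in a few lines. This is a sketch in Python; the cents-per-kilowatt-hour figures are simply the ones quoted above, used purely for illustration:

```python
# Rough cost stack for replacing Diablo Canyon energy (cents/kWh).
# All figures are the ones quoted in the column -- illustrative only.
solar_wind = 3.0       # low-end cost of new solar/wind energy
transmission = 4.0     # assumed new transmission adder
storage = 3.0          # assumed energy storage adder

replacement = solar_wind + transmission + storage
diablo_range = (10.0, 12.0)   # PG&E's going-forward cost estimate

print(f"Replacement portfolio: {replacement:.0f} cents/kWh")
print(f"Diablo going forward:  {diablo_range[0]:.0f} to {diablo_range[1]:.0f} cents/kWh")
```

Even at the high end of each adder, the replacement portfolio lands at the bottom of Diablo’s going-forward cost range.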
What’s even more exciting is the potential for “distributed” energy resources, where generation and power management occur locally, even right on the customers’ premises, rather than centrally at a power plant. Rooftop solar panels are just one example; we may be able to store renewable power practically for free in our cars and trucks. Automobiles are parked 95% of the time, which means an electric vehicle (EV) could store solar power at home or work during the day for use at night. When we get to a vehicle fleet that is 100% EVs, we will have more than 30 times the power capacity that we need today. This means that any individual car likely will have to use only 10% of its battery capacity to power a house, leaving plenty for driving the next day. With these opportunities, rooftop and community power projects cost 6 to 10 cents per kilowatt-hour, compared with Diablo’s future costs of 10 to 12 cents.

Distributed resources add an important local protection as well. These resources can improve reliability and resilience in the face of increasing hazards created by climate change. Disruptions in the distribution wires cause more than 95% of customer outages. With local generation, storage, and demand management, many of those outages can be avoided, and electricity generated in our own neighborhoods can power our houses during extreme events. The ad for Ford’s F-150 Lightning pickup that ran during the Olympics illustrates this potential.

Opposition to this new paradigm comes mainly from those with strong economic interests in maintaining the status quo reliance on large, centrally located generation: the existing utilities, the owners and builders of those large plants, and the utility labor unions. Unfortunately, their policy choices to date have led to extremely high rates and will necessitate even higher rates in the future. PG&E is proposing to increase its rates by another third by 2024 and plans more down the line.
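The fleet-capacity claim above can be sanity-checked with back-of-envelope arithmetic. Every input below is an illustrative assumption (fleet size, per-vehicle discharge rate, peak demand, battery size, overnight household use), not a figure from the column:

```python
# Back-of-envelope check of the EV fleet-capacity claim.
# All inputs are illustrative assumptions, not figures from the column.
vehicles = 30e6          # assumed ~30 million light-duty vehicles in California
discharge_kw = 50        # assumed per-vehicle discharge capability (kW)
peak_demand_gw = 50      # rough statewide peak load (GW)

fleet_gw = vehicles * discharge_kw / 1e6   # total fleet capacity in GW
print(f"Fleet capacity: {fleet_gw:.0f} GW, or {fleet_gw / peak_demand_gw:.0f}x peak demand")

# 10% of a typical battery vs. overnight household consumption
battery_kwh = 75         # assumed EV battery size (kWh)
overnight_kwh = 7        # assumed overnight household use (kWh)
print(f"10% of battery = {0.1 * battery_kwh:.1f} kWh vs ~{overnight_kwh} kWh overnight")
```

Under these assumptions the fleet holds roughly 30 times today’s peak capacity, and a tenth of one battery comfortably covers a house overnight, consistent with the claim above.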
PG&E’s past mistakes, including Diablo Canyon, show up in the “PCIA” exit fee that [CCA] customers pay; it is currently 20% of the rate. Yolo County created VCEA to think and manage differently than PG&E.

There may be room for nuclear generation in the future, but the industry has a poor track record. While the cost per kilowatt-hour has fallen for almost all technologies, even fossil-fueled combustion turbines, that is not true for nuclear energy. Several large engineering firms have gone bankrupt due to cost overruns, and the global average cost has risen to over 10 cents per kilowatt-hour. Small modular reactors (SMRs) may solve this problem, but we have been promised these are just around the corner for two decades now, and no SMR is yet in operation. Another problem is managing radioactive waste disposal and storage over the course of decades, or even millennia. Further, reactors fail on a periodic basis, and the cleanup costs are enormous: the Fukushima accident cost Japan $300 billion to $750 billion. No other energy technology presents such a degree of catastrophic failure risk. This liability needs to be addressed head-on, not ignored or dismissed, if the technology is to be pursued.
Agricultural electricity demand is highly sensitive to water availability. Under “normal” conditions, the State Water Project (SWP) and Central Valley Project (CVP), along with other surface water supplies, are key sources of irrigation water for many California farmers. Under dry conditions, these water sources can be sharply curtailed, even eliminated, at the same time irrigation requirements are heightened. Farmers then must rely more heavily on groundwater, which requires more energy to pump than surface water because it must be lifted from deeper depths.
Over extended droughts, such as the one from 2012 to 2016, groundwater levels decline and water must be pumped from ever-greater depths, requiring even more energy to meet crops’ water needs. As a result, even as land is fallowed in response to water scarcity, significantly more energy is required to water remaining crops and livestock. Much less pumping is necessary in years with ample surface water supply, as rivers rise, soils become saturated, and aquifers recharge, raising groundwater levels.
The surface-groundwater dynamic results in significant variations in year-to-year agricultural electricity sales. Yet, PG&E has assigned the agricultural customer class a revenue responsibility based on the assumption that “normal” water conditions will prevail every year, without accounting for how inevitable variations from these circumstances will affect rates and revenues for agricultural and other customers.
This assumption results in an imbalance in revenue collection from the agricultural class that does not correct itself even over long time periods, harming agricultural customers most in drought years, when they can least afford it. Analysis presented by M.Cubed on behalf of the Agricultural Energy Consumers Association (AECA) in PG&E’s 2017 General Rate Case (GRC) demonstrated that overcollections can be expected to exceed $170 million over two years of typical drought conditions, with an expected overcollection of $34 million in any two-year period. This collection imbalance also increases rate instability for other customer classes.
Figure-1 compares the differences between the forecasted loads used to set rates in the annual ERRA Forecast proceedings (and in GRC Phase 2 every three years) and actual recorded sales, for both the agricultural class and the system as a whole, from 1995 to 2019. Notably, the largest system-wide forecasting errors were a sales overestimate of 4.5% in 2000 and a shortfall of 3.7% in 2019, while agricultural mis-forecasts ranged from an under-forecast of 39.2% in the midst of an extended drought in 2013 to an over-forecast of 18.2% in 1998, one of the wettest years on record. Load volatility in the agricultural sector is extreme in comparison to other customer classes.
Figure-2 shows the cumulative error caused by inadequate treatment of agricultural load volatility over the last 25 years. An unbiased forecasting approach would reflect a cumulative error of zero over time. The error in PG&E’s system-wide forecast has largely balanced out, even though the utility’s load pattern has shifted from significant growth over the first 10 years to stagnation and even decline. PG&E apparently has been able to adapt its forecasting methods for other classes relatively well over time.
The accumulated error for agricultural sales forecasting tells a different story. Over a quarter century the cumulative error reached 182%, nearly twice the annual sales for the Agricultural class. This cumulative error has consequences for the relative share of revenue collected from agricultural customers compared to other customers, with growers significantly overpaying during the period.
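The bias argument above can be illustrated with a small sketch: a forecast with zero-mean errors accumulates little net error over 25 years, while even a modest systematic under-forecast compounds into a cumulative error on the order of annual sales, as described. The data here are synthetic and purely illustrative:

```python
import numpy as np

# Illustration of cumulative forecast error as a bias check (synthetic data).
rng = np.random.default_rng(2)
actual = rng.normal(4000, 400, 25)           # 25 years of "recorded" sales (GWh)

unbiased = actual + rng.normal(0, 200, 25)   # zero-mean errors tend to cancel
biased = actual * 0.93                       # systematic 7% under-forecast

def cum_error_pct(forecast, actual):
    # Cumulative (actual - forecast), expressed as a share of average annual sales
    return (actual - forecast).sum() / actual.mean() * 100

print(f"unbiased: {cum_error_pct(unbiased, actual):.0f}% of annual sales")
print(f"biased:   {cum_error_pct(biased, actual):.0f}% of annual sales")
```

A 7% annual under-forecast sustained for 25 years accumulates to 175% of a typical year’s sales, the same order of magnitude as the 182% cumulative error reported above.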
Agricultural load forecasting can be revised to better address how variations in water supply availability drive agricultural load. Most importantly, the final forecast should be constructed from a weighted average of forecasted loads under normal, wet and dry conditions. The forecast of agricultural accounts also must be revamped to include these elements. In addition, the load forecast should include the influence of rates and a publicly available data source on agricultural income such as that provided by the USDA’s Economic Research Service.
The more direct relationship for determining the agricultural class’s energy needs is between the allocation of surface water via the state and federal water projects and the need to pump groundwater when adequate surface water is not available from the SWP and the federal CVP. The SWP and CVP are critical to California agriculture because little precipitation falls during the state’s Mediterranean-climate summer, so snowmelt runoff must be stored and delivered via aqueducts and canals. Surface water availability, therefore, is the primary determinant of agricultural energy use; precipitation and related factors, such as drought, are secondary causes in that they are only partially responsible for surface water availability. Other factors, such as state and federal fishery protections, substantially restrict water availability and project pumping operations, greatly limiting surface water deliveries to San Joaquin Valley farms.
We found that the Palmer Drought Severity Index (PDSI) is highly correlated with contract allocations for deliveries through the SWP and CVP, reaching 0.78 for both, as shown in Figure-3. (Note that the correlation between the current and lagged PDSI is only 0.34, which indicates that both variables can be included in the regression model.) Of even greater interest and relevance to PG&E’s forecasting approach, the correlation between the previous year’s PDSI and project water deliveries is almost as strong: 0.56 for the SWP and 0.53 for the CVP. This relationship can also be seen in Figure-3, where the PDSI line appears to lead changes in project water deliveries. The strength of this lagged indicator is not surprising, as both the California Department of Water Resources and the U.S. Bureau of Reclamation account for remaining storage and for streamflow that is a function of soil moisture and aquifers in the Sierras.
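The regression structure described above, with current and prior-year PDSI as separate explanatory variables, can be sketched as follows. The data are synthetic stand-ins (real inputs would be annual PDSI values and SWP/CVP contract allocation fractions):

```python
import numpy as np

# Sketch of a contract-allocation regression on current and lagged PDSI.
# Synthetic data: allocations generated from known coefficients plus noise.
rng = np.random.default_rng(0)
n = 25
pdsi = rng.normal(0, 2, n)                       # current water-year PDSI
pdsi_lag = np.concatenate(([0.0], pdsi[:-1]))    # prior water-year PDSI
alloc = 0.6 + 0.08 * pdsi + 0.05 * pdsi_lag + rng.normal(0, 0.05, n)

# Ordinary least squares with an intercept
X = np.column_stack([np.ones(n), pdsi, pdsi_lag])
coef, *_ = np.linalg.lstsq(X, alloc, rcond=None)
print(dict(zip(["intercept", "pdsi", "pdsi_lag"], coef.round(3))))
```

Because the current and lagged PDSI are only weakly correlated with each other (0.34 in the actual data), OLS can separate their effects cleanly, which is the point made in the parenthetical above.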
Further, comparing the inverse of water delivery allocations (i.e., the undelivered contract shares) to annual agricultural sales, we can see in Figure-4 how agricultural load has risen since 1995 as the delivered contract allocations have fallen (i.e., as the undelivered amount has risen). The decline in contract allocations is only partially related to the amount of precipitation and runoff available. In 2017, which was among the wettest years on record, SWP contractors received only 85% of their allocations, while the SWP delivered 100% every year from 1996 to 1999. The CVP has reached a 100% allocation only once since 2006, while it regularly delivered above 90% before 2000. Changes in contract allocations dictated by regulatory actions are clearly a strong driver of the growth in agricultural pumping loads, though the ongoing drought also plays a key role. The combination of the forecasted PDSI and the lagged PDSI of the just-concluded water year can be used to capture this relationship.
Finally, a “normal” water year is in fact rare, occurring in only 20% of the last 40 years. Over time, the best representation of both surface water availability and the electrical load dependent on it is a weighted average across the probabilities of the different water year conditions.
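As a sketch, the probability-weighted forecast works like this. The probabilities and per-condition loads below are illustrative assumptions, not AECA’s actual figures:

```python
# Probability-weighted agricultural load forecast across water-year conditions.
# Probabilities and loads are illustrative assumptions only.
conditions = {
    # condition: (probability, forecast agricultural load in GWh)
    "wet":    (0.40, 3200),
    "normal": (0.20, 4000),   # "normal" occurs only ~20% of the time
    "dry":    (0.40, 5100),
}

expected_load = sum(p * load for p, load in conditions.values())
print(f"Probability-weighted forecast: {expected_load:.0f} GWh")
```

Note how assuming “normal” conditions every year (4,000 GWh here) understates the expected load whenever dry-year loads exceed wet-year loads by more than the wet-year shortfall, which is exactly the bias described above.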
Proposed Revised Agricultural Forecast
We prepared a new agricultural load forecast for 2021 implementing the changes recommended herein. In addition, the forecasted average agricultural rate was added as an explanatory variable and proved statistically significant. The accounts forecast was developed using most of the same variables as the sales forecast, reflecting the similar drivers of both sales and accounts.
Figure-5 compares the performance of AECA’s proposed model to the model PG&E filed in its 2021 General Rate Case. The backcast values from the AECA model have a correlation coefficient of 0.973 with recorded values, while PG&E’s sales forecast methodology achieves a correlation of only 0.742. Unlike PG&E’s model, almost all of the parameter estimates are statistically significant at the 99% confidence level, with only summer and fall rainfall being insignificant.
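For readers unfamiliar with the metric, the backcast-versus-recorded correlation is computed as follows. The data here are synthetic; the 0.973 and 0.742 figures above come from the actual AECA and PG&E models:

```python
import numpy as np

# How a backcast-vs-recorded correlation coefficient is computed (synthetic data).
rng = np.random.default_rng(1)
recorded = rng.normal(4000, 600, 25)              # 25 years of recorded sales (GWh)
backcast = recorded + rng.normal(0, 120, 25)      # a model's fitted (backcast) values

# Pearson correlation between recorded sales and the model's backcast
r = np.corrcoef(recorded, backcast)[0, 1]
print(f"correlation coefficient: {r:.3f}")
```

A model whose backcast errors are small relative to year-to-year load swings scores near 1.0; large unexplained volatility, as in the agricultural class under PG&E’s method, pulls the coefficient down.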
AECA’s accounts forecast model performs similarly, with a correlation of 0.976. The backcast and recorded data are compared in Figure-6. For water managers, this chart shows how the drilling of new groundwater wells is driven by a combination of factors, such as water conditions and electricity prices.