Tag Archives: electricity rates

Transmission: the hidden cost of generation

The cost of transmission for new generation has become a more salient issue. The CAISO found that distributed generation (DG) had displaced $2.6 billion in transmission investment by 2018. The value of displaced transmission can be determined from the utilities’ filings with FERC and the accounting for new power plant capacity. Using similar methodologies to calculate this cost in California and Kentucky, the incremental cost in both regions is $37 per megawatt-hour, or 3.7 cents per kilowatt-hour. This added cost about doubles the cost of utility-scale renewables compared to distributed generation.

When solar rooftop displaces utility generation, particularly during peak load periods, it also displaces the associated transmission that interconnects the plant and transmits that power to the local grid. And because power plants compete with each other for space on the transmission grid, the reduction in bulk power generation opens up that grid to send power from other plants to other customers.

The incremental cost of new transmission is driven by the installation of new generation capacity, since transmission delivers power to substations before it is distributed to customers. This incremental cost represents the long-term value of displaced transmission. When setting rates for rooftop solar in the NEM tariff, this amount should be used to calculate the net benefits for net energy metered (NEM) customers, who avoid the need for additional transmission investment by providing local resources rather than remote bulk generation.

  • In California, transmission investment additions were collected from the FERC Form 1 filings for 2017 to 2020 for PG&E, SCE and SDG&E. The Wholesale Base Total Revenue Requirements submitted to FERC were collected for the three utilities for the same period. The average fixed charge rate for the Wholesale Base Total Revenue Requirements was 12.1% over that period. That fixed charge rate is applied to the average of the transmission additions to determine the average incremental revenue requirement for new transmission for the period. The plant capacity installed in California for 2017 to 2020 is calculated from the California Energy Commission’s “Annual Generation – Plant Unit” dataset. (This metric is conservative because (1) it includes the entire state, while CAISO serves only 80% of the state’s load and the three utilities serve a subset of that, and (2) the list of “new” plants includes a number of repowered natural gas plants at sites with existing transmission. A more refined analysis would find an even higher incremental transmission cost.)

Based on this analysis, the appropriate marginal transmission cost is $171.17 per kilowatt-year. Applying the average CAISO load factor of 52%, the marginal cost equals $37.54 per megawatt-hour.
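As a rough check, the conversion from an annual capacity cost to an energy cost can be sketched as below. The $171.17/kW-year figure and 52% load factor are from the text; the formula is the standard load-factor conversion, and the small difference from the $37.54 result is rounding.

```python
# Convert a marginal transmission cost from $/kW-year to $/MWh using
# the average system load factor, per the calculation described above.

HOURS_PER_YEAR = 8760

def kw_year_to_per_mwh(cost_per_kw_year: float, load_factor: float) -> float:
    """Spread an annual $/kW capacity cost over the energy actually
    delivered at the given load factor, returning $/MWh."""
    kwh_per_kw = HOURS_PER_YEAR * load_factor  # annual kWh per kW of capacity
    return cost_per_kw_year / kwh_per_kw * 1000.0

print(f"${kw_year_to_per_mwh(171.17, 0.52):.2f}/MWh")  # ~$37.6/MWh
```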

  • In Kentucky, Kentucky Power is owned by American Electric Power (AEP), which operates in the PJM ISO. PJM has a market in financial transmission rights (FTR) that values relieving congestion on the grid in the short term. AEP files network service rates each year with PJM and FERC. The rate more than doubled over 2018 to 2021, an average annual increase of 26%.

Based on the addition of 22,907 megawatts of generation capacity in PJM over that period, the incremental cost of transmission was $196 per kilowatt-year or nearly four times the current AEP transmission rate. This equates to about $37 per megawatt-hour (or 3.7 cents per kilowatt-hour).

A new agricultural electricity use forecast method holds promise for water use management

Agricultural electricity demand is highly sensitive to water availability. Under “normal” conditions, the State Water Project (SWP) and Central Valley Project (CVP), as well as other surface water supplies, are key sources of irrigation water for many California farmers. Under dry conditions, these water sources can be sharply curtailed, even eliminated, at the same time irrigation requirements are heightened. Farmers then must rely more heavily on groundwater, which requires greater energy to pump than surface water, since groundwater must be lifted from deeper depths.

Over extended droughts, like the one from 2012 to 2016, groundwater levels decline and water must be pumped from ever deeper depths, requiring even more energy to meet crops’ water needs. As a result, even as land is fallowed in response to water scarcity, significantly more energy is required to water the remaining crops and livestock. Much less pumping is necessary in years with ample surface water supply, as rivers rise, soils become saturated, and aquifers recharge, raising groundwater levels.

The surface-groundwater dynamic results in significant variations in year-to-year agricultural electricity sales. Yet, PG&E has assigned the agricultural customer class a revenue responsibility based on the assumption that “normal” water conditions will prevail every year, without accounting for how inevitable variations from these circumstances will affect rates and revenues for agricultural and other customers.

This assumption results in an imbalance in revenue collection from the agricultural class that does not correct itself even over long time periods, harming agricultural customers most in drought years, when they can least afford it. Analysis presented by M.Cubed on behalf of the Agricultural Energy Consumers Association (AECA) in the 2017 PG&E General Rate Case (GRC) demonstrated that overcollections can be expected to exceed $170 million over two years of typical drought conditions, with an expected overcollection of $34 million in a two-year period. This collection imbalance also increases rate instability for other customer classes.

Figure-1 shows the difference between the loads forecasted for the agricultural class and system-wide, used to set rates in the annual ERRA Forecast proceedings (and in GRC Phase 2 every three years), and the actual recorded sales for 1995 to 2019. Notably, the single largest forecasting error for system-wide load was a sales overestimate of 4.5% in 2000 and a shortfall of 3.7% in 2019, while agricultural mis-forecasts ranged from an under-forecast of 39.2% in the midst of an extended drought in 2013 to an over-forecast of 18.2% in 1998, one of the wettest years on record. Load volatility in the agricultural sector is extreme in comparison to other customer classes.

Figure-2 shows the cumulative error caused by inadequate treatment of agricultural load volatility over the last 25 years. An unbiased forecasting approach would reflect a cumulative error of zero over time. The error in PG&E’s system-wide forecast has largely balanced out, even though the utility’s load pattern has shifted from significant growth over the first 10 years to stagnation and even decline. PG&E apparently has been able to adapt its forecasting methods for other classes relatively well over time.

The accumulated error for agricultural sales forecasting tells a different story. Over a quarter century the cumulative error reached 182%, nearly twice the annual sales for the Agricultural class. This cumulative error has consequences for the relative share of revenue collected from agricultural customers compared to other customers, with growers significantly overpaying during the period.
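The cumulative-error metric behind these figures can be sketched as follows, with made-up numbers standing in for the actual PG&E data: a forecast that assumes “normal” water every year accumulates error rather than averaging out to zero.

```python
# Illustrative sketch of the cumulative-error metric behind Figure-2.
# The sales numbers are hypothetical; an unbiased forecast's cumulative
# percent error should hover near zero over time.

def cumulative_pct_error(forecast, actual):
    """Sum of annual (forecast - actual) errors, expressed as a percent
    of a typical year's actual sales."""
    total_error = sum(f - a for f, a in zip(forecast, actual))
    avg_actual = sum(actual) / len(actual)
    return 100.0 * total_error / avg_actual

# Hypothetical: a "normal-year" forecast overshoots slightly in wet years
# but misses badly in droughts, when pumping loads spike.
actual   = [100, 120, 140, 95, 130]   # GWh, swings with water conditions
forecast = [110, 110, 110, 110, 110]  # flat "normal-year" assumption
print(f"cumulative error: {cumulative_pct_error(forecast, actual):.0f}%")
```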

Agricultural load forecasting can be revised to better address how variations in water supply availability drive agricultural load. Most importantly, the final forecast should be constructed from a weighted average of forecasted loads under normal, wet and dry conditions. The forecast of agricultural accounts also must be revamped to include these elements. In addition, the load forecast should include the influence of rates and a publicly available data source on agricultural income such as that provided by the USDA’s Economic Research Service.

The Forecast Model Can Use An Additional Drought Indicator and Forecasted Agricultural Rates to Improve Its Forecast Accuracy

The more direct relationship for determining agricultural class energy needs is between the allocation of surface water via the state and federal water projects and the need to pump groundwater when adequate surface water is not available from the SWP and the federal CVP. The SWP and CVP are critical to California agriculture because little precipitation falls during the state’s Mediterranean-climate summer, and snow-melt runoff must be stored and delivered via aqueducts and canals. Surface water availability, therefore, is the primary determinant of agricultural energy use, while precipitation and related factors, such as drought, are secondary causes in that they are only partially responsible for surface water availability. Other factors, such as state and federal fishery protections, substantially restrict water availability and project pumping operations, greatly limiting surface water deliveries to San Joaquin Valley farms.

We found that the Palmer Drought Stress Index (PDSI) is highly correlated with contract allocations for deliveries through the SWP and CVP, reaching 0.78 for both of them, as shown in Figure-3. (Note that the correlation between the current and lagged PDSI is only 0.34, which indicates that both variables can be included in the regression model.) Of even greater interest and relevance to PG&E’s forecasting approach, the correlation between the previous year’s PDSI and project water deliveries is almost as strong, 0.56 for the SWP and 0.53 for the CVP. This relationship can also be seen in Figure-3, as the PDSI line appears to lead changes in project water deliveries. The strength of this lagged indicator is not surprising, as both the California Department of Water Resources and the U.S. Bureau of Reclamation account for remaining storage and for streamflow that is a function of soil moisture and Sierra aquifers.
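The lagged-correlation check can be sketched in a few lines of pure Python. The PDSI and allocation series below are illustrative placeholders, not the actual data behind Figure-3.

```python
# Sketch of the contemporaneous and one-year-lagged correlation check
# described above, using hypothetical PDSI and SWP allocation series.
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

pdsi      = [2.1, -1.5, -3.0, 0.5, 3.2, -2.8, 1.0, -4.1]  # water-year PDSI
swp_alloc = [0.9, 0.5, 0.2, 0.6, 1.0, 0.3, 0.7, 0.15]     # contract share

# Contemporaneous correlation, lagged correlation (prior year's PDSI vs.
# this year's allocation), and the PDSI autocorrelation that justifies
# including both current and lagged PDSI in one regression:
print(pearson(pdsi, swp_alloc))
print(pearson(pdsi[:-1], swp_alloc[1:]))  # lag PDSI by one year
print(pearson(pdsi[:-1], pdsi[1:]))
```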

Further, comparing the inverse of water delivery allocations (i.e., the undelivered contract shares) to annual agricultural sales, we can see in Figure-4 how agricultural load has risen since 1995 as the contract allocations delivered have fallen (i.e., the undelivered amount has risen). The decline in contract allocations is only partially related to the amount of precipitation and runoff available. In 2017, which was among the wettest years on record, SWP contractors received only 85% of their allocations, while the SWP provided 100% every year from 1996 to 1999. The CVP has reached a 100% allocation only once since 2006, while it regularly delivered above 90% prior to 2000. Changes in contract allocations dictated by regulatory actions are clearly a strong driver in the growth of agricultural pumping loads, though the ongoing drought remains a key factor. The combination of the forecasted PDSI and the lagged PDSI of the just-concluded water year can be used to capture this relationship.

Finally, a “normal” water year is actually rare, occurring in only 20% of the last 40 years. Over time, the best representation of both surface water availability and the electrical load dependent on it is a weighted average across the probabilities of the different water year conditions.
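A minimal sketch of that weighted-average construction follows. The loads and wet/dry frequencies are hypothetical; only the 20% “normal” share comes from the text.

```python
# Probability-weighted agricultural load forecast across water-year types,
# as recommended above. All loads and the wet/dry weights are illustrative.

water_year_probs = {"wet": 0.40, "normal": 0.20, "dry": 0.40}  # sums to 1.0
load_by_year_type = {"wet": 1_100, "normal": 1_250, "dry": 1_600}  # GWh, hypothetical

expected_load = sum(p * load_by_year_type[t] for t, p in water_year_probs.items())
print(f"probability-weighted agricultural load: {expected_load:.0f} GWh")
```

Because dry-year loads are much higher than normal-year loads, the expectation sits well above the “normal” forecast, which is the bias the text identifies.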

Proposed Revised Agricultural Forecast

We prepared a new agricultural load forecast for 2021 implementing the changes recommended herein. In addition, the forecasted average agricultural rate was added as an explanatory variable and proved statistically significant. The account forecast was developed using most of the same variables as the sales forecast, reflecting the similar drivers of sales and accounts.

Figure-5 compares the performance of AECA’s proposed model to PG&E’s model filed in its 2021 General Rate Case. The backcasted values from the AECA model have a correlation coefficient of 0.973 with recorded values,[1] while PG&E’s sales forecast methodology has a correlation of only 0.742.[2] Unlike PG&E’s model, almost all of the parameter estimates are statistically significant at the 99% confidence level, with only summer and fall rainfall being insignificant.[3]

AECA’s accounts forecast model reflects similar performance, with a correlation of 0.976. The backcast and recorded data are compared in Figure-6. For water managers, this chart shows how new groundwater wells are driven by a combination of factors such as water conditions and electricity prices.




Can Net Metering Reform Fix the Rooftop Solar Cost Shift?: A Response

A response to Severin Borenstein’s post at UC Energy Institute where he posits a large subsidy flowing to NEM customers and proposes an income-based fixed charge as the remedy. Borenstein made the same proposal at a later CPUC hearing.

The CPUC is now considering reforming the current net energy metering (NEM) tariffs in the NEM 3.0 proceeding. And the State Legislature is considering imposing a change by fiat in AB 1139.

First, to frame this discussion: economists are universally guilty of status quo bias, in which we (since I’m one) too often assume that changing from the current physical and institutional arrangement is a “cost,” on the implicit assumption that the current situation was somehow arrived at via a relatively benign economic process. (The debate over reparations for slavery revolves around this issue.) The same is true for those who claim that NEM customers are imposing exorbitant costs on other customers.

There are several issues to be considered in this analysis.

1) In looking at the history of the NEM rate, the emergence of a misalignment between the retail rates that compensate solar customers and the true marginal costs of providing service (which are much more than the hourly wholesale price–more on that later) is a recent event. When NEM 1.0 was established, residential rates were on the order of 15 c/kWh and renewable power contracts were being signed at 12 to 15 c/kWh. In addition, transmission costs were adding 2 to 4 c/kWh. This was the case through 2015; NEM 1.0 expired in 2016. NEM 2.0 customers were put on TOU rates with evening peaks, so their daytime output is priced at off-peak rates midday while they pay higher on-peak rates for usage. This despite the fact that the difference between peak and off-peak wholesale costs is generally on the order of a penny per kWh. (PG&E NEM customers also pay a $10/month fixed charge that is close to the service connection cost.) Calculating the net financial flows is more complicated and deserves a more careful look than a simple back-of-the-envelope calculation can capture.

2) If we’re going to dig into subsidies, the first place to start is with utility and power plant shareholders. If we use the current set of “market price benchmarks” (which are problematic, as I’ll discuss), out of PG&E’s $5.2 billion annual generation costs, over $2 billion or 40% are “stranded costs” that are subsidies to shareholders for bad investments. In an efficient marketplace those shareholders would have to recover those costs through competitively set prices, as Jim Lazar of the Regulatory Assistance Project has pointed out. One might counter that those long-term contracts were signed on behalf of these customers, who now must pay for them. Of course, overlooking whether those contracts were really properly evaluated, that’s also true for customers who have taken energy efficiency measures and for Elon Musk as he moves to Texas–we aren’t discussing whether they also deserve a surcharge to cover these costs. But beyond this, on an equity basis, NEM 1.0 customers at least made investments based on an expectation that the CPUC did nothing to dissuade them of (we have documentation of how at least one county government was misled by PG&E on this issue in 2016). If IOUs are entitled to financial protection (and the CPUC has failed to enact the portfolio management incentive specified in AB57 in 2002), then so are those NEM customers. If, on the other hand, we can reopen cost recovery of those poor portfolio management decisions that have created the incentive for retail customers to try to exit, THEN we can revisit those NEM investments. But until then, those NEM customers are no more subsidized than the shareholders.

3) What is the true “marginal cost”? First we have the problem of temporal consistency between generation and transmission and distribution grid (T&D) costs. Economists love looking at generation because there’s an hourly (or subhourly) “short run” price that coincides nicely with economic theory and calculus. On the other hand, those darn T&D costs are lumpy and discontinuous. The “hourly” cost for T&D is basically zero, and the annual cost is not a whole lot better. The current methods debated in the General Rate Cases (GRC) rely on aggregating piecemeal investments without looking at changing costs as a whole. Probably the most appropriate metric for T&D is the incremental change in total costs divided by the number of new customers. Given how fast utility rates have been rising over the last decade, I’m pretty sure that the “marginal cost” per customer is higher than the average cost–in fact, for average cost to be rising, marginal cost must by definition be higher. (And with static and falling loads, I’m not even sure how we would calculate the marginal cost per kWh. We can derive the marginal cost this way from FERC Form 1 data.) So how do we meld one marginal cost that might be on a 5-minute basis with one that is on a multi-year timeframe? This isn’t an easy answer, and “rough justice” can cut either way on what’s the truly appropriate approximation.
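That per-customer metric can be sketched as follows; the cost and customer figures are hypothetical placeholders for what would come from FERC Form 1 filings and utility customer counts.

```python
# Sketch of the "incremental change in total costs per new customer"
# metric suggested above. Inputs are hypothetical.

def marginal_cost_per_customer(cost_by_year, customers_by_year):
    """Change in total T&D revenue requirement divided by the change in
    customer count over the same period. Returns None when the customer
    base is flat, which is exactly the case the text flags as awkward."""
    d_cost = cost_by_year[-1] - cost_by_year[0]
    d_cust = customers_by_year[-1] - customers_by_year[0]
    return None if d_cust == 0 else d_cost / d_cust

# Hypothetical: costs rise $600M while only 100,000 customers are added,
# implying a $6,000 marginal cost per new customer.
print(marginal_cost_per_customer([8.0e9, 8.6e9], [5_400_000, 5_500_000]))
```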

4) Even if the generation cost is measured subhourly, the current wholesale markets are poor reflections of those costs. Significant market distortions prevent those costs from being fully reflected: unit commitment costs are often subsidized through out-of-market payments; reliability regulation forces investment that pushes capacity costs out of the hourly market; added incremental resources–whether for added load such as electrification or to meet regulatory requirements–are largely zero-operating-cost renewables, none of which rely on hourly market revenues for financial solvency; in California, generators face little or no bankruptcy risk, which allows them to underprice their bids; and on the flip side, capacity price adders such as ERCOT’s ORDC overprice the value of reliability to customers as a backdoor way to let generators recover investments through the hourly market. So what is the true marginal cost of generation? Pulling down CAISO prices doesn’t look like a good primary source of data.

We’re left with the question of what is the appropriate benchmark for measuring a “subsidy”? Should we also include the other subsidies that created the problem in the first place?

AB1139 would undermine California’s efforts on climate change

Assembly Bill 1139 is offered as a supposed solution to unaffordable electricity rates for Californians. Unfortunately, the bill would undermine the state’s efforts to reduce greenhouse gas emissions by crippling several key initiatives that rely on wider deployment of rooftop solar and other distributed energy resources.

  • It will make complying with the Title 24 building code, which requires solar panels on new houses, prohibitively expensive. The new code pushes new houses to net zero electricity usage. AB 1139 would create a conflict with existing state laws and regulations.
  • The state’s initiative to increase housing and improve affordability will be dealt a blow if new homeowners have to pay for panels that won’t save them money.
  • It will make transportation electrification and the Governor’s executive order aiming for 100% new EVs by 2035 much more expensive because it will make it much less economic to use EVs for grid charging and will reduce the amount of direct solar panel charging.
  • Rooftop solar was installed as a long-term resource based on a contractual commitment by the utilities to maintain pricing terms for at least the life of the panels. Undermining that investment will undermine the incentive for consumers to participate in any state-directed conservation program to reduce energy or water use.

If the State Legislature wants to reduce ratepayer costs by revising contractual agreements, the more direct solution is to direct renegotiation of RPS PPAs. For PG&E, these contracts represent more than $1 billion a year in excess costs, which dwarfs any actual subsidies to NEM customers. The fact is that solar rooftops displaced the very expensive renewables that the IOUs signed, and probably led to the cancellation of auctions around 2015 that would have further encumbered ratepayers.

The bill would force net energy metered (NEM) customers to pay twice for their power, once for the solar panels and again for the poor portfolio management decisions by the utilities. The utilities claim that $3 billion is being transferred from customers without solar to NEM customers. In SDG&E’s service territory, the claim is that the subsidy costs other ratepayers $230 per year, which translates to $1,438 per year for each NEM customer. But based on an average usage of 500 kWh per month, that implies each NEM customer is receiving a subsidy of $0.24/kWh compared to an average rate of $0.27 per kWh. In simple terms, SDG&E is claiming that rooftop solar saves almost nothing in avoided energy purchases and system investment. This contrasts with the presumption that energy efficiency improvements save utilities in avoided energy purchases and system investments. The math only works if one agrees with the utilities’ premise that they are entitled to sell power to serve an entire customer’s demand–in other words, solar rooftops shouldn’t exist.
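The arithmetic behind that comparison is easy to verify, using the figures from the text ($1,438 per year per NEM customer and 500 kWh of monthly usage):

```python
# Check of the implied per-kWh subsidy in the SDG&E claim described above.
claimed_subsidy_per_nem = 1438   # $/year per NEM customer, per the utility claim
avg_usage_kwh_per_month = 500
avg_rate = 0.27                  # $/kWh average retail rate

annual_kwh = avg_usage_kwh_per_month * 12
implied_subsidy_per_kwh = claimed_subsidy_per_nem / annual_kwh
print(f"implied subsidy: ${implied_subsidy_per_kwh:.2f}/kWh "
      f"vs. average rate ${avg_rate:.2f}/kWh")
```

The implied $0.24/kWh subsidy against a $0.27/kWh average rate is what supports the observation that the claim leaves almost no avoided cost for rooftop solar.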

Finally, this initiative would squash a key motivator that has driven enthusiasm in the public for growing environmental awareness. The message from the state would be that we can only rely on corporate America to solve our climate problems and that we can no longer take individual responsibility. That may be the biggest threat to achieving our climate management goals.

ERCOT has the peak period scarcity price too high

The freeze and resulting rolling outages in Texas in February highlighted the unique structure of the power market there. Customers and businesses were left with huge bills that have little to do with actual generation expenses. This is a consequence of Texas’s attempt to fit an arcane interpretation of an economic principle in which generators should be able to recover their investments from sales in just a few hours of the year. The problem is that basic accounting for those cash flows does not match the true value of the power in those hours.

The Electric Reliability Council of Texas (ERCOT) runs an unusual wholesale electricity market that supposedly relies solely on hourly energy prices to provide the incentive for new generation investment. However, ERCOT in fact uses the same type of administratively set subsidies to create enough potential revenue to cover investment costs. Further, a closer examination reveals that this price adder is set too high relative to actual consumer value for peak load power. All of this leads to the conclusion that relying solely on short-run hourly prices as a proxy for the market value that accrues to new entrants is a misplaced metric.

The total ERCOT market first relies on side payments to cover commitment costs (which creates barriers to entry but that’s a separate issue) and second, it transfers consumer value through to the Operating Reserve Demand Curve (ORDC) that uses a fixed value of lost load (VOLL) in an arbitrary manner to create “opportunity costs” (more on that definition at a later time) so the market can have sufficient scarcity rents. This second price adder is at the core of ERCOT’s incentive system–energy prices alone are insufficient to support new generation investment. Yet ERCOT has ignored basic economics and set this value too high based on both available alternatives to consumers and basic regional budget constraints.

I started with an estimate of the number of hours in which prices need the ORDC to be at the full VOLL of $9000/MWH to recover the annual revenue requirements of a combustion turbine (CT) investment, based on the parameters we collected for the California Energy Commission. It turns out to be about 20 to 30 hours per year. Even if the cost in Texas is 30% less, this is still more than 15 hours annually, every single year on average. (That has not been happening in Texas to date.) Note that for other independent system operators (ISO) such as the California ISO (CAISO), the price cap is $1,000 to $2,000/MWH.
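A back-of-envelope version of that hour count can be sketched as follows. The ~$200/kW-year annualized CT fixed cost is an assumption in the spirit of the CEC parameters mentioned, not the actual figure from that work.

```python
# How many hours per year at the full $9,000/MWh ORDC price does a
# combustion turbine need to recover its annualized fixed cost?
# The $200/kW-year fixed cost is an assumed placeholder.

voll = 9000.0          # $/MWh price at the ORDC cap
ct_fixed_cost = 200.0  # $/kW-year, assumed annualized CT cost

# $/kW-year divided by the $/kWh earned per hour of full-price operation:
hours_needed = ct_fixed_cost / (voll / 1000.0)
print(f"hours at full VOLL needed per year: {hours_needed:.1f}")
```

With a fixed cost anywhere in the $180-$270/kW-year range, the answer lands in the 20-30 hour band cited above.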

I then calculated the cost of a customer instead using a home generator to meet load during those hours assuming a life of 10 to 20 years on the generator. That cost should set a cap on the VOLL to residential customers as the opportunity cost for them. The average unit is about $200/kW and an expensive one is about $500/kW. That cost ranges from $3 to $5 per kWh or $3,000 to $5,000/MWH. (If storage becomes more prevalent, this cost will drop significantly.) And that’s for customers who care about periodic outages–most just ride out a distribution system outage of a few hours with no backup. (Of course if I experienced 20 hours a year of outage, I would get a generator too.) This calculation ignores the added value of using the generator for other distribution system outages created by events like a hurricane hitting every few years, as happens in Texas. That drives down this cost even further, making the $9,000/MWH ORDC adder appear even more distorted.
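The amortization behind that generator bound can be sketched as below. The $200-$500/kW capital costs and 10-20 year life are from the text; the annual hours of scarcity use is an assumption the result is quite sensitive to, and fuel is ignored.

```python
# Amortized cost of a home backup generator per MWh of outage coverage,
# as a bound on residential VOLL. Annual run hours are assumed.

def backup_cost_per_mwh(capital_per_kw, life_years, hours_per_year):
    """Straight-line capital amortization spread over annual outage hours,
    converted from $/kWh to $/MWh. Fuel cost is ignored."""
    annual_cost = capital_per_kw / life_years  # $/kW-year
    return annual_cost / hours_per_year * 1000.0

# e.g. a $200/kW unit over 10 years, run ~7 scarcity hours/year:
print(backup_cost_per_mwh(200, 10, 7))   # about $2,857/MWh
# an expensive $500/kW unit over 10 years, run 10 hours/year:
print(backup_cost_per_mwh(500, 10, 10))  # exactly $5,000/MWh
```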

The second calculation I did was to look at the cost of an extended outage, using the outages during Hurricane Harvey in 2017 as a benchmark event. Based on ERCOT and U.S. Energy Information Administration reports, it looks like 1.67 million customers were without power for 4.5 days. Using the Texas gross state product (GSP) of $1.9 trillion as reported by the St. Louis Federal Reserve Bank, I calculated the economic value lost over 4.5 days, assuming a 100% loss, at $1.5 billion. If we assume that the electricity outage is 100% responsible for that loss, the lost economic value is just under $5,000/MWH of unserved energy. This represents the budget constraint on willingness to pay to avoid an outage. In other words, the Texas economy can’t afford to pay $9,000/MWH.

The recent set of rolling blackouts in Texas provides another opportunity to update this budget constraint calculation in a different circumstance. This can be done by determining the reduction in electricity sales and the decrease in state gross product in the period.

Using two independent methods, I come up with an upper bound of $5,000/MWH, and likely much less. One commentator pointed out that ERCOT would not be able to achieve a sufficient planning reserve level at this price, but that statement is based on the premises that short-run hourly prices reflect full market values and that they will deliver the “optimal” resource mix. Neither is true.

This type of hourly pricing overemphasizes peak load reliability value and undervalues other attributes such as sustainability and resilience. These prices do not reflect the full incremental cost of adding new resources that deliver additional benefits during non-peak periods such as green energy, nor the true opportunity cost that is exercised when a generator is interconnected rather than during later operations. Texas has overbuilt its fossil-fueled generation thanks to this paradigm. It needs an external market based on long-run incremental costs to achieve the necessary environmental goals.

What is driving California’s high electricity prices?

This report by Next10 and the University of California Energy Institute was prepared for the CPUC’s en banc hearing February 24. The report compares average electricity rates against those of other states, and against an estimate of “marginal costs.” (The latter estimate is too low, and appears to rely mostly on the E3 Avoided Cost Calculator.) It shows those rates to be multiples of the marginal costs. (PG&E’s General Rate Case workpapers calculate that its rates are about double the marginal costs estimated in that proceeding.) The study attempts to list the reasons why the authors think these rates are too high, but it misses the real drivers of these rate increases. It also uses an incorrect method for calculating the market value of acquisitions and deferred investments, using the current market value instead of the value at the time the decisions were made.

We can explore the reasons why PG&E’s rates are so high, much of which is applicable to the other two utilities as well. Starting with generation costs, PG&E’s portfolio mismanagement is not explained away with a simple assertion that the utility bought when prices were higher. In fact, PG&E failed in several ways.

First, PG&E knew about the risk of customer exit as early as 2010, as revealed during the PCIA rulemaking hearings in 2018. PG&E continued to procure as though it would be serving its entire service area instead of planning for the rise of CCAs. Further, PG&E was also told as early as 2010 (in my GRC testimony) that it was consistently forecasting too high, but it didn’t bother to correct the error. Today, service area load is basically at the same level it was a decade ago.

Second, PG&E could have procured in stages rather than in the two large rounds of requests for offers (RFOs) that it finished by 2013. By 2011 PG&E should have realized that solar costs were dropping quickly (if it had read the CEC Cost of Generation Report that I managed) and rolled out the RFOs in a manner that took advantage of that improvement. Further, it could have signed PPAs for the 10-year minimum period under state law rather than the industry-standard 30 years. PG&E was managing its portfolio in the standard-practice manner, which was foolish in the face of what was occurring.

Third, PG&E failed to offer part of its portfolio for sale to CCAs as they departed until 2018. Instead, PG&E could have unloaded its expensive portfolio in stages starting in 2010. The ease of the recent RPS sales illustrates that PG&E’s claims about creditworthiness and other problems had no foundation.

I calculated what the cost of PG&E’s mismanagement has been here. While SCE and SDG&E have not faced the same degree of exit by CCAs, the same basic problems exist in their portfolios.

Another factor for PG&E is the fact that ratepayers have paid twice for Diablo Canyon. I explain here how PG&E fully recovered its initial investment costs by 1998, but as part of restructuring got to roll most of its costs back into rates. Fortunately these units retire by 2025 and rates will go down substantially as a result.

In distribution costs, both PG&E and SCE requested over $2 billion for “new growth” in each of their GRCs since 2009, despite my testimony showing that the growth was not going to materialize–and it did not materialize. If the growth were arising from the addition of new developments, the developers and new customers should have been paying for those additions through the line extension rules that assign that cost responsibility. The utilities’ distribution planning process is opaque. When asked for the workpapers underlying the planning process, both PG&E and SCE responded that the entirety was contained in the Word tables in their testimonies. The growth projections had not been reconciled with the system load forecasts until this latest GRC, so the totals of the individual planning units exceeded the projected total system growth (which was itself too high when compared to both other internal growth projections and realized growth). The result is a gross overinvestment in distribution infrastructure, with substantial overcapacity in many places.

For transmission, the true incremental cost has not been fully reported which means that other cost-effective solutions, including smaller and closer renewables, have been ignored. Transmission rates have more than doubled over the last decade as a result.

The Next10 report does not appear to reflect the full value of public purpose program spending on energy efficiency, in large part because it uses a short-run estimate of marginal costs. The report similarly underestimates the value of behind-the-meter solar rooftops. The correct method for both is to use the market value of the deferred resources (generation, transmission and distribution) as of when those resources were added. For example, a solar rooftop installed in 2013 displaced utility-scale renewables that cost more than $100 per megawatt-hour. That output should not be valued at the current market price of less than $60 per megawatt-hour, because the displaced investment was not made on a speculative basis; it was a contract based on embedded utility costs.
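A vintage-based valuation along these lines can be sketched as follows. Only the 2013 value (over $100/MWh) and the roughly current value (under $60/MWh) come from the text; the 2017 figure and the function itself are illustrative assumptions.

```python
# Sketch of vintage-based valuation: value behind-the-meter solar at the
# market cost of the utility-scale resource it deferred in its installation
# year, not at today's market price. All figures are illustrative.

DEFERRED_VALUE_BY_VINTAGE = {  # $/MWh of the displaced resource
    2013: 100.0,  # long-term renewable contract prices at installation
    2017: 80.0,   # assumed intermediate value
    2020: 60.0,   # roughly today's market value
}

def rooftop_value(install_year: int, annual_mwh: float) -> float:
    """Annual value of a rooftop system, priced at its vintage year."""
    return DEFERRED_VALUE_BY_VINTAGE[install_year] * annual_mwh

# A 2013 system producing 6 MWh/year displaces $600/year of contracted
# utility-scale supply, not the $360/year a current-price comparison implies.
print(rooftop_value(2013, 6.0))  # 600.0
```

The point of the sketch is that the valuation key is the installation year, not the evaluation year.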

How to increase renewables? Change the PCIA

California is pushing for an increase in renewable generation to power its electrification of buildings and the transportation sector. Yet the state maintains a policy that will impede reaching that goal: the power cost indifference adjustment (PCIA) rate discourages the rapidly growing community choice aggregators (CCAs) from investing directly in new renewable generation.

As I wrote recently, California’s PCIA rate charged as an exit fee on departed customers is distorting the electricity markets in a way that increases the risk of another energy crisis similar to the debacle in 2000 to 2001. An analysis of the California Independent System Operator markets shows that market manipulations similar to those that created that crisis likely led to the rolling blackouts last August. Unfortunately, the state’s energy agencies have chosen to look elsewhere for causes.

The even bigger obstacle to reaching clean energy goals is created by the current structure of the PCIA. The PCIA varies inversely with market prices: as market prices rise, the PCIA charged to CCAs and direct access (DA) customers falls. For these customers, the overall retail rate is largely hedged against market variation and risk through this inverse relationship.

The portfolios of the incumbent utilities, i.e., Pacific Gas and Electric, Southern California Edison and San Diego Gas and Electric, are dominated by long-term contracts with renewables and capital-intensive utility-owned generation. For example, PG&E is paying a risk premium of nearly 2 cents per kilowatt-hour for its investment in these resources. These portfolios are largely impervious to market price swings now, but at a significant cost. The PCIA passes this hedge along to CCAs and DA customers, which discourages those customers from making their own long-term investments. (I wrote earlier about how this mechanism discouraged investment in new capacity for reliability purposes to provide resource adequacy.)

The legacy utilities are not in a position to acquire new renewables; they are forecasting falling loads and shrinking customer bases as CCAs grow. So the state cannot look to those utilities to meet California's ambitious goals; it must entrust CCAs with that task. The CCAs are already game, with many of them offering much more aggressive "green power" options to their customers than PG&E, SCE or SDG&E.

But CCAs place themselves at greater financial risk under the current rules if they sign more long-term contracts. If market prices fall, they must bear the risk of overpaying for both the legacy utility’s portfolio and their own.

The best solution is to offer CCAs the opportunity to make a fixed or lump-sum exit fee payment based on the market value of the legacy utility's portfolio at the moment of departure. This would untie the PCIA from variations in future market prices, and CCAs would then construct portfolios that hedge their own risks rather than relying on the implicit hedge embedded in the legacy utility's portfolio. The legacy utilities also would have to manage their bundled customers' portfolios without relying on a cross subsidy from departed customers to mitigate that risk.
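One way to construct such a fixed payment is to capitalize the portfolio's above-market cost at the departure date. The sketch below is illustrative only; the prices, load, horizon and discount rate are hypothetical assumptions, not figures from any actual PCIA filing.

```python
# Sketch of a fixed, upfront exit fee set at the moment of departure.
# All figures are hypothetical assumptions for illustration.

def lump_sum_exit_fee(portfolio_cost: float, market_value: float,
                      departing_mwh: float, years: int, discount: float) -> float:
    """Present value of the above-market portfolio cost allocated to the
    departing load, fixed at departure and never trued up afterward."""
    annual_above_market = (portfolio_cost - market_value) * departing_mwh
    return sum(annual_above_market / (1 + discount) ** t
               for t in range(1, years + 1))

# A CCA departing with 100,000 MWh/year of load, a $15/MWh above-market
# portfolio cost at departure, a 10-year horizon and a 5% discount rate
# would pay roughly $11.6 million, once, regardless of later market prices.
fee = lump_sum_exit_fee(90.0, 75.0, 100_000, 10, 0.05)
print(fee)
```

Because the fee is fixed at departure, subsequent market price swings fall on whoever holds the portfolio, which is the incentive alignment the proposal aims for.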

The PCIA is heading California toward another energy crisis

The California ISO Department of Market Monitoring (DMM) notes in its comments to the CPUC on proposals to address the resource adequacy shortages during last August's rolling blackouts that the number of fixed-price contracts is decreasing. In DMM's opinion, this leaves California's market exposed to the potential for greater market manipulation. The diminishing tolling agreements and longer-term contracts that DMM observes are the result of the structure of the power cost indifference adjustment (PCIA) or "exit fee" for departed community choice aggregation (CCA) and direct access (DA) customers. The IOUs are left shedding contracts as their loads fall.

The PCIA is pegged to short-run market prices (even more so with the true-up feature added in 2019). The PCIA mechanism works as a price hedge against short-term market values for CCAs and suppresses the incentive to sign long-term contracts. This discourages CCAs from signing long-term agreements with renewables.

The PCIA acts as an almost perfect hedge on the retail price for departed-load customers: an increase in CAISO and capacity market prices leads to a commensurate decrease in the PCIA, so the overall retail rate remains the same regardless of where the market moves. The IOUs are all so long on their resources that market price variation has a relatively small impact on their overall rates.
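The near-perfect hedge can be shown with a stylized calculation; the $90/MWh portfolio cost and the market prices below are hypothetical, chosen only to make the offsetting movement visible.

```python
# Stylized illustration of the PCIA as a hedge for departed-load customers.
# All prices are hypothetical ($/MWh).

PORTFOLIO_COST = 90.0  # IOU portfolio cost, fixed by long-term contracts

def pcia(market_price: float) -> float:
    """The PCIA recovers the portfolio's above-market cost, so it falls
    one-for-one as market prices rise (floored at zero)."""
    return max(PORTFOLIO_COST - market_price, 0.0)

def departed_customer_rate(market_price: float) -> float:
    """A CCA or DA customer pays the market-based energy price plus the PCIA."""
    return market_price + pcia(market_price)

# Whether the market clears at $40, $55 or $70, the all-in rate stays at $90:
for price in (40.0, 55.0, 70.0):
    print(departed_customer_rate(price))  # 90.0 each time
```

With the all-in rate pinned at the portfolio cost, a departed customer gains nothing from hedging on its own, which is exactly the disincentive described above.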

This situation is almost identical to the competition transition charge (CTC) implemented during restructuring starting in 1998. Then, too, energy service providers (ESPs) had little incentive to hedge their portfolios because the CTC was tied directly to the CAISO/PX prices, so the CTC moved inversely with market prices. Only when CAISO prices exceeded the average cost of the IOUs' portfolios did the high prices become a problem for ESPs and their customers.

As in 1998, the solution is to have a fixed, upfront exit fee paid by departing customers that is not tied to variations in future market prices. (Commissioner Jesse Knight's proposal along this line was rejected by the other commissioners.) Load serving entities (LSEs) would then be left to hedge their own portfolios on their own terms, which will lead them to sign more long-term agreements of various kinds.

The alternative of forcing CCAs and ESPs to sign fixed-price contracts under the current PCIA structure makes them bear the risk burden of both departed and bundled customers, while the IOUs are able to pass the risks of their long-term agreements through the PCIA.

California would be well served by the DMM pointing out this inherent structural problem. We should learn from our previous errors.

PG&E’s bankruptcy—what’s happened and what’s next?

The wildfires that erupted in Sonoma County the night of October 8, 2017 signaled a profound change, not just in how we must manage risks, but in the finances of our basic utility services. Forest fires had been distant events that, while growing in size over the last several decades, had not reached where people lived and worked. Southern California had experienced several large-scale fires, and the Oakland fire in 1991 had raced through a large city, but no one was truly ready for what happened that night, including Pacific Gas and Electric. That night is ultimately why the company declared bankruptcy.

PG&E had already been punished for its poor management of its natural gas pipeline system after an explosion killed eight people in San Bruno in 2010. The company was convicted in federal court, fined $3 million and placed on supervised probation under a federal judge.

PG&E also has an extensive transmission and distribution network, with more than 100,000 miles of wires. Over a quarter of that network runs through areas with significant wildfire risk. PG&E already had been charged with starting several forest fires, including the Butte fire in 2015, and its vegetation management program had been called out as inadequate by the California Public Utilities Commission (CPUC) since the 1990s. The CPUC caught PG&E diverting $495 million from maintenance spending to shareholders from 1992 to 1997; PG&E was fined $29 million. Meanwhile, two other utilities, Southern California Edison (SCE) and San Diego Gas and Electric (SDG&E), had instituted several management strategies to mitigate wildfire risk (not entirely successfully), including turning off "line reclosers" during high winds to avoid short circuits on broken lines that can spark fires. PG&E resisted such steps.

On that October night, when 12 fires erupted, PG&E's equipment contributed to starting 11 of them, and indirectly to at least one other. Over 100,000 acres burned, almost 9,000 buildings were destroyed, and 44 people died. It was the most destructive fire event in California history to that point, costing over $14 billion.

But PG&E's problems were not over. The next year, in November 2018, an even bigger fire in Butte County, the Camp fire, was caused by the failure of a PG&E transmission line. That one burned over 150,000 acres, killed 85 people, destroyed the community of Paradise and cost more than $16 billion. PG&E now faced legal liabilities of over $30 billion, which exceeded PG&E's invested capital in its system. PG&E was potentially upside down financially.

The State of California had passed Assembly Bill 1054, which provided a $21 billion fund to cover excess wildfire costs for the utilities (including SCE and SDG&E), but it covered only fires after 2018. The Wine Country and Camp fires were not included, so PG&E faced the question of how to pay for these looming costs. PG&E had an additional problem: federal Judge William Alsup, supervising its probation, stepped in claiming that these fires violated its probation conditions. The CPUC also launched investigations into PG&E's safety management and a potential restructuring of the firm. PG&E faced legal and regulatory consequences on multiple fronts.

PG&E Corp, the holding company, filed for Chapter 11 bankruptcy on January 29, 2019. PG&E had learned from the 2001 bankruptcy of its utility subsidiary that moving its legal and regulatory issues into federal bankruptcy court gave the company much more control over its fate than litigating in multiple forums. Bankruptcy law afforded the company the ability to force regulators to increase rates to cover the costs authorized through the bankruptcy. And PG&E suffered no real consequences from the 2001 bankruptcy, as share prices returned to, and even exceeded, pre-filing levels.

As the case progressed, several proposals, some included in legislative bills, were made to take control of PG&E from its shareholders: through a cooperative, a state-owned utility, or splitting the system among municipalities. Governor Gavin Newsom even called on Warren Buffett to buy out PG&E. Several localities, including San Francisco, made separate offers to buy their jurisdictions' portions of the grid. The Governor and the CPUC made certain demands of PG&E to restructure its management and board of directors, to which PG&E responded in part. PG&E changed its chief executive officer, and its current CEO, Bill Johnson, will resign on June 30. The Governor holds some leverage because he must certify by June 30, 2020 that PG&E has complied with the requirements of Assembly Bill 1054, which authorizes the wildfire cost relief fund for the utilities.

Meanwhile, PG&E implemented a quick fix to its wildfire risk with "public safety power shutoffs" (PSPS), with the first major test in October 2019, which did not fare well. PG&E was accused of shutting off too many customers (over 800,000) for too long, and of failing to coordinate adequately with local governments. A subsequent PSPS event went more smoothly, but still had significant problems. PG&E says that such PSPS events will continue for the next decade until it has sufficiently "hardened" its system to mitigate the fire risk. Such mitigation includes putting power lines underground, changing system configuration and installing "microgrids" that can be isolated and remain self-sufficient for short durations. That program likely will cost tens of billions of dollars, potentially increasing rates as much as 50 percent. One question will be who should pay: all ratepayers, or those being protected in rural areas?

PG&E negotiated several pieces of a settlement, coming to agreements with hedge-fund investors, debt holders, the insurance companies that paid for wildfire losses by residents and businesses, and fire victims. The victims are to be paid with a mix of cash and stock with a face value of $13.5 billion; the victims are voting on whether to accept this agreement as this article is being written. Local governments will receive $1 billion, and insurance companies $11 billion, for a total of $24.5 billion in payouts. PG&E has lined up $20 billion in outside financing to cover these costs. The total financing package is expected to reach $58 billion.

The CPUC voted May 28 to approve PG&E’s bankruptcy plan, along with a proposed fine of $2 billion. PG&E would not be able to recover the costs for the 2017 and 2018 fires from ratepayers under the proposed order. The Governor has signaled that he is likely to also approve PG&E’s plan before the June 30 deadline.

PG&E is still asking for significant rate increases both to underwrite the AB 1054 wildfire protection fund and to implement various wildfire mitigation efforts. PG&E has asked for a $900 million interim rate increase for wildfire management efforts, and a settlement agreement in its 2020 general rate case calls for another $575 million annual ongoing increase (with larger amounts to be added over the next three years). These amount to a more than 10 percent rate increase for the coming year, on top of other rate increases for other investments.

And PG&E still faces various legal difficulties. The utility pleaded guilty to 84 counts of involuntary manslaughter in the Camp fire, making the company a two-time felon. The federal judge overseeing the San Bruno case has repeatedly found PG&E's vegetation management program wanting over the last two years and is considering remedial actions.

Going forward, PG&E's rates are likely to rise dramatically over the next five years to finance fixes to its system. Until that effort takes effect, PSPS events will be widespread, perhaps for a decade. On top of that, electricity demand has dropped precipitously due to the coronavirus shelter-in-place orders, which is likely to translate into higher rates as costs are spread over a smaller amount of usage.
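That last point is simple arithmetic: a utility's revenue requirement is largely fixed in the short run, so the average rate moves inversely with sales. A minimal sketch with hypothetical numbers:

```python
# Why falling sales raise rates: a fixed revenue requirement spread over
# fewer kilowatt-hours. All numbers are hypothetical.

def average_rate(revenue_requirement: float, sales_kwh: float) -> float:
    """Average rate in $/kWh."""
    return revenue_requirement / sales_kwh

rr = 14e9      # annual revenue requirement, dollars (assumed)
sales = 80e9   # annual sales, kWh (assumed)

before = average_rate(rr, sales)
after = average_rate(rr, sales * 0.90)  # a 10% drop in usage

# A 10% drop in sales raises the average rate by about 11%.
print(round((after / before - 1) * 100, 1))  # 11.1
```

Note the asymmetry: the percentage rate increase (1/0.9 - 1, about 11.1%) is larger than the percentage sales decline that caused it.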

Profound proposals in SCE’s rate case

A catastrophic crisis calls for radical, out-of-the-box solutions. This includes asking utility shareholders to share in the same pain as their customers.

M.Cubed is testifying on Southern California Edison's 2021 General Rate Case (GRC) on behalf of the Small Business Utility Advocates (SBUA). Small businesses represent nearly half of California's economy, and a recent survey shows that more than 40% of such firms are closed or will close in the near future. While these businesses struggle, the utilities are currently assured a steady income, and SCE is asking for a 20% revenue requirement increase on top of already high rates.

In this context, SBUA filed M.Cubed’s testimony on May 5 recommending that the California Public Utilities Commission take the following actions in response to SCE’s application related to commercial customers:

  • Order SCE to withdraw its present application and refile it with updated forecasts and assumptions (the current ones were filed last August) that better fit the changed circumstances caused by the ongoing Covid-19 crisis.
  • Request that California issue a rate reduction bond that can be used to reduce SCE's rates by 10%. The state did this in 1996 in anticipation of restructuring, and again in 2001 after the energy crisis.
  • Freeze all but essential utility investment. Much of SCE's proposed increase is for "load growth" that has not materialized in the past and is even less likely to materialize now.
  • Require shareholders, rather than ratepayers, to bear the risks of underutilized or cost-ineffective investments.
  • Reduce Edison's authorized rate of return by an amount proportionate to its lower sales until load levels and characteristics return to 2019 levels, or until demand at the local circuit or substation level demonstrably justifies the requested investment as "used and useful."
  • Enact Covid-19 Commercial Class Economic Development (ED) and Supply Chain Repatriation rates. These rates should be funded at least in part by SCE shareholders.
  • Order Edison to prioritize deployment of beneficial, flexible, distributed energy resources (DER) in lieu of fixed distribution investments within its grid modernization program. SCE should not be throwing up barriers to this transformation.
  • Order Edison to reconcile its load forecasts for its local “adjustments” with its overall system forecast to avoid systemic over-forecasting, which leads to investment in excess distribution capacity.
  • Order SCE to revise and refile its distribution investment plan to align its load growth planning with the CPUC-adopted load forecasts for resource planning and to shift more funds to the grid modernization functions that focus on facilitating DER deployment specified in SCE’s application.
  • Order an audit of SCE's spending in other categories to determine whether the activities are justified and appropriate cost controls are in place. A comparison of authorized and actual 2019 capital expenditures found divergences as large as 65% from forecasted spending. The pattern suggests that SCE simply spends up to its total authorized amount and then justifies its spending after the fact.

M.Cubed goes into greater depth on the rationale for each of these recommendations. The CPUC does not offer many forums for these types of proposals, so SBUA has taken the opportunity offered by SCE’s overall revenue requirement request to plunge in.

(image: Steve Cicala, U. of Chicago)