Category Archives: Energy innovation

Emerging technologies and institutional change to meet new challenges while satisfying consumer tastes

What is driving California’s high electricity prices?

This report by Next10 and the University of California Energy Institute was prepared for the CPUC’s en banc hearing February 24. The report compares average electricity rates against other states, and against an estimate of “marginal costs”. (The latter estimate is too low but appears to rely mostly on the E3 Avoided Cost Calculator.) It shows those rates to be multiples of the marginal costs. (PG&E’s General Rate Case workpapers calculate that its rates are about double the marginal costs estimated in that proceeding.) The study attempts to list the reasons why the authors think these rates are too high, but it misses the real drivers of these rate increases. It also uses an incorrect method for calculating the market value of acquisitions and deferred investments, using the current market value instead of the value at the time that the decisions were made.

We can explore the reasons why PG&E’s rates are so high, much of which applies to the other two utilities as well. Starting with generation costs, PG&E’s portfolio mismanagement is not explained away with a simple assertion that the utility bought when prices were higher. In fact, PG&E failed in several ways.

First, PG&E knew about the risk of customer exit as early as 2010, as revealed during the PCIA rulemaking hearings in 2018. Yet PG&E continued to procure as though it would be serving its entire service area instead of planning for the rise of CCAs. Further, PG&E was told as early as 2010 (in my GRC testimony) that it was consistently forecasting too high, but it did not correct the error. As it turned out, service area load is basically at the same level that it was a decade ago.

Second, PG&E could have procured in stages rather than in two large rounds of requests for offers (RFOs), which it finished by 2013. By 2011 PG&E should have realized that solar costs were dropping quickly (had it read the CEC Cost of Generation Report that I managed) and rolled out the RFOs in a manner that took advantage of that improvement. Further, it could have signed PPAs for the minimum period under state law of 10 years rather than the industry-standard 30 years. PG&E was managing its portfolio in the standard-practice manner, which was foolish in the face of what was occurring.

Third, PG&E failed to offer part of its portfolio for sale to departing CCAs until 2018. Instead, PG&E could have unloaded its expensive portfolio in stages starting in 2010. The ease of the recent RPS sales illustrates that PG&E’s claims about creditworthiness and other problems had no foundation.

I calculated here what the cost of PG&E’s mismanagement has been. While SCE and SDG&E have not faced the same degree of exit to CCAs, the same basic problems exist in their portfolios.

Another factor for PG&E is that ratepayers have paid twice for Diablo Canyon. I explain here how PG&E fully recovered its initial investment costs by 1998 but, as part of restructuring, got to roll most of its costs back into rates. Fortunately, these units retire by 2025, and rates will go down substantially as a result.

In distribution costs, both PG&E and SCE requested over $2 billion for “new growth” in each of their GRCs since 2009, despite my testimony showing that the growth would not materialize, and it did not. If the growth was arising from the addition of new developments, the developers and new customers should have been paying for those additions through the line extension rules that assign that cost responsibility. The utilities’ distribution planning process is opaque. When asked for the workpapers underlying the planning process, both PG&E and SCE responded that the entirety was contained in the Word tables in each of their testimonies. The growth projections had not been reconciled with the system load forecasts until this latest GRC, so the totals of the individual planning units exceeded the projected total system growth (which was too high as well when compared to both other internal growth projections and realized growth). The result is a gross overinvestment in distribution infrastructure with substantial overcapacity in many places.

For transmission, the true incremental cost has not been fully reported which means that other cost-effective solutions, including smaller and closer renewables, have been ignored. Transmission rates have more than doubled over the last decade as a result.

The Next10 report does not appear to reflect the full value of public purpose program spending on energy efficiency, in large part because it uses a short-run estimate of marginal costs. The report similarly underestimates the value of behind-the-meter solar rooftops as well. The correct method for both is to use the market value of deferred resources–generation, transmission and distribution–when those resources were added. So for example, a solar rooftop installed in 2013 was displacing utility scale renewables that cost more than $100 per megawatt-hour. These should not be compared to the current market value of less than $60 per megawatt-hour because that investment was not made on a speculative basis–it was a contract based on embedded utility costs.
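The valuation method described above amounts to simple arithmetic, sketched below. The $100 and $60 per megawatt-hour figures come from the text; the system output is a hypothetical example:

```python
# Sketch: value a 2013 rooftop solar installation at the market price of the
# resources it displaced WHEN the decision was made, not at today's price.
# The $100/MWh (2013 utility-scale renewable cost) and $60/MWh (current
# market value) figures come from the text; annual output is hypothetical.

annual_output_mwh = 10.0     # hypothetical small rooftop system
price_at_decision = 100.0    # $/MWh, utility-scale renewable cost in 2013
price_today = 60.0           # $/MWh, current market value

value_correct = annual_output_mwh * price_at_decision    # $1,000/yr
value_incorrect = annual_output_mwh * price_today        # $600/yr

print(f"Value at time of decision: ${value_correct:,.0f}/yr")
print(f"Understated current-price value: ${value_incorrect:,.0f}/yr")
```

Using the current price understates the value of the 2013 decision by 40% in this example, which is the bias the report's method introduces.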

Drawing too many conclusions about electric vehicles from an obsolete data set

The Energy Institute at Haas at the University of California published a study purporting to show that electric vehicles are driven only about one-third as much as the average conventional car in California. I posted a response on the blog.

Catherine Wolfram writes, “But, we do not see any detectable changes in our results from 2014 to 2017, and some of the same factors were at play over this time period. This makes us think that newer data might not be dramatically different, but we don’t know.”

This study likely delivers a biased estimate of future EV use. Its timing reminds me of trying to analyze cell phone use in the mid-2000s: household land lines are now largely obsolete, and we use phones even more than we did then. The study period covered a time of dramatic change, more akin to solar panel evolution just before and after 2010, before panels were ubiquitous. We can see this evolution here, for example. For the Nissan Leaf, range increased 50% between the 2018 and 2021 models.

The primary reason this data set shows such low mileage is that it is almost certain that the vast majority of the households in the survey also had a standard ICE vehicle that they used for extended trips. There were few or no remote fast-charge stations during that time, and even Teslas had limited range in comparison. In addition, it is almost certain that EV owners were concentrated in urban households that have comparatively low VMT. (Otherwise, why do studies show that these same neighborhoods have low GHG emissions on average?) Only about one-third of VMT is associated with commuting, another third with errands and tasks, and a third with travel. There were few if any SUV EVs that would be more likely to be used for errands, and EVs were smaller vehicles until recently.

As for co-purchased solar panels, earlier studies found that 40% or more of EV owners have solar panels, and solar rooftop penetration has grown faster than EV adoption since those studies were done.

I’m also not sure that the paper has fully captured workplace and parking-structure charging. The logistical challenges of gaining LCFS credits could be substantial enough that employers and municipalities do not bother. This assumption requires a closer analysis of which entities are actually claiming these credits.

A necessary refinement is to compare this data to the typical VMT for these types of households, and to compare mileage across model types. Smaller commuter models average less annual VMT, according to the California Energy Commission’s vehicle VMT data set derived from the DMV registration file and the Air Resources Board’s EMFAC model. The Energy Institute analysis arrives at the same findings as EV studies in the mid-1990s, conducted with far less robust technology. That should be a flag that something is amiss in the results.

How to increase renewables? Change the PCIA

California is pushing for an increase in renewable generation to power its electrification of buildings and the transportation sector. Yet the state maintains a policy that will impede reaching that goal–the power cost indifference adjustment (PCIA) rate discourages the rapidly growing community choice aggregators (CCAs) from investing directly in new renewable generation.

As I wrote recently, California’s PCIA rate charged as an exit fee on departed customers is distorting the electricity markets in a way that increases the risk of another energy crisis similar to the debacle in 2000 to 2001. An analysis of the California Independent System Operator markets shows that market manipulations similar to those that created that crisis likely led to the rolling blackouts last August. Unfortunately, the state’s energy agencies have chosen to look elsewhere for causes.

The even bigger problem for reaching clean energy goals is created by the current structure of the PCIA. The PCIA varies inversely with market prices: as market prices rise, the PCIA charged to CCAs and direct access (DA) customers decreases. For these customers, the overall retail rate is largely hedged against variation and risk through this inverse relationship.

The portfolios of the incumbent utilities, i.e., Pacific Gas and Electric, Southern California Edison and San Diego Gas and Electric, are dominated by long-term contracts with renewables and capital-intensive utility-owned generation. For example, PG&E is paying a risk premium of nearly 2 cents per kilowatt-hour for its investment in these resources. These portfolios are now largely impervious to market price swings, but at a significant cost. The PCIA passes this hedge along to CCAs and DA customers, which discourages those customers from making their own long-term investments. (I wrote earlier about how this mechanism discouraged investment in new capacity for reliability purposes to provide resource adequacy.)

The legacy utilities are not in a position to acquire new renewables–they are forecasting falling loads and decreasing customers as CCAs grow. So the state cannot look to those utilities to meet California’s ambitious goals–it must incentivize CCAs with that task. The CCAs are already game, with many of them offering much more aggressive “green power” options to their customers than PG&E, SCE or SDG&E.

But CCAs place themselves at greater financial risk under the current rules if they sign more long-term contracts. If market prices fall, they must bear the risk of overpaying for both the legacy utility’s portfolio and their own.

The best solution is to offer CCAs the opportunity to make a fixed or lump sum exit fee payment based on the market value of the legacy utility’s portfolio at the moment of departure. This would untie the PCIA from variations in the future market prices and CCAs would then be constructing a portfolio that hedges their own risks rather than relying on the implicit hedge embedded in the legacy utility’s portfolio. The legacy utilities also would have to manage their bundled customers’ portfolio without relying on the cross subsidy from departed customers to mitigate that risk.

The PCIA is heading California toward another energy crisis

The California ISO Department of Market Monitoring notes in its comments to the CPUC on proposals to address the resource adequacy shortages during last August’s rolling blackouts that the number of fixed-price contracts is decreasing. In DMM’s opinion, this leaves California’s market exposed to the potential for greater market manipulation. The diminishing tolling agreements and longer-term contracts that DMM observes are the result of the structure of the power cost indifference adjustment (PCIA) or “exit fee” for departed community choice aggregation (CCA) and direct access (DA) customers. The IOUs are left shedding contracts as their loads fall.

The PCIA is pegged to short-run market prices (even more so with the true-up feature added in 2019). The PCIA mechanism works as a price hedge against the short-term market values of assets for CCAs and suppresses the incentives for long-term contracts. This discourages CCAs from signing long-term agreements with renewables.

The PCIA acts as an almost perfect hedge on the retail price for departed-load customers because an increase in CAISO and capacity market prices leads to a commensurate decrease in the PCIA, so the overall retail rate remains the same regardless of where the market moves. The IOUs are all so long on their resources that market price variation has a relatively small impact on their overall rates.
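The hedge arithmetic can be shown with a stylized example (all numbers hypothetical): because the PCIA recovers the gap between the legacy portfolio’s embedded cost and the market price, a departed customer’s total rate stays flat no matter where the market moves.

```python
# Stylized PCIA arithmetic (hypothetical numbers). The PCIA recovers the
# above-market cost of the legacy portfolio, so it falls dollar-for-dollar
# as the market price rises, leaving the departed customer's total rate flat.

portfolio_cost = 90.0  # $/MWh embedded cost of the IOU legacy portfolio

def departed_customer_rate(market_price):
    pcia = portfolio_cost - market_price   # above-market "exit fee" component
    return market_price + pcia             # energy bought at market, plus PCIA

for market_price in (30.0, 50.0, 70.0):
    print(market_price, departed_customer_rate(market_price))
# The total rate equals the portfolio cost at every market price: the departed
# customer is fully hedged, and so has no incentive to sign its own long-term
# contracts.
```

This is the implicit hedge the post describes; a fixed, upfront exit fee would break the inverse link between the PCIA and the market price.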

This situation is almost identical to the relationship under the competition transition charge (CTC) implemented during restructuring starting in 1998. Then, energy service providers (ESPs) had little incentive to hedge their portfolios because the CTC was tied directly to the CAISO/PX prices, so the CTC moved inversely with market prices. Only when the CAISO prices exceeded the average cost of the IOUs’ portfolios did the high prices become a problem for ESPs and their customers.

As in 1998, the solution is to have a fixed, upfront exit fee paid by departing customers that is not tied to variations in future market prices. (Commissioner Jesse Knight’s proposal along this line was rejected by the other commissioners.) Load-serving entities (LSEs) would then be left to hedge their own portfolios on their own terms, which will lead to LSEs signing more long-term agreements of various kinds.

The alternative of forcing CCAs and ESPs to sign fixed-price contracts under the current PCIA structure forces them to bear the risk burden of both departed and bundled customers, while the IOUs are able to pass through the risks of their long-term agreements through the PCIA.

California would be well served by the DMM pointing out this inherent structural problem. We should learn from our previous errors.

Advanced power system modeling need not mean more complex modeling

A recent article by E3 and Form Energy in Utility Dive calls for more granular temporal modeling of the electric power system to better capture the constraints of a fully-renewable portfolio and the requirements for supporting technologies such as storage. The authors have identified the correct problem–most current models use a “typical week” of loads that are an average of historic conditions and probabilistic representations of unit availability. This approach fails to capture the “tail” conditions where renewables and currently available storage are unlikely to be sufficient.

But the answer is not a full-blown hour-by-hour model of the entire year with many permutations of the many possibilities. These system production simulation models already take too long to run a single scenario due to the complexity of this giant “transmission machine.” Adding the required uncertainty will cause these models to run “in real time,” as some modelers describe it.

Instead, a separate analysis should first identify the conditions under which renewables plus current-technology storage are unlikely to meet demand. These include droughts that limit hydropower, extreme weather, and extended weather that limits renewable production. These conditions can then be input into the current models to assess how the system responds.

The two important fixes, which have always been problems in these models, involve energy-limited resources and unit commitment algorithms. Both are complex problems, and these models have not done well in scheduling seasonal hydropower pondage storage or in deciding which units to commit to meet a high demand several days ahead. (These problems are also why relying solely on hourly bulk power pricing doesn’t give an accurate measure of the true market value of a resource.) But focusing on these two problems is much easier than trying to incorporate the full range of uncertainty for all 8,760 hours for at least a decade into the future.
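As a minimal illustration of the energy-limited resource problem, the sketch below dispatches a fixed hydro energy budget to the highest-load hours with a simple greedy rule. The loads are hypothetical, and real production-cost models must solve this across whole seasons with uncertain inflows, which is exactly where they struggle:

```python
# Sketch: dispatch an energy-limited hydro resource (fixed MWh budget) to the
# highest-load hours. A greedy allocation over a known day is trivial; the
# hard problem in production-cost models is doing this across seasons with
# uncertain inflows and loads.

loads = [50, 80, 120, 95, 60, 110, 70, 130]  # hypothetical hourly loads (MW)
hydro_capacity = 40                          # MW
hydro_energy = 3 * hydro_capacity            # MWh budget: 3 full hours

# Greedy: spend the limited energy in the hours with the highest load.
hours_by_load = sorted(range(len(loads)), key=lambda h: loads[h], reverse=True)
dispatch = [0] * len(loads)
remaining = hydro_energy
for h in hours_by_load:
    if remaining <= 0:
        break
    mw = min(hydro_capacity, remaining)
    dispatch[h] = mw
    remaining -= mw

print(dispatch)  # hydro runs only in the three highest-load hours
```

With perfect foresight the answer is obvious; under uncertainty the model must hold energy back against conditions it may never see, which is the scheduling problem these models handle poorly.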

We should not confuse precision with accuracy. The current models can be quite precise on specific metrics such as unit efficiency at different load points, but they can be inaccurate because they don’t capture the effect of load and fuel price variations. We should not be trying to achieve spurious precision through more granular modeling–we should be focusing on accuracy in the narrow situations that matter.

Vegetation maintenance the new “CFL” for wildfire management

PG&E has been aggressively cutting down trees as part of its attempt to mitigate wildfire risk, but those efforts may be creating their own risks. PG&E has been accused of focusing on numeric targets rather than effective vegetation management. This situation is reminiscent of how the utilities pursued energy efficiency prior to 2013, with a seemingly single-minded focus on compact fluorescent lights (CFLs). That focus did not end well, leading to both environmental degradation and unearned incentives for utilities.

CFLs represented about 20% of the residential energy efficiency program spending in 2009. CFLs were easy for the utilities–they just delivered steeply discounted, or even free, CFLs to stores and they got to count each bulb as an “energy savings.” By 2013, the CPUC ordered the utilities to ramp down spending on CFLs as a new cost-effective technology (LEDs) emerged and the problem of disposing of the mercury contained in CFLs became apparent. But more importantly, it turned out that CFLs were just sitting in closets, creating far fewer savings than estimated. (It didn’t help that CFLs turned out to have a much shorter life than initially estimated as well.) Even so, the utilities were able to claim incentives from the California Public Utilities Commission. Ultimately, it became apparent that CFLs were largely a mistake in the state’s energy efficiency portfolio.

Vegetation management seems to be the same “easy number counting” solution that the utilities, particularly PG&E, have adopted. The adverse consequences will be significant, and it won’t solve the problem in the long run. Its one advantage is that it allows the utilities to maintain their status quo position at the center of the utility network.

Other alternatives include system hardening such as undergrounding or building microgrids in rural communities to allow utilities to deenergize the grid while maintaining local power. The latter option appears to be the most cost effective solution, but it is also the most threatening to the current position of the incumbent utility by giving customers more independence.

CAISO doesn’t quite grasp what led to rolling blackouts

Steve Berberich, CEO of the California Independent System Operator, gave GTM his assessment of the reasons for the rolling blackouts in the face of a record-setting heat wave. He overlooked a key reason for the delay in capacity procurement (called “resource adequacy” or RA), and he demonstrated a lack of understanding of how renewables and batteries will integrate to provide peak capacity.

Berberich is unwilling to acknowledge that at least part of the RA procurement problem was created by CAISO’s unwillingness to step in as a residual buyer in the RA market, which then created resistance by the CCAs to putting the IOUs in that role. RA procurement was delayed at least a year due to CAISO’s reluctance. CAISO appears to be politically tone-deaf to the issues being raised by CCAs on system procurement.

He says that solar will have to be overbuilt to supply energy to batteries for peak load. But that is already the case, with the NQC ELCC just a fraction of the installed solar and wind capacity. Renewable capacity above the ELCC is available to charge the batteries for later use. The only questions then are how much energy is required from the batteries to support the peak load and whether that can come from the existing renewables fleet. The resource adequacy paradigm has changed (becoming more akin to the old PNW hydro system) in that energy, not built capacity, is becoming the constraint.

Levelized costs are calculated correctly

Utility Dive recently published an opinion article claiming that the conventional method of calculating the levelized cost of energy (LCOE) is incorrect. The article was derived from a 2019 article in the Electricity Journal by the same author, James Loewen. It claimed that the conventional method gave biased results against more capital-intensive generation resources, such as renewables, compared to fossil-fueled ones. I wrote a comment to the Electricity Journal showing the errors in Loewen’s reasoning and further reinforcing the rationale for the conventional LCOE calculation. (You have until August 9 to download my article for free.)

I was the managing consultant that assisted the California Energy Commission (CEC) in preparing one of the studies (CEC 2015) referenced in Loewen. I also led the preparation of three earlier studies that updated cost estimates. (CEC 2003, CEC 2007, CEC 2010) In developing these models, the consultants and staff discussed extensively this issue and came to the conclusion that the LCOE must be calculated by discounting both future cashflows and future energy production. Only in this way can a true comparison of discounted energy values be made.

The error in Loewen’s article arises from a misconception that money is somehow different and unique from all other goods and services. Money serves three roles in the economy: as a medium of exchange, as a unit of account, and as a store of value. At its core, money is a commodity used predominantly as an intermediary in the barter economy and as a store of value until needed later. (We can see this particularly when currency was generally backed by a specific commodity–gold.) Discounting derives from the opportunity cost of holding, and not using, that value until a future date. So discounting applies to all resources and services, not just to money.

Blanchard and Fischer (1989), at pp. 70-71, describe how “utility” (which is NOT measured in money) is discounted in economic analysis. Utility is gained by consumption of goods and services. Blanchard and Fischer have an extensive discussion of the marginal rate of substitution between two periods. Again, note there is no discussion of money in this economic analysis–only the consumption of goods and services in two different time periods. That means that goods and services are being discounted directly. The LCOE must be calculated in the same manner to be consistent with economic theory.

We should be able to recover the net present value of project cost by multiplying the LCOE by the generation over the economic life of the project. We only get the correct answer if we use the conventional LCOE.  I walk through the calculation demonstrating this result in the article.
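That recovery property can be checked numerically. The sketch below (hypothetical costs and generation figures) computes the conventional LCOE by discounting both cashflows and energy, then confirms that selling every megawatt-hour at that LCOE recovers exactly the net present value of project costs:

```python
# Conventional LCOE: discount BOTH future costs and future energy.
#   LCOE = NPV(costs) / NPV(energy)
# All numbers here are hypothetical.

r = 0.07                                         # discount rate
costs = [1_000_000.0, 50_000.0, 50_000.0, 50_000.0]  # $: capital, then O&M
energy = [0.0, 10_000.0, 10_000.0, 10_000.0]         # MWh per year

def npv(flows):
    return sum(x / (1 + r) ** t for t, x in enumerate(flows))

lcoe = npv(costs) / npv(energy)                  # about $43.10/MWh here

# Recovery check: pricing every MWh at the LCOE recovers exactly the NPV of
# project costs -- a property that only the conventional calculation has.
revenue_npv = npv([lcoe * e for e in energy])
assert abs(revenue_npv - npv(costs)) < 1e-6

print(f"Conventional LCOE = ${lcoe:.2f}/MWh")
```

Discounting only the cashflows (and dividing by undiscounted energy) breaks this recovery identity, which is the flaw in the alternative method.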

PG&E’s bankruptcy—what’s happened and what’s next?

The wildfires that erupted in Sonoma County the night of October 8, 2017 signaled a manifest change not just limited to how we must manage risks, but even to the finances of our basic utility services. Forest fires had been distant events that, while expanding in size over the last several decades, had not impacted where people lived and worked. Southern California had experienced several large-scale fires, and the Oakland fire in 1991 had raced through a large city, but no one was truly ready for what happened that night, including Pacific Gas and Electric, which is why the company eventually declared bankruptcy.

PG&E had already been punished for its poor management of its natural gas pipeline system after an explosion killed eight in San Bruno in 2010. The company was convicted in federal court, fined $3 million, and placed on supervised probation under a federal judge.

PG&E also has an extensive transmission and distribution network with more than 100,000 miles of wires. Over a quarter of that network runs through areas with significant wildfire risk. PG&E already had been charged with starting several forest fires, including the Butte fire in 2015, and its vegetation management program had been called out as inadequate by the California Public Utilities Commission (CPUC) since the 1990s. The CPUC caught PG&E diverting $495 million from maintenance spending to shareholders from 1992 to 1997; PG&E was fined $29 million. Meanwhile, two other utilities, Southern California Edison (SCE) and San Diego Gas and Electric (SDG&E), had instituted several management strategies to mitigate wildfire risk (not entirely successfully), including turning off “line reclosers” during high winds to avoid short circuits on broken lines that can spark fires. PG&E resisted such steps.

On that October night, when 12 fires erupted, PG&E’s equipment contributed to starting 11 of them, and indirectly at least to others. Over 100,000 acres burned, destroying almost 9,000 buildings and killing 44 people. It was the most destructive fire event in California history to that point, costing over $14 billion.

But PG&E’s problems were not over. The next year, in November 2018, an even bigger fire in Butte County, the Camp fire, was caused by the failure of a PG&E transmission line. That one burned over 150,000 acres, killed 85 people, destroyed the community of Paradise, and cost more than $16 billion. PG&E now faced legal liabilities of over $30 billion, which exceeded PG&E’s invested capital in its system. PG&E was potentially upside down financially.

The State of California had passed Assembly Bill 1054 that provided a fund of $21 billion to cover excess wildfire costs to utilities (including SCE and SDG&E), but it only covered fires after 2018. The Wine Country and Camp fires were not included, so PG&E faced the question of how to pay for these looming costs. Plus PG&E had an additional problem—federal Judge William Alsup, supervising its probation, stepped in claiming that these fires violated its probation conditions. The CPUC also launched investigations into PG&E’s safety management and potential restructuring of the firm. PG&E faced legal and regulatory consequences on multiple fronts.

PG&E Corp, the holding company, filed for Chapter 11 bankruptcy on January 29, 2019. PG&E had learned from the 2001 bankruptcy of its utility subsidiary that moving its legal and regulatory issues into the federal bankruptcy court gave the company much more control over its fate than being in multiple forums. Bankruptcy law afforded the company the ability to force regulators to increase rates to cover the costs authorized through the bankruptcy. And PG&E suffered no real consequences from the 2001 bankruptcy, as share prices returned to, and even exceeded, pre-filing levels.

As the case progressed, several proposals, some included in legislative bills, were made to take control of PG&E from its shareholders, through a cooperative, a state-owned utility, or splitting it among municipalities. Governor Gavin Newsom even called on Warren Buffet to buy out PG&E. Several localities, including San Francisco, made separate offers to buy their jurisdictions’ grid. The Governor and CPUC made certain demands of PG&E to restructure its management and board of directors, to which PG&E responded in part. PG&E changed its chief executive officer, and its current CEO, Bill Johnson, will resign on June 30. The Governor holds some leverage because he must certify that PG&E has complied by June 30, 2020 with the requirements of Assembly Bill 1054 that authorizes the wildfire cost relief fund for the utilities.

Meanwhile, PG&E implemented a quick fix to its wildfire risk with “public safety power shutoffs” (PSPS), with its first test in October 2019, which did not fare well. PG&E was accused of being excessive in the number of customers affected (over 800,000) and the duration, and of failing to coordinate adequately with local governments. A subsequent PSPS event went more smoothly, but still had significant problems. PG&E says that such PSPS events will continue for the next decade until it has sufficiently “hardened” its system to mitigate the fire risk. Such mitigation includes putting power lines underground, changing system configuration and installing “microgrids” that can be isolated and self-sufficient for short durations. That program likely will cost tens of billions of dollars, potentially increasing rates as much as 50 percent. One question will be who should pay—all ratepayers or those who are being protected in rural areas?

PG&E negotiated several pieces of a settlement, coming to agreements with hedge-fund investors, debt holders, insurance companies that pay for wildfire losses by residents and businesses, and fire victims. The victims are to be paid with a mix of cash and stock, with a face value of $13.5 billion; the victims are voting on whether to accept this agreement as this article is being written. Local governments will receive $1 billion, and insurance companies $11 billion, for a total of $24.5 billion in payouts. PG&E has lined up $20 billion in outside financing to cover these costs. The total financing package is expected to reach $58 billion.

The CPUC voted May 28 to approve PG&E’s bankruptcy plan, along with a proposed fine of $2 billion. PG&E would not be able to recover the costs for the 2017 and 2018 fires from ratepayers under the proposed order. The Governor has signaled that he is likely to also approve PG&E’s plan before the June 30 deadline.

PG&E is still asking for significant rate increases to both underwrite the AB 1054 wildfire protection fund and to implement various wildfire mitigation efforts. PG&E has asked for a $900 million interim rate increase for wildfire management efforts and a settlement agreement in its 2020 general rate case calls for another $575 million annual ongoing increase (with larger amounts to be added in the next three years). These amount to a more than 10 percent increase in rates for the coming year, on top of other rate increases for other investments.

And PG&E still faces various legal difficulties. The utility pleaded guilty to 84 counts of involuntary manslaughter for the Camp fire, making the company a two-time felon. The federal judge overseeing the San Bruno case has repeatedly found PG&E’s vegetation management program wanting over the last two years and is considering remedial actions.

Going forward, PG&E’s rates are likely to rise dramatically over the next five years to finance fixes to its system. Until that effort is effective, PSPS events will be widespread, maybe for a decade. On top of that, electricity demand has dropped precipitously due to the coronavirus pandemic shelter-in-place orders, which is likely to translate into higher rates as costs are spread over a smaller amount of usage.

Profound proposals in SCE’s rate case

A catastrophic crisis calls for radical, out-of-the-box solutions. This includes asking utility shareholders to share the same pain as their customers.

M.Cubed is testifying on Southern California Edison’s 2021 General Rate Case (GRC) on behalf of the Small Business Utility Advocates. Small businesses represent nearly half of California’s economy. A recent survey shows that more than 40% of such firms are closed or will close in the near future. While these businesses struggle, the utilities are currently assured a steady income, and SCE is asking for a 20% revenue requirement increase on top of already high rates.

In this context, SBUA filed M.Cubed’s testimony on May 5 recommending that the California Public Utilities Commission take the following actions in response to SCE’s application related to commercial customers:

  • Order SCE to withdraw its present application and refile it with updated forecasts and assumptions that better fit the changed circumstances caused by the ongoing Covid-19 crisis (the original forecasts were filed last August).
  • Request that California issue a Rate Revenue Reduction bond that can be used to reduce SCE’s rates by 10%. The state did this in 1996 in anticipation of restructuring, and again in 2001 after the energy crisis.
  • Freeze all but essential utility investment. Much of SCE’s proposed increase is for “load growth” that has not materialized in the past and is even less likely to now.
  • Require shareholders, rather than ratepayers, to bear the risks of underutilized or cost-ineffective investments.
  • Reduce Edison’s authorized rate-of-return by an amount proportionate to its lower sales until load levels and characteristics return to 2019 levels or demonstrably reach local demand levels at the circuit or substation that justify requested investment as “used and useful.”
  • Enact Covid-19 Commercial Class Economic Development (ED) and Supply Chain Repatriation rates. These rates should be funded at least in part by SCE shareholders.
  • Order Edison to prioritize deployment of beneficial, flexible, distributed energy resources (DER) in-lieu of fixed distribution investments within its grid modernization program. SCE should not be throwing up barriers to this transformation.
  • Order Edison to reconcile its load forecasts for its local “adjustments” with its overall system forecast to avoid systemic over-forecasting, which leads to investment in excess distribution capacity.
  • Order SCE to revise and refile its distribution investment plan to align its load growth planning with the CPUC-adopted load forecasts for resource planning and to shift more funds to the grid modernization functions that focus on facilitating DER deployment specified in SCE’s application.
  • Order an audit of SCE’s spending in other categories to determine if the activities are justified and appropriate cost controls are in place. A comparison of authorized and actual 2019 capital expenditures found divergences as large as 65% from forecasted spending. The pattern shows that SCE appears to just spend up to its total authorized amount and then justify its spending after the fact.

M.Cubed goes into greater depth on the rationale for each of these recommendations. The CPUC does not offer many forums for these types of proposals, so SBUA has taken the opportunity offered by SCE’s overall revenue requirement request to plunge in.

(image: Steve Cicala, U. of Chicago)