
Outages highlight the need for a fundamental revision of grid planning

The salience of outages due to distribution problems, such as those that occurred during the record heat in the Pacific Northwest and California’s public safety power shutoffs (PSPS), highlights the need for a change in perspective on addressing reliability. In California, customers are 15 times more likely to experience an outage caused by distribution issues than by generation shortfalls. (The “generation” outages were really transmission outages: August 2020 was the first time that California experienced a true generation shortage requiring imposed rolling blackouts, and the withholding-driven blackouts of 2001 don’t count.) Even the widespread blackouts in Texas in February 2021 are attributable in large part to problems beyond just a generation shortage.

Yet policymakers and stakeholders focus almost solely on increasing reserve margins to improve reliability. If we instead looked at the most comprehensive means of improving reliability in the way that matters to customers, we’d probably find that distributed energy resources (DERs) are a much better fit. To the extent that DERs can relieve distribution-level loads, we gain at both levels, not just at the system level with added bulk generation.

This approach first requires a change in how resource adequacy is defined and modeled, looking from the perspective of the customer meter. It will require a more extensive analysis of distribution circuits and of the ability of individual circuits to island and self-supply during stressful conditions. It also requires a better assessment of the conditions that lead to local outages. Increased resource diversity should improve the probability of availability as well. Current modeling of the benefits of regions leaning on each other depends largely on deterministic assumptions about resource availability; instead we should be using probability distributions for resources and loads to assess overlapping conditions. An important aspect of reliability is that, thanks to diversity, 100 10 MW generators each with a 10% probability of outage provide much more reliability than a single 1,000 MW generator with the same 10% outage rate, as the sketch below illustrates. This fact is generally ignored in setting reserve margins for resource adequacy.
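
As a minimal sketch (mine, not drawn from any resource adequacy study), the comparison can be made with a simple binomial calculation, assuming unit outages are independent:

from math import comb

# Minimal sketch: compare loss-of-capacity risk for a fleet of 100 independent
# 10 MW units vs. a single 1,000 MW unit, each with a 10% forced outage rate.
# The 10 MW, 1,000 MW and 10% figures come from the text; independence is assumed.
UNITS, UNIT_MW, OUTAGE_RATE = 100, 10, 0.10
SINGLE_MW = 1000

def prob_at_least_out(n_units, k_out, p):
    """P(at least k_out of n_units are simultaneously on outage) -- binomial tail."""
    return sum(comb(n_units, k) * p**k * (1 - p)**(n_units - k)
               for k in range(k_out, n_units + 1))

# Chance of losing 20% or more of capacity at once (200 MW of the fleet,
# vs. the all-or-nothing single unit).
fleet_risk = prob_at_least_out(UNITS, 20, OUTAGE_RATE)   # about 0.2%
single_risk = OUTAGE_RATE                                # 10%

print(f"Fleet of 100 x 10 MW: P(lose >= 200 MW) = {fleet_risk:.4f}")
print(f"Single 1,000 MW unit: P(lose 1,000 MW)  = {single_risk:.4f}")

With independent outages, the small-unit fleet loses 200 MW or more only about 0.2% of the time, while the single large unit loses its entire 1,000 MW a full 10% of the time.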

We also should consider shifting resource investment from bulk generation (and storage), where it has a much smaller impact on individual customer reliability, to lower-voltage distribution. Microgrids are an example of an alternative that better focuses on solving the real problem. Let’s start a fundamental reconsideration of our electric grid investment plan.

What is driving California’s high electricity prices?

This report by Next10 and the University of California Energy Institute was prepared for the CPUC’s en banc hearing February 24. The report compares average electricity rates against other states, and against an estimate of “marginal costs.” (The latter estimate is too low but appears to rely mostly on the E3 Avoided Cost Calculator.) It shows those rates to be multiples of the marginal costs. (PG&E’s General Rate Case workpapers calculate that its rates are about double the marginal costs estimated in that proceeding.) The study attempts to list the reasons why the authors think these rates are too high, but it misses the real drivers of these rate increases. It also uses an incorrect method for calculating the market value of acquisitions and deferred investments, using the current market value instead of the value at the time the decisions were made.

We can explore the reasons why PG&E’s rates are so high, many of which apply to the other two utilities as well. Starting with generation costs, PG&E’s portfolio mismanagement cannot be explained away with a simple assertion that the utility bought when prices were higher. In fact, PG&E failed in several ways.

First, PG&E knew about the risk of customer exit as early as 2010, as revealed during the PCIA rulemaking hearings in 2018. PG&E continued to procure as though it would be serving its entire service area instead of planning for the rise of CCAs. Further, PG&E was told as early as 2010 (in my GRC testimony) that it was consistently forecasting load too high, but it didn’t bother to correct the error. Instead, service area load is basically at the same level as it was a decade ago.

Second, PG&E could have procured in stages rather than in two large rounds of requests for offers (RFOs), which it finished by 2013. By 2011 PG&E should have realized that solar costs were dropping quickly (if it had read the CEC Cost of Generation Report that I managed) and rolled out the RFOs in a manner that took advantage of that improvement. Further, it could have signed PPAs for the 10-year minimum period under state law rather than the industry-standard 30 years. PG&E was managing its portfolio in the standard-practice manner, which was foolish in the face of what was occurring.

Third, until 2018 PG&E failed to offer any part of its portfolio for sale to CCAs as they departed. Instead, PG&E could have unloaded its expensive portfolio in stages starting in 2010. The ease of the recent RPS sales illustrates that PG&E’s claims about creditworthiness and other problems had no foundation.

I calculated here what the cost of PG&E’s mismanagement has been. While SCE and SDG&E have not faced the same degree of customer exit to CCAs, the same basic problems exist in their portfolios.

Another factor for PG&E is that ratepayers have paid twice for Diablo Canyon. I explain here how PG&E fully recovered its initial investment costs by 1998 but, as part of restructuring, got to roll most of those costs back into rates. Fortunately these units retire by 2025, and rates will go down substantially as a result.

In distribution costs, both PG&E and SCE have requested over $2 billion for “new growth” in each of their GRCs since 2009, despite my testimony showing that the growth was not going to materialize, and it did not materialize. If the growth were arising from the addition of new developments, the developers and new customers should have been paying for those additions through the line extension rules that assign that cost responsibility. The utilities’ distribution planning process is opaque. When asked for the workpapers underlying the planning process, both PG&E and SCE responded that the entirety was contained in the Word tables in their testimonies. The growth projections had not been reconciled with the system load forecasts until this latest GRC, so the totals of the individual planning units exceeded the projected total system growth (which itself was too high compared to both other internal growth projections and realized growth). The result is gross overinvestment in distribution infrastructure, with substantial overcapacity in many places.

For transmission, the true incremental cost has not been fully reported, which means that other cost-effective solutions, including smaller renewables located closer to load, have been ignored. Transmission rates have more than doubled over the last decade as a result.

The Next10 report does not appear to reflect the full value of public purpose program spending on energy efficiency, in large part because it uses a short-run estimate of marginal costs. The report similarly underestimates the value of behind-the-meter solar rooftops. The correct method for both is to use the market value of the deferred resources–generation, transmission and distribution–at the time those resources were added. So, for example, a solar rooftop installed in 2013 was displacing utility-scale renewables that cost more than $100 per megawatt-hour. That should not be compared to the current market value of less than $60 per megawatt-hour, because the investment was not made on a speculative basis–it was a contract based on embedded utility costs.
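
As a simple numeric illustration (the $100 and $60 per megawatt-hour figures come from the paragraph above; the rooftop output is a hypothetical round number), the two valuation methods give very different answers:

# Minimal sketch: value a 2013 rooftop system against the market it displaced
# at the time of the decision, not against today's market. The annual output
# below is a hypothetical placeholder for a typical residential system.
ANNUAL_MWH = 6            # assumed rooftop output, MWh per year
VINTAGE_VALUE = 100       # $/MWh, cost of utility-scale renewables displaced in 2013
CURRENT_VALUE = 60        # $/MWh, today's market price (the wrong benchmark)

value_at_vintage = ANNUAL_MWH * VINTAGE_VALUE   # what the installation actually avoided
value_at_current = ANNUAL_MWH * CURRENT_VALUE   # what a spot-market comparison implies

print(f"Valued at 2013 displaced cost: ${value_at_vintage:,.0f} per year")
print(f"Valued at current market:      ${value_at_current:,.0f} per year")

The roughly 40 percent gap between the two is not evidence of an uneconomic decision; it is the result of judging a 2013 commitment against today’s market.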

Advanced power system modeling need not mean more complex modeling

A recent article by E3 and Form Energy in Utility Dive calls for more granular temporal modeling of the electric power system to better capture the constraints of a fully renewable portfolio and the requirements for supporting technologies such as storage. The authors have identified the correct problem–most current models use a “typical week” of loads that is an average of historic conditions, combined with probabilistic representations of unit availability. This approach fails to capture the “tail” conditions where renewables and currently available storage are unlikely to be sufficient.

But the answer is not a full-blown, hour-by-hour model of the entire year run over the many permutations of possibilities. These system production simulation models already take too long to run a single scenario due to the complexity of this giant “transmission machine.” Adding the required uncertainty would cause these models to run “in real time,” as some modelers describe it.

Instead, a separate analysis should first identify the conditions under which renewables plus current-technology storage are unlikely to meet demand. These include droughts that limit hydropower, extreme weather, and extended weather patterns that limit renewable production. These conditions can then be input into the current models to assess how the system responds.
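
As a minimal sketch of that screening step (the pandas layout, column names and thresholds here are illustrative assumptions, not taken from any existing model), the stressed windows can be flagged before any detailed simulation is run:

import pandas as pd

def find_stress_windows(df: pd.DataFrame, window_hours: int = 72,
                        shortfall_threshold_mwh: float = 0.0) -> pd.DataFrame:
    """Flag rolling windows where load net of renewables and hydro exceeds
    the storage energy on hand. Expects hourly columns 'load_mw',
    'wind_solar_mw', 'hydro_mw' and 'storage_mwh_available'."""
    net_load = (df["load_mw"] - df["wind_solar_mw"] - df["hydro_mw"]).clip(lower=0)
    rolling_deficit_mwh = net_load.rolling(window_hours).sum()
    stressed = (rolling_deficit_mwh - df["storage_mwh_available"]
                > shortfall_threshold_mwh)
    return df.loc[stressed]

# Usage sketch: stress_hours = find_stress_windows(hourly_df)
# Only the flagged dates then go into the production simulation model
# for detailed hourly treatment.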

The two important fixes, which have always been problems in these models, are to energy-limited resources and unit commitment algorithms. Both are complex problems, and these models have not done well in scheduling seasonal hydropower pondage storage or in deciding which units to commit to meet a high demand several days ahead. (These problems are also why relying solely on hourly bulk power pricing doesn’t give an accurate measure of the true market value of a resource.) But focusing on these two problems is much easier than trying to incorporate the full range of uncertainty for all 8,760 hours for at least a decade into the future.

We should not confuse precision with accuracy. The current models can be quite precise on specific metrics, such as unit efficiency at different load points, but they can be inaccurate because they don’t capture the effects of load and fuel price variations. We should not be trying to achieve spurious precision through more complete, granular modeling–we should be focusing on accuracy in the narrow situations that matter.

Vegetation maintenance is the new “CFL” for wildfire management

PG&E has been aggressively cutting down trees as part of its attempt to mitigate wildfire risk, but those efforts may be creating their own risks. PG&E has previously been accused of focusing on numeric targets rather than effective vegetation management. The situation is reminiscent of how the utilities pursued energy efficiency prior to 2013, with a seemingly single-minded focus on compact fluorescent lights (CFLs). That focus did not end well, leading to both environmental degradation and unearned incentives for utilities.

CFLs represented about 20% of residential energy efficiency program spending in 2009. CFLs were easy for the utilities–they just delivered steeply discounted, or even free, CFLs to stores and got to count each bulb as “energy savings.” By 2013, the CPUC ordered the utilities to ramp down spending on CFLs as a new cost-effective technology (LEDs) emerged and the problem of disposing of the mercury contained in CFLs became apparent. But more importantly, it turned out that many CFLs were just sitting in closets, creating far fewer savings than estimated. (It didn’t help that CFLs turned out to have a much shorter life than initially estimated.) Even so, the utilities were able to claim incentives from the California Public Utilities Commission. Ultimately, it became apparent that CFLs were largely a mistake in the state’s energy efficiency portfolio.

Vegetation management seems to be the same “easy number counting” solution that the utilities, particularly PG&E, have adopted. The adverse consequences will be significant, and it won’t solve the problem in the long run. Its one advantage is that it allows the utilities to maintain their status quo position at the center of the utility network.

Other alternatives include system hardening such as undergrounding, or building microgrids in rural communities to allow utilities to de-energize the grid while maintaining local power. The latter option appears to be the most cost-effective solution, but it is also the most threatening to the incumbent utility’s current position because it gives customers more independence.

CAISO doesn’t quite grasp what led to rolling blackouts

Steve Berberich, CEO of the California Independent System Operator (CAISO), gave GTM his assessment of the reasons for the rolling blackouts in the face of a record-setting heat wave. He overlooked a key reason for the delay in capacity procurement (called “resource adequacy” or RA), and he demonstrated a lack of understanding of how renewables and batteries will integrate to provide peak capacity.

Berberich is unwilling to acknowledge that at least part of the RA procurement problem was created by CAISO’s unwillingness to step in as a residual buyer in the RA market, which then created resistance by the CCAs to putting the IOUs in that role. RA procurement was delayed at least a year due to CAISO’s reluctance. CAISO appears to be politically tone-deaf to the issues being raised by CCAs on system procurement.

He says that solar will have to be overbuilt to supply energy to batteries for peak load. But that is already the case, with the net qualifying capacity (NQC) based on effective load carrying capability (ELCC) at just a fraction of installed solar and wind capacity. Renewable capacity above the ELCC is available to charge the batteries for later use. The only questions then are how much energy is required from the batteries to support the peak load and whether that energy can come from the existing renewables fleet. The resource adequacy paradigm is changing to one (more akin to the old Pacific Northwest hydro system) in which energy, not built capacity, is becoming the constraint.
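
A back-of-the-envelope sketch (all numbers hypothetical) shows how the question shifts from built capacity to available energy:

# Minimal sketch, hypothetical numbers: is there enough surplus renewable
# energy on a stressed day to charge the batteries that serve the evening peak?
INSTALLED_SOLAR_MW = 15_000   # assumed installed solar capacity
ELCC_FRACTION = 0.30          # assumed share counted toward RA (the NQC/ELCC)
SURPLUS_HOURS = 6             # midday hours with output above load
SURPLUS_SHARE = 0.5           # share of above-ELCC output actually surplus

BATTERY_MW = 3_000            # battery fleet discharging into the evening peak
PEAK_HOURS = 4                # duration the batteries must cover
ROUND_TRIP_EFF = 0.85         # round-trip efficiency

surplus_mwh = (INSTALLED_SOLAR_MW * (1 - ELCC_FRACTION)
               * SURPLUS_SHARE * SURPLUS_HOURS)
needed_mwh = BATTERY_MW * PEAK_HOURS / ROUND_TRIP_EFF

print(f"Surplus renewable energy available: {surplus_mwh:,.0f} MWh")
print(f"Energy needed to serve the peak:    {needed_mwh:,.0f} MWh")
print("Enough surplus to charge the fleet." if surplus_mwh >= needed_mwh
      else "An energy-constrained day.")

The particular numbers matter less than the accounting: capacity above the ELCC adds nothing to the RA count, yet it is exactly what produces the energy the batteries need.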

Levelized costs are calculated correctly

Utility Dive recently published an opinion article claiming that the conventional method of calculating the levelized cost of energy (LCOE) is incorrect. The article was derived from a 2019 article in the Electricity Journal by the same author, James Loewen, which claimed that the conventional method gives biased results against more capital-intensive generation resources, such as renewables, compared to fossil-fueled ones. I wrote a comment to the Electricity Journal showing the errors in Loewen’s reasoning and further reinforcing the rationale for the conventional LCOE calculation. (You have until August 9 to download my article for free.)

I was the managing consultant who assisted the California Energy Commission (CEC) in preparing one of the studies (CEC 2015) referenced by Loewen. I also led the preparation of three earlier studies that updated cost estimates (CEC 2003, CEC 2007, CEC 2010). In developing these models, the consultants and staff discussed this issue extensively and concluded that the LCOE must be calculated by discounting both future cashflows and future energy production. Only in this way can a true comparison of discounted energy values be made.
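
In my own notation (not drawn from the CEC reports), with C_t the costs incurred in year t, E_t the energy produced, r the discount rate and T the economic life, the conventional definition is:

\mathrm{LCOE} \;=\; \frac{\sum_{t=0}^{T} C_t\,(1+r)^{-t}}{\sum_{t=0}^{T} E_t\,(1+r)^{-t}}

Discounting appears in both the numerator and the denominator; that symmetry is the point.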

The error in Loewen’s article arises from a misconception that money is somehow different and unique from all other goods and services. Money serves three roles in the economy: as a medium of exchange, as a unit of account, and as a store of value. At its core, money is a commodity used predominantly as an intermediary in the barter economy and as a store of value until needed later. (We can see this particularly when currency was generally backed by a specific commodity–gold.) Discounting derives from the opportunity cost of holding, and not using, that value until a future date. So discounting applies to all resources and services, not just to money.

Blanchard and Fischer (1989), at pp. 70-71, describe how “utility” (which is NOT measured in money) is discounted in economic analysis. Utility is gained by consumption of goods and services. Blanchard and Fischer include an extensive discussion of the marginal rate of substitution between two periods. Again, note that there is no discussion of money in this economic analysis–only the consumption of goods and services in two different time periods. That means goods and services are being discounted directly. The LCOE must be calculated in the same manner to be consistent with economic theory.

We should be able to recover the net present value of project costs by multiplying the LCOE by the discounted generation over the economic life of the project. We only get the correct answer if we use the conventional LCOE. I walk through the calculation demonstrating this result in the article.
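
Here is a minimal numeric check of that identity, using a hypothetical project rather than any of the CEC cases:

# Minimal check, hypothetical project: the conventional LCOE times discounted
# generation recovers the present value of costs exactly.
r = 0.07                          # assumed discount rate
costs = [1000] + [20] * 20        # $: upfront capital, then annual O&M
energy = [0] + [50] * 20          # MWh: no output during the construction year

pv = lambda stream: sum(x / (1 + r) ** t for t, x in enumerate(stream))

lcoe = pv(costs) / pv(energy)     # conventional definition
print(f"LCOE = ${lcoe:.2f}/MWh")
print(f"PV of costs           = ${pv(costs):,.2f}")
print(f"LCOE x PV of energy   = ${lcoe * pv(energy):,.2f}")   # matches PV of costs
print(f"LCOE x raw energy     = ${lcoe * sum(energy):,.2f}")  # does not

An LCOE computed without discounting the energy stream fails this check.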

Victory for mobilehome park residents and owners

The California Public Utilities Commission (CPUC) authorized a 10-year continuation of the program that converts privately held utility systems in mobilehome parks to ownership by the investor-owned energy utilities, including Pacific Gas & Electric, Southern California Edison, San Diego Gas & Electric and Southern California Gas. Of the 400,000 mobilehome spaces in California, over 300,000 are currently served by “master metered” systems that are owned and maintained by the park owner.

Most of these systems were built more than 40 years ago, although many have been replaced periodically. This program aims to transfer all of these systems to standard utility service. Due to the age of these systems, some initially engineered to last only a dozen years because the parks were intended as “transitional” land uses, safety concerns have been paramount. The program will bring these systems up to the standards that other California ratepayers enjoy.

Along with improved safety, residents will gain greater access to energy efficiency and other energy management programs that they already fund through utility rates, as well as smoother billing. Residents also will have access to time-of-use rates that have been precluded by the intervening master meter. Park owners will avoid the increasing complexity of billing, system maintenance, safety inspections and filings, and the future costs of system replacement. In addition, park owners have been inadequately compensated through utility rates for maintaining those systems and have met resistance in recovering related costs through rents.

I have been working with one of my clients, the Western Manufactured Housing Communities Association (WMA), since 1997 to achieve this goal. The momentum finally shifted in 2014 when we convinced the utilities that making these investments could be profitable. First, a three-year pilot program was authorized, and this recent decision builds on that.


Should CCAs accept a slice of Diablo Canyon power?

The northern California community choice aggregators (CCAs) are considering an offer from PG&E to allocate to each CCA a proportionate share of parts of its portfolio, including the Diablo Canyon nuclear generation station. Many CCA boards are hearing from anti-nuclear activists urging them to reject this offer, both for moral reasons and in the belief that a rejection will somehow pressure PG&E financially. The first set of concerns is beyond my professional expertise, but the activists’ reasoning on the economic and regulatory issues is incorrect.

  • CCAs buy a substantial portion of their generation (the majority for many of them) from the California Independent System Operator (CAISO) energy markets. PG&E schedules Diablo Canyon into those CAISO markets and under the current CAISO tariffs, nuclear generation is a “must take” resource that the CAISO can’t turn back. So the entire output of Diablo Canyon is scheduled into the CAISO market (without any bidding process), PG&E is paid the market clearing price (MCP) for that Diablo power, and the CCAs buy that mix of nuclear power at the MCP. There is no discretion for either the CAISO or the CCAs in taking excess power from Diablo. There is no “lifeline” for Diablo that the CCAs have any control over under current legal and regulatory parameters.
  • CCAs already pay for a proportionate share of Diablo Canyon equal to each CCA’s share of overall load. That payment is broken into two parts (and maybe a third): 1) the purchase of energy from the CAISO at the MCP, and 2) the stranded capital and operating costs above the MCP recovered through the PCIA. (CCAs also may be paying for a share of the resource adequacy value, but I haven’t thought that one through.) Thus, if the CCAs receive credit for the energy that they are already paying for, the energy portion essentially comes for “free.” In addition, because CCAs currently pay for the remaining share of Diablo costs but get no energy credit for that in the PCIA calculation, that credit in the PCIA is also “free.” The CCAs likewise gain credit, as LSEs, for Diablo’s GHG-free generation (as recognized in the Air Resources Board GHG allowance program) at no extra cost, or for “free.” The bottom line is that when the CCAs gain credit for products they are already paying for, receipt of those products is “free” (see the arithmetic sketch after this list).
  • Accepting this deal will not solve ALL of the CCAs’ problems, but that’s a false goal–it was never the intent. It does, however, give the CCAs a respite to get through the period until Diablo retires. One needs to recognize that this provides some of the needed relief.
  • Finally, there’s never any certainty over any large deal. Uncertainty should not freeze decision making. The uncertainty about the PCIA going forward is equally large and perhaps offsetting. The risks should be identified, discussed, considered and addressed to the extent possible. But that’s different than simply nixing the deal without addressing the other large risk. Naively believing that Diablo can be closed in short order (especially with the COVID crisis) is not a true risk management strategy.
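
As an arithmetic sketch of the “free” point (all prices and volumes hypothetical):

# Minimal sketch, hypothetical prices: a CCA already pays the market clearing
# price (MCP) for energy from CAISO plus the above-market PCIA for its share
# of Diablo costs, whether or not it accepts the allocation.
MCP = 40                  # $/MWh, hypothetical CAISO market clearing price
PCIA = 15                 # $/MWh, hypothetical above-market Diablo charge
ALLOCATED_MWH = 100_000   # hypothetical CCA share of Diablo output

payments_without_allocation = ALLOCATED_MWH * (MCP + PCIA)
payments_with_allocation    = ALLOCATED_MWH * (MCP + PCIA)  # the same payments
credited_products = {"energy_mwh": ALLOCATED_MWH,           # now credited to the CCA
                     "ghg_free_mwh": ALLOCATED_MWH}

print(f"Added cost of accepting: ${payments_with_allocation - payments_without_allocation:,}")
print(f"Products newly credited to the CCA: {credited_products}")

The CCA’s out-of-pocket payments are unchanged; what changes is that energy and GHG-free attributes it was already paying for now show up in its own portfolio accounting.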

From these points, we can come to these conclusions:

  1. Whether the CCAs accept or reject the nuclear offer has NO impact on PG&E’s revenue stream. The decisions that the CCAs face are entirely about whether they can lower their costs and gain some additional GHG reduction credits that they are already paying for (in other words, reduce their subsidies of bundled customers). Nothing that the CCAs decide will affect the closure date of Diablo. If the CCAs reject the allocations, it will simply be business as usual until the full closure in 2025. Any other interpretation doesn’t reflect the current regulatory environment at the CPUC, which is unlikely to change (and even that is unknown) until enough commissioners’ five-year terms roll over.
  2. The system can only be changed by legislative and regulatory action. That means the CCAs must make the most prudent financial decisions within the current context rather than making a purely symbolic gesture that is financially adverse and will do nothing to change business-as-usual practice. A wise decision would consider the true impact of the action.
  3. Finally, early closure of Diablo will NOT remove the invested capital cost from PG&E’s ratebase, which is what drives the PCIA. After the plant is closed, activists will ALSO have to show that the INVESTMENT in the plant was imprudent and should not have been allowed. Given the long history of decisions and settlements on Diablo investment costs, and the inclusion of Diablo cost recovery in both AB1890 and AB1X at the beginning and end of the energy crisis, that is an impossible task. Only a constitutional amendment through the initiative process could possibly lead to such an action, and even that would have to survive a court challenge that probably would push past 2024.

I want to finish with what I think is a very important point that has been overlooked by the activists: the effort to close Diablo Canyon has won. Activists might not like the timeline of that victory, but it is a victory nevertheless, one that looked unachievable prior to 2016. It’s worth asking whether the added effort is justified for what will be, for a variety of reasons, little additional gain.

Note that Diablo Canyon is already scheduled for closure in 2024 and 2025. A proceeding either to reopen A.16-08-006 or to open a new rulemaking or application would probably take close to a year to get started, so it probably wouldn’t open until almost 2021. The actual proceeding would take up to a year, so now we’re at 2022 before an actual decision. PG&E would then need up to a year to plan the closure, which takes us to 2023. So at best the plant closes a year earlier than currently scheduled. In addition, PG&E would still receive the full payments for its investments, and there are likely no capital additions avoided by the early closure, so the cost savings would be minimal.

How to choose a water system model

The California Water & Environmental Modeling Forum (CWEMF) has proposed updating its water modeling protocol guidance, last issued in 2000. This modeling protocol applies to many other settings as well, including electricity production and planning (with which I am familiar). I led the review of electricity system simulation models for the California Energy Commission and asked many of these questions then.

Issues that should be addressed in water system modeling include:

  • Models can be used for either short-term operational or long-term planning purposes; models rarely can serve both masters. A model should be chosen based on whether its analytic focus is predicting a particular outcome with accuracy and/or precision (usually for short-term operations) or identifying resilience and sustainability.
  • There can be a trade-off between accuracy and precision. Focusing too heavily on precision in one aspect of a model is unlikely to improve the overall accuracy of the model, due to the lack of precision elsewhere. In addition, increased precision increases processing time, slowing output and reducing flexibility.
  • A model should be able to produce multiple outcomes quickly as a “scenario generator” for analyzing uncertainty, risk and vulnerability. The model should be tested for accuracy when relaxing key constraints that increase processing time. For example, in an electricity production model, relaxing the unit commitment algorithm increased processing speed twelvefold while losing only 7 percent in accuracy, mostly in the extreme tail cases.
  • Water models should be able to use different water condition sequences rather than relying on historic traces. In the latter case, models may operate as though the future is known with certainty.
  • Water management models should include the full set of opportunity costs for water supply, power generation, flood protection and groundwater pumping. This implies that some type of linkage should exist between these types of models.

We’ve already paid for Diablo Canyon

As I wrote last week, PG&E is proposing that a share of Diablo Canyon nuclear plant output be allocated to community choice aggregators (CCAs) as part of the resolution of issues related to the Integrated Resource Plan (IRP), Resource Adequacy (RA) and Power Charge Indifference Adjustment (PCIA) rulemakings. While the allocation makes sense for CCAs, it does not solve the problem that PG&E ratepayers are paying for Diablo Canyon twice.

In reviewing the second proposed settlement on PG&E costs in 1994, we took a detailed look at PG&E’s costs and revenues at Diablo. Our analysis revealed a shocking finding.

Diablo Canyon was infamous for increasing in cost by more than ten-fold from the initial estimate to coming on line. PG&E and ratepayer groups fought over whether to allow recovery of $2.3 billion of those costs. The compromise in 1988 was to essentially shift the risk of cost recovery from ratepayers to PG&E through a power purchase agreement modeled on the Interim Standard Offer Number 4 contract offered to qualifying facilities (but suspended as oversubscribed in 1985).

However, the contract terms were so favorable and rich to PG&E that Diablo costs pushed up overall retail rates. These costs were a key factor that led industrial customers to push for deregulation and restructuring. As an interim solution in 1995, in anticipation of the forthcoming restructuring, PG&E and ratepayer groups arrived at a new settlement that moved Diablo Canyon back into PG&E’s regulated ratebase, earning the utility’s allowed return on capital. PG&E was allowed to keep 100% of the profit collected between 1988 and 1995. The subsequent 1996 settlement made some adjustments but arrived at essentially the same result. (See Decision 97-05-088.)

While PG&E had borne the risks for seven years, that was during the plant startup and its earliest years of operation. As we’ve seen with San Onofre NGS and other nuclear plants, operational reliability is most at risk late in the life of the plant. PG&E originally took on the risk of recovering its entire investment over the entire life of the plant. The 1995 settlement transferred the risk of recovering costs over the remaining life of the plant back to ratepayers. In addition, PG&E was allowed to roll the disputed $2.3 billion into rate base. This shifted cost recovery back to the standard rate of depreciation over the 40-year life of the NRC license. In other words, PG&E had done an end-run on the original 1988 settlement AND got to keep the excess profits.

The fact that PG&E accelerated its investment recovery over the first seven years and then shifted recovery risk to ratepayers implies that PG&E should be allowed to recover only the amount that it would have earned at a regulated return under the original 1988 settlement. This is equal to the discounted net present value of the net income earned by Diablo Canyon, over both the periods of the 1988 (PPA) and 1995 settlements.

In 1996, we calculated what PG&E should be allowed to recover in the settlement given this premise. We assumed that PG&E would be allowed to recover the disputed $2.3 billion because it had taken on that risk in 1988, but that the net income stream should be discounted at the historic allowed rate of return over the seven-year period. Based on these assumptions, PG&E had recovered its entire $7.7 billion investment by October 1997, just prior to the opening of the restructured market in March 1998. In other words, PG&E shareholders were already made whole by 1998, as the cost recovery for Diablo was shifted back to ratepayers. Instead, the settlement agreement has caused ratepayers to pay twice for Diablo Canyon.

PG&E has made annual capital additions to continue operation at Diablo Canyon since then, and a regulated return on those is allowed under the regulatory compact. Nevertheless, the correct method for analyzing the potential loss to PG&E shareholders from closing Diablo is to first subtract $5.1 billion from plant in service, reducing the current ratebase to the capital additions incurred since 1998. This would reduce the sunk costs to be recovered in rates from $31 to $3 per megawatt-hour.

Note that PG&E shareholders and bondholders have earned a weighted return of approximately 10% annually on this $5.1 billion since 1998. The compounded value of that excess return earned by PG&E had reached $18.1 billion by 2014.
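
A rough check of that figure, assuming the roughly 10% return compounds annually over the 16 years from 1998 to 2014:

# Rough check: $5.1 billion earning a ~10% weighted return, with those returns
# compounding annually from 1998 through 2014 (figures from the text above;
# the exact compounding convention is my assumption).
principal_billion = 5.1
rate = 0.10
years = 2014 - 1998               # 16 years

excess_billion = principal_billion * ((1 + rate) ** years - 1)
print(f"Compounded excess return by 2014: ~${excess_billion:.1f} billion")

The result lands close to the $18.1 billion cited above; the small difference depends on the compounding convention used.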