Tag Archives: distribution planning

The scale economy myth of electric utilities

Vibrant Clean Energy released a study showing that including large amounts of distributed energy resources (DERs) can lower the cost of achieving 100% renewable energy. Commenters here have criticized the study for several reasons, some citing the supposed economies of scale of the grid.

While economies of scale might hold for individual customers in the short run, the data I’ve been evaluating in the PG&E and SCE general rate cases aren’t necessarily consistent with that notion. I’ve already discussed here my analysis of both the CAISO and PJM systems, which shows marginal transmission costs that are twice the current transmission rates. The rapid rise in those rates over the last decade is consistent with this finding. If economies of scale held for the transmission network, those rates should be stable or falling.
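
The logic here is the standard cost-curve identity: average cost, and hence rates, rises whenever marginal cost exceeds it. A minimal sketch (illustrative numbers only, not from any filing) shows how rates that track average cost must climb when expansion costs twice the embedded average:

```python
# Illustrative only: when the marginal cost of expansion exceeds the embedded
# average cost, growth pushes the average (and hence rates) upward; when it is
# below the average, growth pulls rates down (economies of scale).
def average_cost_after_growth(ac0, mc_ratio, steps, step_size=0.01):
    """ac0: starting average cost; mc_ratio: marginal cost as a multiple of the
    current average cost; steps: growth increments; step_size: growth per
    increment as a fraction of the initial system size."""
    size, total_cost = 1.0, ac0
    for _ in range(steps):
        marginal_cost = mc_ratio * (total_cost / size)  # MC tied to current AC
        total_cost += marginal_cost * step_size          # cost of added capacity
        size += step_size
    return total_cost / size

print(average_cost_after_growth(100.0, 2.0, 50))  # MC = 2x AC -> average rises (~150)
print(average_cost_after_growth(100.0, 0.5, 50))  # MC = 0.5x AC -> average falls (~82)
```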

On the distribution side, the added investment reported in those two utilities’ FERC Form 1 filings is not consistent with the marginal costs used in the GRC filings. For example, the added investment reported in Form 1 for final service lines (transformers, services, and meters, or TSM) appears to be almost 10 times larger than what is implied by the marginal costs and new customers in the GRC filings. And again, the average cost of distribution is rising while energy and peak loads have been flat across the CAISO area since 2006. The utilities have repeatedly asked for $2 billion each GRC for “growth” in distribution, but given that load has been flat (and even declined in 2019 and 2020), there is likely a significant amount of stranded distribution infrastructure. If that incremental investment is instead for replacement (which is not consistent with either their depreciation schedules or their assertions about the true life of their facilities and the replacement costs within their marginal cost estimates), then they are grossly underestimating the future replacement cost of facilities, which means they are underestimating the true marginal costs.

I can see a future replacement liability right outside my window. The electric poles there were installed by PG&E more than 60 years ago and are likely reaching the end of their lives. I can see the next step being undergrounding the lines at a cost of $15,000 to $25,000 per house, based on the ongoing mobilehome park conversion program and the typical Rule 20 undergrounding project. Deferring that cost is a valid DER value. We will have to replace many services over the next several decades, and that doesn’t address the higher-voltage parts of the system.

We have a counterexample to the supposed monopoly in the cable/internet system: I have at least two competing options where I live. The cell phone network also turned out not to be a natural monopoly. And in an area where the PG&E and Merced ID service territories overlap, there are parallel distribution systems. The claim of a “natural monopoly” is more likely a legal fiction that protects the incumbent utility and is simpler for local officials to manage when awarding franchises.

If the claim of natural monopolies in electricity were true, then the distribution rate components for SCE and PG&E should be much lower than those of smaller munis such as Palo Alto or Alameda. But that’s not the case. The cost advantages for SMUD and Roseville are larger than can be explained simply by differences in the cost of capital. The Division/Office of Ratepayer Advocates critiqued a study by Christensen Associates for PG&E’s 1999 GRC that showed the optimal utility size was about 500,000 customers. (PG&E’s witness Ken Train, then a professor at UC Berkeley, inadvertently confirmed the results, and Commissioner Richard Bilas, a Ph.D. economist, noted this in his proposed decision, which was never adopted because it was short-circuited by restructuring.) Given that finding, these utilities operate well beyond the minimum of their average cost curves, which means the true marginal cost of a customer and associated infrastructure is higher than the average cost. The likely counterbalancing cause is an organizational diseconomy of scale that overwhelms the technological benefits of size.

Finally, generation no longer shows the economies of scale that once dominated the industry. The modularity of combined cycle plants and the efficiency improvements of combustion turbines (CTs) started the industry down the road toward the efficiency of “smallness.” Solar plants are similarly modular. The reason additional solar generation appears so low cost is that much of it comes from adding another set of panels to an existing plant while avoiding additional transmission interconnection costs (which are the lion’s share of the costs that create what economies of scale do exist).

The VCE analysis takes a holistic, long-term view. It relies on long-run marginal costs (LRMC), not the short-run marginal costs that will never converge on the LRMC given how the electricity system is regulated. The study should be evaluated in that context.

Outages highlight the need for a fundamental revision of grid planning

The salience of outages caused by distribution problems, such as those during the record heat in the Pacific Northwest and California’s public safety power shutoffs (PSPS), highlights the need for a change in perspective on addressing reliability. In California, customers are 15 times more likely to experience an outage due to distribution issues than due to generation (well, really transmission, as August 2020 was the first time that California experienced a true generation shortage requiring rolling blackouts; the withholding in 2001 doesn’t count). Even the widespread blackouts in Texas in February 2021 are attributable in large part to problems beyond just a generation shortage.

Yet policymakers and stakeholders focus almost solely on increasing reserve margins to improve reliability. If we instead looked for the most comprehensive means of improving reliability in the manner that matters to customers, we’d probably find that distributed energy resources are a much better fit. To the extent that DERs can relieve distribution-level loads, we gain at both levels, not just at the system level with added bulk generation.

This approach first requires a change in how resource adequacy is defined and modeled, looking from the perspective of the customer meter. It will require a more extensive analysis of distribution circuits and of the ability of individual circuits to island and self-supply during stressful conditions. It also requires a better assessment of the conditions that lead to local outages. Increased resource diversity should lead to an improved probability of availability as well. Current modeling of the benefits of regions leaning on each other depends largely on deterministic assumptions about resource availability. Instead we should be using probability distributions for resources and loads to assess overlapping conditions. An important aspect of reliability is that, thanks to diversity, 100 generators of 10 MW each with a 10% probability of outage provide much more reliability than a single 1,000 MW generator with the same 10% outage rate. This fact is generally ignored in setting reserve margins for resource adequacy.
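
The diversity point is easy to verify with the binomial distribution. Here is a minimal sketch (my own illustration; it assumes unit outages are independent, which overstates diversity somewhat but makes the point):

```python
# Compare the probability of losing at least 300 MW (30% of capacity) for a
# single 1,000 MW unit vs. a fleet of 100 independent 10 MW units, each with
# a 10% forced outage rate.
from math import comb

p, n = 0.10, 100

# Single unit: any outage takes out all 1,000 MW, so P(lose >= 300 MW) = 10%.
print(f"Single 1,000 MW unit: P(lose >= 300 MW) = {p:.2f}")

# Fleet: losing 300 MW requires 30 or more simultaneous unit outages.
p_fleet = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(30, n + 1))
print(f"100 x 10 MW fleet:    P(lose >= 300 MW) = {p_fleet:.1e}")  # on the order of 1e-8
```

Both configurations have the same expected availability (90%), but the fleet almost never loses a large share of its capacity at once.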

We also should consider shifting resource investment from bulk generation (and storage), where it has a much smaller impact on individual customer reliability, to the lower-voltage distribution system. Microgrids are an example of an alternative that better focuses on solving the real problem. Let’s start a fundamental reconsideration of our electric grid investment plan.

Public takeover of PG&E isn’t going to solve every problem

This article in the Los Angeles Times about what a public takeover of PG&E would entail uses the premise that such a step would lead to lower costs, more efficiency and reduced wildfire risks. These expectations have never been realistic, and shouldn’t be the motivation for such an action. Instead, a public takeover would offer these benefits and opportunities:

  • While the direct costs of constructing and repairing the grid would likely be about the same (and PG&E has some of the highest labor costs around), the cost to borrow and invest the needed funds would be as much as 30% less (see the sketch after this list). That’s because PG&E’s weighted average cost of capital (debt and shareholder equity) is around 8% per annum while municipal debt is 5% or less.
  • Ratepayers are already repaying shareholders and creditors for their investments in the utility system. Buying PG&E’s system would simply be replacing those payments with payments to creditors that hold public bonds. Similar to the cost of fixing the grid, this purchase should reduce the annual cost to repay that debt by 30%.
  • And along these lines, utility shareholders have borne little of the costs from these types of risks. Shareholders supposedly get a premium on their investment returns for these “risks,” but when asked for examples of large-scale disallowances, none of the utilities could provide significant ones. If ratepayers are already bearing all of those risks, then they should get all of the investment benefits as well.
  • Direct public oversight will eliminate a layer of regulation that PG&E has used to impede effective oversight and deflect responsibility. To some extent regulation by the California Public Utilities Commission has been like pushing on a string, with PG&E doing what it wants by “interpreting” CPUC decisions. The result has been a series of missteps by the utility over many decades.
  • A new utility structure may provide an opportunity to renegotiate a number of overly lucrative renewable power purchase agreements that PG&E signed between 2010 and 2015. PG&E failed to properly manage the risk profile of its portfolio because, under state law, it could pass through all costs of those PPAs once approved by the CPUC. PG&E’s shareholders bore no risk, so why consider that risk? There are several possible options for addressing this issue, but PG&E has little incentive to act.
  • A publicly-owned utility can work more closely with local governments to facilitate the evolution of the energy system to meet climate change challenges. As a private entity with restrictions on how it can participate in customer-side energy management, PG&E cannot work hand-in-glove with cities and counties on building and transportation transformation. PG&E right now has strong incentives to prevent further defections away from its grid; public utilities are more likely to accept these defections with the possibility that the stranded asset costs will be socialized.
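
A rough check on the financing claim in the first bullet. This is a sketch, not a financing analysis: it assumes a 30-year recovery period and treats the gap between an 8% investor-owned WACC and 5% municipal debt as the only difference (taxes and issuance costs are ignored):

```python
# Annual carrying cost of $1 of capital, amortized over an assumed 30-year life.
def capital_recovery_factor(rate: float, years: int) -> float:
    return rate / (1 - (1 + rate) ** -years)

iou = capital_recovery_factor(0.08, 30)   # ~0.089 per dollar per year
muni = capital_recovery_factor(0.05, 30)  # ~0.065 per dollar per year
print(f"IOU: {iou:.4f}  Muni: {muni:.4f}  Savings: {1 - muni / iou:.0%}")  # ~27%
```

That puts the savings in the neighborhood of the 30% figure cited above.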

The risks of wildfire damages and liabilities are unlikely to change substantially (except to the extent that the last point accelerates distributed energy resource investment). But the other benefits and opportunities are likely to lower the overall costs of bearing those risks.

Microgrids could cost 10% of undergrounding PG&E’s wires

One proposed solution for reducing wildfire risk is for PG&E to put its grid underground. There are a number of problems with undergrounding, including increased maintenance costs, seismic and flooding risks, and problems with excessive heat (including exploding underground vaults). But even ignoring those issues, the costs could be exorbitant: greater than anyone has really considered. An alternative is shifting rural service to microgrids. A high-level estimate shows that using microgrids instead could cost less than 10% of undergrounding the lines in at-risk regions. The CPUC is considering a policy shift to promote this type of solution and has opened a rulemaking on promoting microgrids.

We can put this in context by estimating costs from PG&E’s data provided in its 2020 General Rate Case, and comparing that to its total revenue requirements. That will give us an estimate of the rate increase needed to fund this effort.

PG&E has about 107,000 miles of distribution-voltage wires and 18,500 miles of transmission lines. PG&E listed 25,000 miles of distribution lines as being in wildfire risk zones. If the risk is proportionate for transmission, that is another 4,300 miles. PG&E has estimated that it would cost $3 million per mile to underground distribution (ignoring the higher maintenance and replacement costs). Undergrounding transmission can cost as much as $80 million per mile; using estimates provided to the CAISO and picking the midpoint of the four-to-ten-times cost adder for undergrounding, $25 million per mile for transmission is reasonable. Based on these estimates it would cost $75 billion to underground distribution and $108 billion for transmission, for a total cost of $183 billion. Using PG&E’s current cost of capital, that translates into an annual revenue requirement of $9.1 billion.

PG&E’s overall annual revenue requirement is currently about $14 billion, and PG&E has asked for increases that could add another $3 billion. Adding $9.1 billion would add two-thirds (~67%) to PG&E’s overall rates, which include both distribution and generation. It would double distribution rates.
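
The arithmetic is easy to reproduce. A minimal sketch (the ~5% annualization factor is my inference; it is what reproduces the $9.1 billion figure from the $183 billion capital cost):

```python
# Back-of-envelope reproduction of the undergrounding estimate.
dist_miles, tx_miles = 25_000, 4_300       # at-risk distribution and transmission miles
dist_cost_m, tx_cost_m = 3, 25             # $M per mile to underground

dist_total_b = dist_miles * dist_cost_m / 1_000   # $75B
tx_total_b = tx_miles * tx_cost_m / 1_000         # ~$108B
capital_b = dist_total_b + tx_total_b             # ~$183B

annual_b = capital_b * 0.05                # ~5%/yr annualization -> ~$9.1B/yr
current_rr_b = 14                          # PG&E's current annual revenue requirement, $B
print(f"Capital: ${capital_b:.0f}B  Annual: ${annual_b:.1f}B  "
      f"Rate increase: {annual_b / current_rr_b:.0%}")  # roughly two-thirds
```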

This raises two questions:

  1. Is this worth doing to protect properties in the affected wildland-urban interface (WUI)?
  2. Is there a less expensive option that can achieve the same objective?

On the first question, if we look at the assessed property value in the 15 counties most likely to be at risk (which include substantial amounts of land outside the WUI), the total assessed value is $462 billion. In other words, we would be spending 16% of the value of the property being protected on the distribution undergrounding alone. The annual revenue requirement would be the equivalent of adding about 2.0% of assessed value on top of the current 0.77% property tax rate, an increase of over 250%.

That brings us to the second question. If we assume that the load share is proportionate to the share of lines at risk, PG&E serves about 18,500 GWh in those areas. The equivalent cost per unit for undergrounding would be about $480 per MWh.

The average cost for a microgrid in California, based on a 2018 CEC study, is $3.5 million per megawatt. That translates to roughly $60 per MWh at a typical load factor. In other words, a microgrid could cost one-eighth as much as undergrounding. The total equivalent capital cost, compared to the undergrounding scenario, would be $13 billion. This translates to an 8% increase in PG&E rates.
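
A sketch of that comparison (the ~60% load factor and ~10% annualization of microgrid capital are my assumptions, chosen to roughly reproduce the figures above):

```python
# Undergrounding vs. microgrids for the at-risk load.
at_risk_mwh = 18_500 * 1_000               # 18,500 GWh served in at-risk areas

# Undergrounding: $9.1B/yr spread over that load
print(f"Undergrounding: ${9.1e9 / at_risk_mwh:.0f}/MWh")         # ~$490/MWh

# Microgrids: size the fleet to serve the same load at an assumed 60% load factor
load_factor = 0.60
mw_needed = at_risk_mwh / (8_760 * load_factor)                  # ~3,500 MW
capital_b = mw_needed * 3.5 / 1_000                              # $3.5M/MW -> ~$12-13B
annual_b = capital_b * 0.10                                      # assumed 10%/yr annualization
print(f"Microgrids: ${annual_b * 1e9 / at_risk_mwh:.0f}/MWh")    # ~$67/MWh
print(f"Rate increase: {annual_b / 14:.0%}")                     # ~9% on a $14B requirement
```

The exact figures move with the load factor and annualization assumptions, but the order-of-magnitude gap between the two options is robust.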

To what extent should we pursue undergrounding lines versus shifting to microgrid alternatives in the WUI areas? Should we encourage energy independence for these customers if they are on microgrids? How should we share these costs: should locals pay, or should they be spread over the entire customer base? And who should own these microgrids: PG&E, CCAs or local governments?

Commentary on CPUC Rate Design Workshop

The California Public Utilities Commission (CPUC) held a two-day workshop on rate design principles for commercial and industrial customers. To the extent possible, rates in California are designed to reflect the temporal changes in underlying costs: the “marginal costs” of power production and delivery.

Professor Severin Borenstein’s opening presentation didn’t discuss a very important aspect of marginal costs that we have ignored for too long in rate making: the issue of “putty/clay” differences, a question of temporal consistency in marginal cost calculation. The “putty” costs are the short-term costs of operating the existing infrastructure. The “clay” costs are those of adding infrastructure, which are longer-term costs. Sometimes the operational costs can substitute for infrastructure. However, we are now adding infrastructure (clay) in renewables that has negligible operating (putty) costs. The issue we now face is how to transition from putty costs to clay costs as the appropriate marginal cost signals.

Carl Linvill from the Regulatory Assistance Project (RAP) made a contrasting presentation that incorporated those differences in temporal perspectives for marginal costs.

Another issue, raised by Doug Ledbetter of Opterra, is that customers require certainty as well as expected returns to invest in energy-saving projects. We can give customers that certainty if the utilities vintage (grandfather) rates and/or rate structures as of the time the investment is made. Rates and structures for other customers can then vary and reflect the benefits created by the customers who invested.

Jamie Fine of EDF emphasized that rate design needs to focus on what is actionable by customers more than on the best reflection of underlying costs. As intervenor group representatives, we are constantly having this discussion with utilities. Often when we make a suggestion about easing customer acceptance, they say “we didn’t think of that,” but then just move along with their original plan. The rise of DERs and CCAs is in part a response to that tone-deaf approach by the incumbent utilities.

Repost: Lessons From 40 Years of Electricity Market Transformation: Storage Is Coming Faster Than You Think | Greentech Media

Five useful insights into where the electricity industry is headed.

Source: Lessons From 40 Years of Electricity Market Transformation: Storage Is Coming Faster Than You Think | Greentech Media

Study shows investment and reliability are disconnected

Lawrence Berkeley National Laboratory released a study on how utility investment in transmission and distribution compares to changes in reliability. LBNL found that outages are increasing in number and duration nationally, and that levels of investment are not well correlated with improved reliability.

We testified on behalf of the Agricultural Energy Consumers Association in both the SCE and PG&E General Rate Cases about how distribution investment is not justified by the available data. Both utilities asked for $2 billion to meet “growth,” yet both have seen falling demand since 2007. PG&E invested $360 million in its Cornerstone Improvement program, but a good question is: what was the cost-effectiveness of that improved reliability? Perhaps the new distribution resource planning exercise will redirect investment in a more rational way.

Is the Future of Electricity Generation Really Distributed?

Severin Borenstein at UC Energy Institute blogs about the push for distributed solar, perhaps at the expense of other cost-effective renewables development. My somewhat contrary comment on that is here: https://energyathaas.wordpress.com/2015/05/04/is-the-future-of-electricity-generation-really-distributed/#comment-8092

Will “optimal location” become the next “least-cost best-fit”?

At the CPUC’s first workshop on distribution planning, the buzzword that came up in almost every presentation was “optimal location.” But what does “optimal location” mean? From whose perspective? Over what time horizon? Who decides? The parties gave hints of where they stand, and they are probably far apart.

Paul De Martini gave an overview of the technical issues that the rulemaking can address, but as I discussed earlier, there is also a set of institutional matters that must be addressed. Public comment came back repeatedly to these questions: who should be allowed into the emerging market, and with what roles? And how will this OIR be integrated with the multitude of other planning proceedings at the CPUC? I’ll leave a discussion of those topics to another blog.

The more salient question is defining “optimal location.” I’m sure it sounded good to legislators when they passed AB 327, but as with many other undefined terms in the law, the devil is in the details. “Least-cost best-fit” for evaluating new generation resources similarly sounds like “mom and apple pie,” but it has become almost meaningless in application at the CPUC in the LTPP and RPS proceedings. Least-cost best-fit has led to frustration both for many developers of innovative or flexible renewables, such as solar thermal and geothermal, and for the utilities that want these resources.

SCE and SDG&E were quite clear about how they saw optimal locations being chosen: the utility distribution planners would centrally plan the best locations and tell customers. Exactly HOW they would communicate these choices was vague.

Many asked how project developers and customers might find those optimal locations in the utilities’ data. Jamie Fine of EDF may have had the best analogy. He said he now lives in a house that clearly needs a new paint job, so painters drop flyers on his doorstep and not on those of his neighbors whose paint is not peeling. Fine asked, “when will the utilities show us where the paint is peeling in their distribution systems?” His and others’ questions call out for a GIS tool that can be publicly viewed, maybe along the lines of the ICF tool recently presented.

I can think of a number of issues that will affect choices of optimal locations, many of them outside what a utility planner might consider. The theme of these choices is that the process becomes decentralized, made up of individual decisions just as in the rest of the U.S. marketplace:

  • Differences in distributed energy resource characteristics, e.g., solar vs. bioenergy;
  • Regional socio-economic characteristics to assess fairness and equity;
  • Amount of stranded investment affected;
  • The activities and energy uses both of the host site, neighboring co-users/generators, and surrounding environs;
  • Differences in valuation of reliability by different customers;
  • Interaction with local government plans, such as achieving climate action goals under SB 375;
  • Opportunities for new development compared to retrofitting or replacing existing infrastructure.

In such a complex world, the utilities won’t be able to make a set of locational decisions across their service territories simply because they won’t be able to comprehend this entire set of decision factors. It’s the unwieldy nature of complex economies that brings down central planning: great in theory, but unworkable in practice. The utilities can only provide a set of parameters that describe a subset of the optimal location decisions. State and local governments will provide another subset, businesses and developers yet another, and customers will likely be the final arbiters if the new electricity market is to thrive.

As a final note, opening up information about the distribution system (which the utilities have jealously guarded for decades) offers an opportunity to better target other programs as well, such as energy efficiency and the California Solar Initiative. Why should we waste money on air conditioning upgrades in San Francisco when they are much more needed in Bakersfield? The CPUC has an opportunity to step away from a moribund model in more than distribution planning if it pursues this to its natural conclusion.