Tag Archives: distribution planning

Microgrids could cost 10% of undergrounding PG&E’s wires

One proposed solution for reducing wildfire risk is for PG&E to put its grid underground. There are a number of problems with undergrounding, including increased maintenance costs, seismic and flooding risks, and excessive heat (including exploding underground vaults). But even ignoring those issues, the costs could be exorbitant, greater than anyone has really considered. An alternative is shifting rural service to microgrids. A high-level estimate shows that using microgrids instead could cost less than 10% of undergrounding the lines in at-risk regions. The CPUC is considering a policy shift to promote this type of solution and has opened a new rulemaking on promoting microgrids.

We can put this in context by estimating costs from PG&E’s data provided in its 2020 General Rate Case, and comparing that to its total revenue requirements. That will give us an estimate of the rate increase needed to fund this effort.

PG&E has about 107,000 miles of distribution voltage wires and 18,500 miles of transmission lines. PG&E listed 25,000 miles of distribution lines as being in wildfire risk zones. If the risk is proportionate for transmission, that adds another 4,300 miles. PG&E has estimated that it would cost $3 million per mile to underground distribution (ignoring the higher maintenance and replacement costs). Undergrounding transmission can cost as much as $80 million per mile; using estimates provided to the CAISO and picking the midpoint of the four-to-ten-times cost adder for undergrounding, $25 million per mile for transmission is reasonable. Based on these estimates it would cost $75 billion to underground distribution and $108 billion for transmission, for a total of $183 billion. Using PG&E’s current cost of capital, that translates into an annual revenue requirement of $9.1 billion.
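
A quick back-of-envelope script reproduces the arithmetic above. The ~5% annual carrying charge is inferred from the post's own figures (it is what makes $183 billion yield $9.1 billion per year), not a number from PG&E's filings:

```python
# Back-of-envelope reconstruction of the undergrounding cost estimate.
# All inputs are the figures cited in the text above.
DIST_MILES_AT_RISK = 25_000      # distribution miles in wildfire risk zones
TRANS_MILES_AT_RISK = 4_300      # transmission miles, assuming proportionate risk
DIST_COST_PER_MILE = 3e6         # $3M/mile to underground distribution
TRANS_COST_PER_MILE = 25e6       # $25M/mile, midpoint of the 4-10x CAISO adder

dist_capital = DIST_MILES_AT_RISK * DIST_COST_PER_MILE      # $75 billion
trans_capital = TRANS_MILES_AT_RISK * TRANS_COST_PER_MILE   # ~$108 billion
total_capital = dist_capital + trans_capital                # ~$183 billion

# The stated $9.1B annual revenue requirement implies a ~5% carrying charge.
CARRYING_CHARGE = 0.05
annual_rr = total_capital * CARRYING_CHARGE
print(f"capital: ${total_capital/1e9:.1f}B, annual revenue requirement: ${annual_rr/1e9:.1f}B")
```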

PG&E’s overall annual revenue requirement is currently about $14 billion, and PG&E has asked for increases that could add another $3 billion. Adding $9.1 billion would add about two-thirds to PG&E’s overall rates, which include both distribution and generation. It would roughly double distribution rates.
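
The rate impact follows directly, treating the $14 billion figure as the base (before the requested increases):

```python
# Rate impact of the undergrounding revenue requirement, using the post's figures.
current_rr = 14e9          # PG&E's current overall annual revenue requirement
undergrounding_rr = 9.1e9  # annual revenue requirement of full undergrounding
increase = undergrounding_rr / current_rr
print(f"overall rate increase: {increase:.0%}")  # ~65%, roughly two-thirds
```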

This raises two questions:

  1. Is this worth doing to protect properties in the affected wildland-urban interface (WUI)?
  2. Is there a less expensive option that can achieve the same objective?

On the first question, if we look at the assessed property value in the 15 counties most likely to be at risk (which include substantial amounts of land outside the WUI), the total assessed value is $462 billion. In other words, we would be spending 16% of the value of the property being protected on distribution undergrounding alone. The annual revenue requirement would increase the property-tax-equivalent burden by over 250%, adding about 2.0% of assessed value on top of the existing 0.77% rate.
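
The property-value comparison can be checked with the same figures. The effective-rate arithmetic is my reconstruction of the post's numbers: the 16% share uses the distribution-only capital cost, while the tax-burden comparison uses the full $9.1 billion annual revenue requirement:

```python
# Property-value comparison for the 15 at-risk counties (post's figures).
assessed_value = 462e9      # total assessed property value
dist_capital = 75e9         # distribution undergrounding capital (the 16% figure)
annual_rr = 9.1e9           # annual revenue requirement of full undergrounding
existing_tax_rate = 0.0077  # current effective property tax rate

share_of_value = dist_capital / assessed_value        # ~0.16 of the protected value
added_burden = annual_rr / assessed_value             # ~0.020 of assessed value per year
relative_increase = added_burden / existing_tax_rate  # ~2.6x, i.e. over 250%
```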

That brings us to the second question. If we assume that the load share is proportionate to the share of lines at risk, PG&E serves about 18,500 GWh in those areas. The equivalent cost per unit for undergrounding would be roughly $480 per MWh.

The average cost for a microgrid in California, based on a 2018 CEC study, is $3.5 million per megawatt. That translates to $60 per MWh for a typical load factor. In other words, a microgrid could cost one-eighth as much as undergrounding. The total equivalent cost compared to the undergrounding scenario would be $13 billion, which translates to an 8% increase in PG&E rates.
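
The per-MWh comparison can be sketched with the same carrying charge as above. The load factor here is my assumption, picked so the result matches the $60/MWh cited; the undergrounding figure comes out near $490/MWh, slightly above the post's rounded ~$480:

```python
# Per-MWh comparison of undergrounding vs. microgrids.
# The 5% carrying charge and 1/3 load factor are assumptions chosen to
# reproduce the rough figures in the text, not numbers from the CEC study.
annual_rr = 9.1e9                 # undergrounding annual revenue requirement
at_risk_load_mwh = 18_500_000     # 18,500 GWh served in at-risk areas
underground_per_mwh = annual_rr / at_risk_load_mwh        # ~$490/MWh

microgrid_capital_per_mw = 3.5e6  # 2018 CEC study average
CARRYING_CHARGE = 0.05            # same annual carrying charge as above
LOAD_FACTOR = 1 / 3               # assumed typical load factor
annual_mwh_per_mw = 8760 * LOAD_FACTOR
microgrid_per_mwh = microgrid_capital_per_mw * CARRYING_CHARGE / annual_mwh_per_mw  # ~$60/MWh

ratio = microgrid_per_mwh / underground_per_mwh           # roughly one-eighth
```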

To what extent should we pursue undergrounding lines versus shifting to microgrid alternatives in the WUI areas? Should we encourage energy independence for these customers if they are on microgrids? How should we share these costs–should locals pay, or should they be spread over the entire customer base? Who should own these microgrids: PG&E, CCAs, or local governments?

Commentary on CPUC Rate Design Workshop


The California Public Utilities Commission (CPUC) held a two-day workshop on rate design principles for commercial and industrial customers. To the extent possible, rates in California are designed to reflect the temporal changes in underlying costs–the “marginal costs” of power production and delivery.

Professor Severin Borenstein’s opening presentation didn’t discuss a very important aspect of marginal costs that we have too long ignored in ratemaking: the issue of “putty/clay” differences. This is a matter of temporal consistency in marginal cost calculation. The “putty” costs are the short-term costs of operating the existing infrastructure. The “clay” costs are those of adding infrastructure, which are longer-term costs. Sometimes operational costs can substitute for infrastructure. However, we are now adding infrastructure (clay) in renewables that have negligible operating (putty) costs. The issue we now face is how to transition from putty costs to clay costs as the appropriate marginal cost signals.

Carl Linvill from the Regulatory Assistance Project (RAP) made a contrasting presentation that incorporated those differences in temporal perspectives for marginal costs.

Another issue, raised by Doug Ledbetter of Opterra, is that customers require certainty as well as expected returns to invest in energy-saving projects. We can provide that certainty if the utilities vintage (grandfather) rates and/or rate structures at the time customers make their investments. Rates and structures for other customers can then vary to reflect the benefits created by those customers who invested.

Jamie Fine of EDF emphasized that rate design needs to focus on what is actionable by customers more than on the best reflection of underlying costs. As an intervenor group representative, we are constantly having this discussion with utilities. Often when we make a suggestion about easing customer acceptance, they say “we didn’t think of that,” but then just move along with their original plan. The rise of DERs and CCAs is in part a response to that tone-deaf approach by the incumbent utilities.

Repost: Lessons From 40 Years of Electricity Market Transformation: Storage Is Coming Faster Than You Think | Greentech Media

Five useful insights into where the electricity industry is headed.

Source: Lessons From 40 Years of Electricity Market Transformation: Storage Is Coming Faster Than You Think | Greentech Media

Study shows investment and reliability are disconnected

Lawrence Berkeley National Laboratory released a study on how utility investment in transmission and distribution compares to changes in reliability. LBNL found that outages are increasing in number and duration nationally, and that levels of investment are not well correlated with improved reliability.

We testified on behalf of the Agricultural Energy Consumers Association in both the SCE and PG&E General Rate Cases about how distribution investment is not justified by the available data. Both utilities asked for $2 billion to meet “growth,” yet both have seen falling demand since 2007. PG&E invested $360 million in its Cornerstone Improvement program, but a good question is: what was the cost-effectiveness of that improved reliability? Perhaps the new distribution resource planning exercise will redirect investment in a more rational way.

Is the Future of Electricity Generation Really Distributed?

Severin Borenstein at UC Energy Institute blogs about the push for distributed solar, perhaps at the expense of other cost-effective renewables development. My somewhat contrary comment on that is here: https://energyathaas.wordpress.com/2015/05/04/is-the-future-of-electricity-generation-really-distributed/#comment-8092

Will “optimal location” become the next “least-cost best-fit”?

At the CPUC’s first workshop on distribution planning, the buzzword that came up in almost every presentation was “optimal location.” But what does “optimal location” mean? From whose perspective? Over what time horizon? Who decides? The parties gave hints of where they stand, and they are probably far apart.

Paul De Martini gave an overview of the technical issues that the rulemaking can address, but as I discussed earlier, there’s a set of institutional matters that also must be addressed. Public comment came back repeatedly to these questions: who should be allowed into the emerging market, and in what roles, and how will this OIR be integrated with the multitude of other planning proceedings at the CPUC? I’ll leave a discussion of those topics to another blog.

The more salient question is defining “optimal location.” I’m sure that it sounded good to legislators when they passed AB 327, but as with many other undefined terms in the law, the devil is in the details. “Least-cost best-fit” for evaluating new generation resources similarly sounds like “mom and apple pie,” but it has become almost meaningless in application at the CPUC in the LTPP and RPS proceedings. Least-cost best-fit has just led to frustration both for many developers of innovative or flexible renewables, such as solar thermal and geothermal, and for the utilities that want these resources.

SCE and SDG&E were quite clear about how they saw “optimal location” being chosen: the utility distribution planners would centrally plan the best locations and tell customers. Exactly HOW they would communicate those choices was vague.

Many asked how project developers and customers might find those optimal locations in the utilities’ data. Jamie Fine of EDF might have had the best analogy: he said he now lives in a house that clearly needs a new paint job, so painters drop flyers on his doorstep and not on his neighbors whose paint is not peeling. Fine asked, “when will the utilities show us where the paint is peeling in their distribution systems?” His and others’ questions call out for a GIS tool that can be publicly viewed, maybe along the lines of the ICF tool recently presented.

I can think of a number of issues that will affect choices of optimal locations, many of them outside what a utility planner might consider. The theme of these choices is that the process becomes decentralized, made up of individual decisions, just as in the rest of the U.S. marketplace:

  • Differences in distributed energy resource characteristics, e.g., solar vs. bioenergy;
  • Regional socio-economic characteristics to assess fairness and equity;
  • The amount of stranded investment affected;
  • The activities and energy uses of the host site, neighboring co-users/generators, and surrounding environs;
  • Differences in the valuation of reliability by different customers;
  • Interaction with local government plans, such as achieving climate action goals under SB 375; and
  • Opportunities for new development compared to retrofitting or replacing existing infrastructure.

In such a complex world, the utilities won’t be able to make a set of locational decisions across their service territories simply because they won’t be able to comprehend this entire set of decision factors. It’s the unwieldy nature of complex economies that brings down central planning–it’s great in theory, but unworkable in practice. The utilities can only provide a set of parameters that describe a subset of the optimal location decisions. State and local governments will provide another subset, businesses and developers yet another, and customers will likely be the final arbiters if the new electricity market is to thrive.

As a final note, opening up information about the distribution system (which the utilities have jealously guarded for decades) offers an opportunity to better target other programs as well, such as energy efficiency and the California Solar Initiative. Why should we waste money on air conditioning upgrades in San Francisco when they are much more needed in Bakersfield? The CPUC has an opportunity to step away from a moribund model in more than distribution planning if it pursues this to its natural conclusion.