Tag Archives: reliability

In the LA Times – looking for alternative solutions to storm outages

I was interviewed by a Los Angeles Times reporter about the recent power outages in Northern California as a result of the wave of storms. Our power went out for 48 hours New Year’s Eve and again for 12 hours the next weekend:

After three days without power during this latest storm series, Davis resident Richard McCann said he’s seriously considering implementing his own microgrid so he doesn’t have to rely on PG&E.

“I’ve been thinking about it,” he said. McCann, whose work focuses on power sector analysis, said his home lost power for about 48 hours beginning New Year’s Eve, then lost it again after Saturday for about 12 hours.

While the storms were severe across the state, McCann said Davis did not see unprecedented winds or flooding, adding to his concerns about the grid’s reliability.

He said he would like to see California’s utilities “distributing the system, so people can be more independent.”

“I think that’s probably a better solution rather than trying to build up stronger and stronger walls around a centralized grid,” McCann said.

Several others were quoted in the article offering microgrids as a solution to the ongoing challenge.

Widespread outages occurred in Woodland and Stockton as well, even though the winds were not exceptionally strong by recent experience. Given the widespread outages two years ago and the three “blue sky” multi-hour outages we had in 2022 (and none during the September heat storm, when 5,000 Davis customers lost power), I’m doubtful that PG&E is ready for what’s coming with climate change.

PG&E instead is proposing to invest up to $40 billion over the next eight years to protect service reliability for 4% of its customers by undergrounding wires in the foothills, which will raise our rates by up to 70% by 2030! A cost-effective alternative that would cost 80% to 95% less is sitting before the Public Utilities Commission, but it is unlikely to be approved. There’s another opportunity to head off PG&E and send some of that money toward fixing our local grid coming up this summer under a new state law.

While winds have been strong, they have not been in the 99th-percentile-plus range of experience that should lead to multiple catastrophic outcomes in short order. And having two major events within a week, plus the outage in December 2020, shows that these are not statistically unusual. We experienced similarly fierce winds in the past without such extended outages; prior to 2020, Davis experienced only two extended outages in the previous two decades, in 1998 and 2007. Clearly the lack of maintenance on an aging system has caught up with PG&E. PG&E should reimagine its rural undergrounding program to mitigate wildfire risk by using microgrids instead. That would free up most of the billions it plans to spend on less than 4% of its customer base to instead harden its urban grid.

Outages highlight the need for a fundamental revision of grid planning

The salience of outages due to distribution problems, such as those that occurred with record heat in the Pacific Northwest and during California’s public safety power shutoffs (PSPS), highlights the need for a change in perspective on addressing reliability. In California, customers are 15 times more likely to experience an outage due to distribution issues than due to generation (really, transmission outages, since August 2020 was the first time California experienced a true generation shortage requiring imposed rolling blackouts; the withholding in 2001 doesn’t count). Even the widespread blackouts in Texas in February 2021 are attributable in large part to problems beyond just a generation shortage.

Yet policymakers and stakeholders focus almost solely on increasing reserve margins to improve reliability. If we instead looked at the most comprehensive means of improving reliability in the manner that matters to customers, we’d probably find that distributed energy resources (DERs) are a much better fit. To the extent that DERs can relieve distribution-level loads, we gain at both levels, not just at the system level with added bulk generation.

This approach first requires a change in how resource adequacy is defined and modeled, looking from the perspective of the customer meter. It will require a more extensive analysis of distribution circuits and of the ability of individual circuits to island and self-supply during stressful conditions. It also requires a better assessment of the conditions that lead to local outages. Increased resource diversity should also improve the probability of availability. Current modeling of the benefits of regions leaning on each other depends on largely deterministic assumptions about resource availability; instead we should be using probability distributions for resources and loads to assess overlapping conditions. An important aspect of reliability is that one hundred 10 MW generators, each with a 10% probability of outage, provide much more reliability than a single 1,000 MW generator with the same 10% outage rate, thanks to diversity. This fact is generally ignored in setting the reserve margins for resource adequacy.
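The diversity effect is easy to verify with a binomial calculation. A minimal sketch, where the unit sizes, 10% outage rate, and the 800 MW demand level are illustrative assumptions (not figures from any resource adequacy filing), and units are assumed to fail independently:

```python
from math import comb

def loss_of_load_prob(n_units, unit_mw, outage_rate, demand_mw):
    """Probability that available capacity falls short of demand,
    given n_units identical units that fail independently."""
    needed = -(-demand_mw // unit_mw)  # units required (ceiling division)
    # Sum binomial probabilities of fewer than `needed` units being available
    return sum(
        comb(n_units, k) * (1 - outage_rate) ** k * outage_rate ** (n_units - k)
        for k in range(int(needed))
    )

# One 1,000 MW unit vs. one hundred 10 MW units, each with a 10% outage rate,
# serving an illustrative 800 MW of demand
single = loss_of_load_prob(1, 1000, 0.10, 800)   # falls short 10% of the time
fleet = loss_of_load_prob(100, 10, 0.10, 800)    # falls short well under 1% of the time
```

The fleet needs 80 of its 100 units available, and with an expected 90 available the shortfall probability is orders of magnitude below the single unit's 10%, which is the point the paragraph makes about diversity.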

We also should consider shifting resource investment from bulk generation (and storage) where it has a much smaller impact on individual customer reliability to lower voltage distribution. Microgrids are an example of an alternative that better focuses on solving the real problem. Let’s start a fundamental reconsideration of our electric grid investment plan.

ERCOT has the peak period scarcity price too high

The freeze and resulting rolling outages in Texas in February highlighted the unique structure of the power market there. Customers and businesses were left with huge bills that have little to do with actual generation expenses. This is a consequence of Texas’s attempt to fit an arcane interpretation of an economic principle under which generators should be able to recover their investments from sales in just a few hours of the year. The problem is that the basic accounting for those cash flows does not match the true value of the power in those hours.

The Electric Reliability Council of Texas (ERCOT) runs an unusual wholesale electricity market that supposedly relies solely on hourly energy prices to provide the incentives for new generation investment. In practice, however, ERCOT uses the same type of administratively set subsidies to create enough potential revenue to cover investment costs. Further, a closer examination reveals that this price adder is set too high relative to actual consumer value for peak-load power. All of this leads to the conclusion that relying solely on short-run hourly prices as a proxy for the market value that accrues to new entrants is a misplaced metric.

The ERCOT market relies first on side payments to cover commitment costs (which creates barriers to entry, but that’s a separate issue) and second on transferring consumer value through the Operating Reserve Demand Curve (ORDC), which uses a fixed value of lost load (VOLL) in an arbitrary manner to create “opportunity costs” (more on that definition at a later time) so the market can earn sufficient scarcity rents. This second price adder is at the core of ERCOT’s incentive system; energy prices alone are insufficient to support new generation investment. Yet ERCOT has ignored basic economics and set this value too high relative to both the alternatives available to consumers and basic regional budget constraints.

I started with an estimate of the number of hours during which prices need the ORDC to be at the full VOLL of $9,000/MWH for a combustion turbine (CT) investment to recover its annual revenue requirement, based on the parameters we collected for the California Energy Commission. It turns out to be about 20 to 30 hours per year. Even if the cost in Texas is 30% less, this is still more than 15 hours annually, every single year on average. (That has not been happening in Texas to date.) Note that for other independent system operators (ISOs), such as the California ISO (CAISO), the price cap is $1,000 to $2,000/MWH.
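The arithmetic behind that hours estimate is simple: the scarcity margin earned in each full-VOLL hour must accumulate to the turbine's annual fixed cost. A minimal sketch, where the $200/kW-year fixed cost and $50/MWh running cost are my illustrative stand-ins, not the actual CEC parameters the post refers to:

```python
def hours_at_voll_needed(fixed_cost_per_kw_yr, voll_per_mwh, marginal_cost_per_mwh):
    """Hours per year at full VOLL a peaker needs to recover its annual fixed cost."""
    annual_fixed_per_mw = fixed_cost_per_kw_yr * 1000.0        # $/kW-yr -> $/MW-yr
    scarcity_margin = voll_per_mwh - marginal_cost_per_mwh     # $ earned per MWh in a VOLL hour
    return annual_fixed_per_mw / scarcity_margin

# Illustrative CT: $200/kW-yr fixed cost, $9,000/MWH VOLL, $50/MWh running cost
hours = hours_at_voll_needed(200.0, 9000.0, 50.0)  # roughly 22 hours per year
```

With these stand-in numbers the answer lands in the 20-to-30-hour range the post cites; a 30% lower fixed cost scales the hours down proportionally.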

I then calculated the cost to a customer of instead using a home generator to meet load during those hours, assuming a generator life of 10 to 20 years. That cost should set a cap on the VOLL for residential customers, as it is their opportunity cost. The average unit costs about $200/kW and an expensive one about $500/kW. The resulting cost ranges from $3 to $5 per kWh, or $3,000 to $5,000/MWH. (If storage becomes more prevalent, this cost will drop significantly.) And that’s for customers who care about periodic outages; most just ride out a distribution system outage of a few hours with no backup. (Of course, if I experienced 20 hours a year of outages, I would get a generator too.) This calculation also ignores the added value of using the generator during other distribution system outages, such as those created by the hurricanes that hit Texas every few years. That drives down this cost even further, making the $9,000/MWH ORDC adder appear even more distorted.
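The generator ceiling on VOLL works the same way: spread the capital cost over the hours the unit actually covers. A sketch, where the lifetime-hours figures are my assumptions chosen to span the post's $3,000 to $5,000/MWH range (the post does not state an exact utilization, and fuel cost is ignored for simplicity):

```python
def backup_cost_per_mwh(capital_per_kw, lifetime_hours_of_use):
    """Capital cost of a home generator spread over its lifetime hours of use, in $/MWh."""
    return capital_per_kw * 1000.0 / lifetime_hours_of_use  # $/kW -> $/MW, then per hour

# Assumed utilizations: an average unit run more, an expensive unit run less
low_end = backup_cost_per_mwh(200.0, 65.0)     # roughly $3,000/MWh
high_end = backup_cost_per_mwh(500.0, 100.0)   # $5,000/MWh
```

The more hours the generator serves (for example, during hurricane outages), the lower this per-MWh opportunity cost falls, which is why the post says extra use makes the $9,000/MWH adder look even more distorted.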

The second calculation I did was to look at the cost of an extended outage. I used the outages during Hurricane Harvey in 2017 as a useful benchmark event. Based on ERCOT and U.S. Energy Information Administration reports, it looks like 1.67 million customers were without power for 4.5 days. Using the Texas gross state product (GSP) of $1.9 trillion as reported by the St. Louis Federal Reserve Bank, I calculated the economic value lost over those 4.5 days at $1.5 billion. If we assume that the electricity outage is 100% responsible for that loss, the lost economic value is just under $5,000/MWH. This represents the budget constraint on willingness to pay to avoid an outage. In other words, the Texas economy can’t afford to pay $9,000/MWH.
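That per-MWh figure can be reproduced from the post's numbers plus one assumption about average load. A sketch, where the 1.7 kW average load per affected customer (including each customer's share of commercial and industrial load) is my assumption; the $1.5 billion loss, 1.67 million customers, and 4.5-day duration come from the post:

```python
# Figures from the post
lost_value_usd = 1.5e9      # economic value lost over the outage
customers_out = 1.67e6      # customers without power
outage_hours = 4.5 * 24     # 4.5 days

# Assumption: average load per affected customer, in MW (1.7 kW, a round number)
avg_load_mw_per_customer = 1.7 / 1000.0

unserved_mwh = customers_out * avg_load_mw_per_customer * outage_hours
value_per_mwh = lost_value_usd / unserved_mwh  # just under $5,000/MWh
```

The result is sensitive to the assumed average load, but any plausible value keeps the implied budget constraint well below the $9,000/MWH ORDC cap.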

The recent set of rolling blackouts in Texas provides another opportunity to update this budget constraint calculation in a different circumstance. This can be done by determining the reduction in electricity sales and the decrease in state gross product in the period.

Using two independent methods, I come up with an upper bound of $5,000/MWH, and likely much less. One commentator pointed out that ERCOT would not be able to achieve a sufficient planning reserve level at this price, but that statement rests on the premises that short-run hourly prices reflect full market values and that they will deliver the “optimal” resource mix. Neither is true.

This type of hourly pricing overemphasizes peak load reliability value and undervalues other attributes such as sustainability and resilience. These prices do not reflect the full incremental cost of adding new resources that deliver additional benefits during non-peak periods such as green energy, nor the true opportunity cost that is exercised when a generator is interconnected rather than during later operations. Texas has overbuilt its fossil-fueled generation thanks to this paradigm. It needs an external market based on long-run incremental costs to achieve the necessary environmental goals.

PG&E fails to provide safety support in Davis

This article on a local webnews site, the Davis Vanguard, describes how PG&E was slow to respond and has since failed to communicate with homeowners about subsequent measures to be taken. Note that in this case, the power lines run down an easement through the backyards of these houses. 

Chasing gold at the end of the rainbow: how reliance on hourly markets doesn’t spur generation investment

[Figure: ERCOT December 2018 CDR report graphic]

Commentators have touted the Texas ERCOT market as the epitome of how a fully functioning hourly electricity market can deliver the economic signals needed to spur investment in new capacity. They further assert that this type of market can be technology neutral in what type of investment is made. The Federal Energy Regulatory Commission (FERC) largely adopted this position more than two decades ago when it initiated restructuring that led to the creation of these hourly markets, including the California Independent System Operator (CAISO). And FERC continues to take that stance, although it has allowed for short term capacity markets to backfill for reliability needs.

But now we hear that the Texas market is falling short in incenting new capacity investment. ERCOT, which manages the Texas grid, projects near-term risks and a growing shortfall at least through 2024. At issue is the fact that waiting around for a gambler’s chance at price-spike revenues doesn’t make a strong case for financing capital-intensive generation, particularly if one’s own investment is likely to make those price spikes disappear. It’s like chasing the gold at the end of the rainbow!

This is another sign that hourly markets are not reliable indicators of market value, contrary to the view of their proponents. The lumpiness and long life of generation investment, the way new generation undermines the apparent value in the market, and the lack of political tolerance for failures in reliability or in meeting environmental targets all require a much more holistic view of market value for these investments. The value of hedging risk, providing cost stability, improving reliability and resilience, and reducing overall portfolio costs all need to be incorporated into a full valuation process.

Repost: A catalog of studies on whether renewables create grid instability | Greentech Media

GTM compiles the studies done over the last month in anticipation of the release of the study ordered by Energy Secretary Rick Perry to examine how increased renewable energy threatens grid reliability.

Source: The Rising Tide of Evidence Against Blaming Wind and Solar for Grid Instability | Greentech Media

Study shows investment and reliability are disconnected

Lawrence Berkeley National Laboratory released a study on how utility investment in transmission and distribution compares to changes in reliability. LBNL found that outages are increasing in number and duration nationally, and that levels of investment are not well correlated with improved reliability.

We testified on behalf of the Agricultural Energy Consumers Association in both the SCE and PG&E General Rate Cases about how distribution investment is not justified by the available data. Both utilities asked for $2 billion to meet “growth,” yet both have seen falling demand since 2007. PG&E invested $360 million in its Cornerstone Improvement program, but a good question is: what is the cost-effectiveness of that improved reliability? Perhaps the new distribution resource planning exercise will redirect investment in a more rational way.

Questions yet to be answered from the CAISO Symposium

While attending the CAISO Stakeholder Symposium last week I had a rush of questions, not all interconnected, about how we manage the transition to the new energy future. I saw two very different views of how the grid might be managed: how will this be resolved? How do we consider path dependence in choosing supporting and “bridge” resources? How do we provide differential reliability to customers? How do we allow utilities to invest beyond the meter?

Jesse Knight, former CPUC Commissioner and now chairman at SDG&E and SCG, described energy utilities as the “last monopoly” in the face of a rapidly changing economic landscape. (Water utilities may have something to say about that.) SDG&E is ahead of the other utilities in recognizing the rise of the decentralized peer-to-peer economy. On the other hand, Clark Gellings from EPRI described a world in which the transmission operator would have to see “millions” of nodes, both loads and small generators, to operate a robust network. This view is consistent with the continued central management implied by the utility distribution planners at the CPUC’s distribution planning OIR workshop. At the end of the symposium, three of the four panelists said that the electricity system would be unrecognizable to Thomas Edison. I wonder if Alexander Graham Bell would recognize our telecommunications system?

One question posed to the first “townhall” panel asked what role natural gas would have in the transition to more renewables. I am not aware of any studies conducted on whether and how choices about generation technology today commits us to decisions in the future. Path dependence is an oft overlooked aspect of planning. We can’t make decisions independent of how we chose in the past. That’s why it’s so difficult to move away from fossil fuel dependence now–we committed to it decades ago. We shouldn’t ignore path dependence going forward. Building gas plants now may commit us to using gas for decades until the financial investments are recovered. We may be able to buy our way out through stranded asset payments, but we learned once before that wasn’t a particularly attractive approach. Using forethought and incorporating flexibility requires careful planning.

And along those lines in our breakout session, another question was posed about how to resolve the looming threat of “overgeneration” from renewables, particularly solar. Much of the problem might be resolved by charging EVs during the day, but it’s unlikely that a sizable number of plug-in hybrids and BEVs will be on the road before the mid-2020s. So the question becomes: should we invest in gas-fired generation, batteries, or pumped storage, all of which have 20- to 30-year economic lives, or try to find shorter-lived transitional options, including curtailment contracts or demand response technologies, until EVs arrive on the scene? It might even be cost-effective to subsidize accelerated adoption of EVs so as to avoid long-lived investments that may become prematurely obsolete.

Pricing for differential reliability among customers also came up. Often ignored in the reliability debate at the CAISO is that the vast majority of outages occur at the distribution level. We appear to be overinvested in transmission and generation reliability at the expense of maintaining the integrity of the local grid. We could have system reliability prices that reflect the costs of providing flexible service to follow on-site renewable generation. However, the utilities already recover most of the capital costs of providing those services through rate-of-return regulation. Market prices are suppressed (as they are in the real-time market, where the IOUs dump excess power), so we can’t expect to see good price signals yet.

Several people talked about partnerships with the utilities to invest in equipment beyond the meter. But will a utility be willing to facilitate such investments if they degrade the value of its current investment in the grid? Fiduciary responsibility under traditional return-on-capital regulation says yes only if the cost of the new technology is high enough to generate higher returns than the current grid investment. That doesn’t sound like a promising recipe for a new energy future. Instead, we need to come up with creative means for utility shareholders to participate in the new marketplace without forcing them down the old regulatory path.

Margaret Jolly from ConEd noted that the stakeholders were holding conversations on the new future but “the customer was not in the room.” Community, political and business leaders who know how electricity is used were not highly evident, and certainly didn’t make any significant statements. I’ve written before about offering more rate options to customers. I wanted to hear more from Ellen Struck about the Pecan Street study, but her comments focused on the industry situation, not customers’ behaviors and choices.