Tag Archives: DER

In the LA Times – looking for alternative solutions to storm outages

I was interviewed by a Los Angeles Times reporter about the recent power outages in Northern California as a result of the wave of storms. Our power went out for 48 hours on New Year’s Eve and again for 12 hours the next weekend:

After three days without power during this latest storm series, Davis resident Richard McCann said he’s seriously considering implementing his own microgrid so he doesn’t have to rely on PG&E.

“I’ve been thinking about it,” he said. McCann, whose work focuses on power sector analysis, said his home lost power for about 48 hours beginning New Year’s Eve, then lost it again after Saturday for about 12 hours.

While the storms were severe across the state, McCann said Davis did not see unprecedented winds or flooding, adding to his concerns about the grid’s reliability.

He said he would like to see California’s utilities “distributing the system, so people can be more independent.”

“I think that’s probably a better solution rather than trying to build up stronger and stronger walls around a centralized grid,” McCann said.

Several others were quoted in the article offering microgrids as a solution to the ongoing challenge.

Widespread outages occurred in Woodland and Stockton even though winds were not exceptionally strong relative to recent experience. Given the widespread outages two years ago and the three “blue sky” multi-hour outages we had in 2022 (not counting the September heat storm, when 5,000 Davis customers lost power), I’m doubtful that PG&E is ready for what’s coming with climate change.

PG&E instead is proposing to invest up to $40 billion over the next eight years to protect service reliability for 4% of its customers by undergrounding wires in the foothills, which will raise our rates by up to 70% by 2030! A cost-effective alternative that would cost 80% to 95% less is sitting before the Public Utilities Commission but is unlikely to be approved. Another opportunity to head off PG&E and redirect some of that money toward fixing our local grid comes up this summer under a new state law.

While winds have been strong, they have not been in the 99%+ range of experience that should lead to multiple catastrophic outcomes in short order. And having two major events within a week, plus the outage in December 2020, shows that these are not statistically unusual conditions. We experienced similarly fierce winds in the past without such extended outages: prior to 2020, Davis experienced only two extended outages in the previous two decades, in 1998 and 2007. Clearly the lack of maintenance on an aging system has caught up with PG&E. PG&E should reimagine its rural undergrounding program for mitigating wildfire risk to use microgrids instead. That would free up most of the billions it plans to spend on less than 4% of its customer base to instead harden its urban grid.

The real lessons from California’s 2000-01 electricity crisis and what they mean for today’s markets

The recent reliability crises in the California and Texas electricity markets ask us to reconsider the supposed lessons from the most significant extended market crisis to date: the 2000-01 California electricity crisis. I wrote a paper two decades ago, The Perfect Mess, that described the circumstances leading up to the event. Two other common threads run through the supposed lessons, but I accept neither as a true solution; both are really about sharing risk once this type of crisis ensues rather than about preventing similar market malfunctions. Instead, the real lesson is that load serving entities (LSEs) must be able to sign long-term agreements that are unaffected and unfettered, directly or indirectly, by variations in daily and hourly markets, so as to eliminate incentives to manipulate those markets.

The first and most popular explanation among many economists is that consumers did not see the swings in the wholesale generation prices in the California Power Exchange (PX) and California Independent System Operator (CAISO) markets. In this rationale, if consumers had seen the large increases in costs, as much as 10-fold over the pre-crisis average, they would have reduced their usage enough to limit the gains from manipulating prices. Consumers should have shouldered the risks in the markets in this view and their cumulative creditworthiness could have ridden out the extended event.

This view is not valid for several reasons. The first and most important is that the compensation to utilities for stranded asset investments was predicated on calculating the difference between a fixed retail rate and the utilities’ cost of service for transmission and distribution plus the wholesale cost of power in the PX and CAISO markets. Until May 2000, that difference was always positive, and the utilities were well on the way to collecting their Competition Transition Charge (CTC) in full before the end of the transition period on March 31, 2002. The deal was that if the utilities were going to collect their stranded investments, then consumers’ rates would be protected for that period. The risk of stranded asset recovery was entirely the utilities’, and both the California Public Utilities Commission in its string of decisions and the State Legislature in Assembly Bill 1890 were very clear about this assignment.
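The CTC mechanism described above reduces to simple arithmetic: the charge collected each period was the headroom between the frozen retail rate and the utility's other costs. A minimal sketch, using hypothetical numbers (the rates shown are illustrative, not the actual 1998-2001 tariff values):

```python
def ctc_headroom_cents(retail_rate, td_cost, wholesale_price):
    """CTC collected per kWh: the frozen retail rate minus the utility's
    T&D cost of service and the PX/CAISO wholesale power price.
    A negative result means the utility under-collects (the crisis case)."""
    return retail_rate - (td_cost + wholesale_price)

# Pre-crisis: cheap wholesale power leaves positive headroom to amortize
# stranded assets (all numbers hypothetical, in cents/kWh).
pre_crisis = ctc_headroom_cents(retail_rate=10.0, td_cost=4.0, wholesale_price=3.0)

# Crisis: a roughly 10-fold wholesale price swamps the frozen retail rate.
crisis = ctc_headroom_cents(retail_rate=10.0, td_cost=4.0, wholesale_price=30.0)
```

The sign flip, not the specific magnitudes, is the point: once wholesale prices exceeded the headroom, the same formula that had been paying down the CTC began generating the under-collection that drained the utilities.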

The utilities had chosen to support this approach linking asset value to ongoing short term market valuation over an upfront separation payment proposed by Commissioner Jesse Knight. The upfront payment would have enabled linking power cost variations to retail rates at the outset, but the utilities would have to accept the risk of uncertain forecasts about true market values. Instead, the utilities wanted to transfer the valuation risk to ratepayers, and in return ratepayers capped their risk at the current retail rates as of 1996. Retail customers were to be protected from undue wholesale market risk and the utilities took on that responsibility. The utilities walked into this deal willingly and as fully informed as any party.

As the transition period progressed, the utilities transferred their collected CTC revenues to their respective holding companies to be disbursed to shareholders instead of prudently retaining them as reserves until the end of the transition period. When the crisis erupted, the utilities quickly drained what cash they had left and had to go to the credit markets. In fact, if they had retained the CTC cash, they would not have had to go to the credit markets until January 2001, based on the accounts that I was tracking at the time, and PG&E would not have had a basis for declaring bankruptcy.

The CTC left the market wide open to manipulation, and it is unlikely that any simple changes in the PX or CAISO markets could have prevented this. I conducted an analysis for the CPUC in May 2000, as part of its review of Pacific Gas & Electric’s proposed divestiture of its hydro system, based on a method developed by Catherine Wolfram in 1997. The finding was that a firm owning as little as 1,500 MW (which included most merchant generators at the time) could profitably gain from price manipulation for at least 2,700 hours in a year. The only market-based solution was for LSEs, including the utilities, to sign longer-term power purchase agreements (PPAs) for a significant portion (but not 100%) of the generators’ portfolios. (Jim Sweeney briefly alludes to this solution before launching into his preferred linkage of retail rates and generation costs.)
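The logic of that kind of screen can be sketched as an hour-by-hour comparison: withholding is profitable whenever the higher price earned on a firm's remaining output outweighs the revenue lost on the withheld capacity. This is only a stylized illustration with made-up numbers, not the actual Wolfram method or my CPUC analysis:

```python
def withholding_profitable(owned_mw, withheld_mw, base_price, spiked_price):
    """Return True if selling less capacity at the manipulated price beats
    selling full capacity at the competitive price (single-hour test)."""
    competitive_revenue = owned_mw * base_price
    manipulated_revenue = (owned_mw - withheld_mw) * spiked_price
    return manipulated_revenue > competitive_revenue

# A 1,500 MW owner withholding 300 MW needs the price to rise by more than
# 1500/1200 = 25% to profit; a $30 -> $40/MWh move clears that bar.
gain_case = withholding_profitable(1500, 300, base_price=30.0, spiked_price=40.0)
loss_case = withholding_profitable(1500, 300, base_price=30.0, spiked_price=35.0)
```

Running this test over every hour of a year's price data is what yields a count like the 2,700 profitable hours cited above.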

Unfortunately, State Senator Steve Peace introduced a budget trailer bill in June 2000 (as Public Utilities Code Section 355.1, since repealed) that forced the utilities to sign PPAs only through the PX which the utilities viewed as too limited and no PPAs were consummated. The utilities remained fully exposed until the California Department of Water Resources took over procurement in January 2001.

The second problem was a combination of unavailable technology and billing systems. Customers did not yet have smart meters, and paper bills could lag as much as two months after initial usage. There was no real way for customers to respond in near real time to high generation market prices (even assuming that they would have been paying attention to such an obscure market). And as we saw in Texas during Winter Storm Uri in 2021, the only available consumer response for too many was to freeze to death.

This proposed solution is really about shifting risk from utility shareholders to ratepayers, not a realistic market solution. But as discussed above, at the core of the restructuring deal was a sharing of risk between customers and shareholders–a deal that shareholders failed to keep when they transferred all of the cash out of their utility subsidiaries. If ratepayers are going to take on the entire risk (as keeps coming up), then either the authorized return should be set at the corporate bond debt rate or the utilities should simply be publicly owned.

The second explanation of why the market imploded was that decentralization created a lack of coordination in providing enough resources. In this view, the CDWR rescue in 2001 righted the ship, but the exodus of the community choice aggregators (CCAs) now threatens system integrity again. The CPUC’s preferred solution is now to reconcentrate power procurement and management with the IOUs, thus killing the remnants of restructuring and markets.

The problem is that the current construct of the PCIA exit fee similarly leaves the market open to potential manipulation. And we’ve seen how virtually unfettered procurement between 2001 and the emergence of the CCAs resulted in substantial excess costs.

The real lessons from the California energy crisis are twofold:

  • Any stranded asset recovery must be done as a single or fixed payment based on the market value of the assets at the moment of market formation. Any other method leaves market participants open to price manipulation. This lesson should be applied in the case of the exit fees paid by CCAs and customers using distributed energy resources. It is the only way to fairly allocate risks between customers and shareholders.
  • LSEs must be unencumbered in signing longer-term PPAs, but they also should be limited ahead of time in their ability to recover stranded costs so that they have significant incentives to procure resources prudently. California’s utilities still lack this incentive.

Understanding core facts before moving forward with NEM reform

There is a general understanding among the most informed participants and observers that California’s net energy metering (NEM) tariff as originally conceived was not intended to be a permanent fixture. The objective of the NEM rate was to get a nascent renewable energy industry off the ground, and California now has more than 11,000 megawatts of distributed solar generation. The distributed energy resources industry now needs far less subsidy, but its full value also must be recognized. To this end it is important to understand some key facts that are sometimes overlooked in the debate.

The true underlying reason for high rates–rising utility revenue requirements

In California, retail electricity rates are so high for two reasons: first, stranded generation costs, and second, a set of “public goods charges” that constitute close to half of the distribution cost. PG&E’s rates have risen 57% since 2009. Many, if not most, NEM customers have installed solar panels in part to avoid these rising rates. When NEM 1.0 and 2.0 were adopted, the cost of the renewable power purchase agreement (PPA) portfolios was well over $100/MWh (even $120/MWh through 2019), and adding in the other T&D costs, this approached the average system rate as late as 2019 for SCE and PG&E before their downward trends reversed course. That the retail rate skyrocketed while renewable PPA prices fell dramatically is a subsequent development that too many people have forgotten.

California uses Ramsey pricing principles to allocate these costs (the CPUC applies “equal percent of marginal cost,” or EPMC, as a derivative measure), but Ramsey pricing was conceived for one-way pricing. I don’t know what Harold Hotelling would think of using his late student’s work for two-way transactions. This is probably the fundamental problem in NEM rates: the stranded and public goods costs are incurred by one party on one side of the ledger (the utility), but the other party (the NEM customer) doesn’t have these same cost categories on the other side of the ledger; NEM customers may have their own costs, but those don’t fall into the same categories. So the issue is how to set two-way rates given the odd relationships of these costs and between utilities and ratepayers.
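For readers unfamiliar with EPMC, the mechanics can be sketched quickly: each rate component's marginal cost is scaled by a single uniform factor so that total billed revenue meets the revenue requirement. The numbers here are hypothetical, not CPUC figures:

```python
def epmc_rates(marginal_costs, volumes, revenue_requirement):
    """Scale every marginal cost by the same factor ("equal percent of
    marginal cost") so revenue at the resulting rates hits the requirement."""
    mc_revenue = sum(mc * v for mc, v in zip(marginal_costs, volumes))
    scale = revenue_requirement / mc_revenue
    return [mc * scale for mc in marginal_costs]

# Two rate components with marginal costs of 8 and 12 cents/kWh, equal
# volumes, and a revenue requirement 50% above marginal-cost revenue:
rates = epmc_rates([8.0, 12.0], [100.0, 100.0], revenue_requirement=3000.0)
```

The uniform scale factor is what makes the allocation "equal percent," and every markup above marginal cost that it produces rides on the one-way consumption assumption questioned above.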

This situation argues for setting aside the stranded costs and public goods charges to be paid for in some manner other than electric rates. The answer can’t take the form of shifting consumption charges into a large access charge (e.g., a customer charge), because customers will simply leave entirely when half of their current bill is rolled into the new access charge.

The largest nonbypassable charge (NBC), now delineated for all customers, is the power cost indifference adjustment (PCIA). The PCIA is the stranded generation asset charge for the portfolio composed of utility-scale generation, most of which is power purchase agreements (PPAs) signed within the last decade. For PG&E in 2021, according to its 2020 General Rate Case workpapers, this exceeded 4 cents per kilowatt-hour.

Basic facts about the grid

  • The grid is not a static entity. Yet the cost-of-service analysis used in the CPUC’s recent NEM proposed decision assumes exactly that posture. Acknowledging that the system will change going forward, depending on our configuration decisions, is a key principle that is continually overlooked in these discussions.
  • In California, a customer is about 15 times more likely to experience an outage due to distribution system problems than from generation/transmission issues. That means a customer who decides to rely on self-provided resources can have a setup that is 15 times less reliable than the system grid and still have better reliability than conventional service. This is even more true for customers who reside in rural areas.
  • Upstream of the individual service connection (which costs about $10 per month for residential customers, based on testimony I have submitted in all three utilities’ rate cases), customers share distribution grid capacity with other customers. They are not given shares of the grid to buy and sell with other customers—we leave that task to the utilities, which act as dealers in that marketplace, owning the capacity and selling it to customers. If we are going to have fixed charges that essentially allocate a capacity share to each customer, those customers also should be entitled to buy and sell capacity as they need it. The end result would be a marketplace that prices distribution capacity on either a daily dollars-per-kilowatt or cents-per-kilowatt-hour basis. That system would look just like our current distribution pricing system, but with a lot of unnecessary complexity.
  • This situation is even more true for transmission. There most certainly is not a fixed share of the transmission grid to be allocated to each customer. Those shares are highly fungible.
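The 15-to-1 reliability point in the list above is worth making concrete. Because a self-supplied customer bypasses the distribution system, even generation far less reliable than the grid's can match conventional service. A back-of-envelope sketch with hypothetical outage-hour figures:

```python
def expected_outage_hours(gen_hours, dist_hours):
    """Expected annual outage hours a customer experiences: the sum of
    generation/transmission outages and distribution outages."""
    return gen_hours + dist_hours

# Hypothetical grid customer: 1 hour/year of G&T outages plus 15 hours/year
# of distribution outages (the ~15:1 ratio cited above).
grid_customer = expected_outage_hours(gen_hours=1.0, dist_hours=15.0)

# Self-supplied customer: generation 15x less reliable than the grid's,
# but no exposure to the distribution system at all.
self_supplied = expected_outage_hours(gen_hours=15.0, dist_hours=0.0)
```

Under these assumed figures the self-supplied customer still sees slightly fewer outage hours, which is the point of the comparison.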

What is the objective of utility regulation: just and reasonable rates or revenue assurance?

At the core of this issue is the question of whether utility shareholders are entitled to largely guaranteed revenues to recover their investments. In a market with some level of competitiveness, producers face a degree of risk under normal conditions (more mundane than wildfire risk)—that is not the case for electric utilities, at least in California. (We cataloged the disallowances for California IOUs in the 2020 cost of capital applications, and they amounted to less than one one-hundredth of a percent (0.01%) of revenues over the last decade.) When customers reduce or change their consumption in a manner that reduces sales in a normal market, other customers are not required to pick up the slack—shareholders are. This shareholder risk is one of the core benefits of a competitive market, no matter the degree of imperfection. Neither the utilities nor the generators that sell to them under contract face these risks.

Why should we bother with “efficient” pricing if we are pushing the entire burden of achieving that efficiency on customers who have little ability to alter utilities’ investment decisions? Bottom line: if economists argue for “efficient” pricing, they need to also include in that how utility shareholders will participate directly in the outcomes of that efficient pricing without simply shifting revenue requirements to other customers.

As to the intent of the utilities: in my 30 years of on-the-ground experience, management does not make decisions based on “doing good” that go against its profit objective. There are examples of each utility choosing to pursue profits it was not entitled to. We entered into testimony in PG&E’s 1999 GRC a speech by a PG&E CEO describing how PG&E would exploit the transition period during restructuring to maintain market share. That came back to haunt the state, as it set up the conditions for the ensuing market manipulation.

Each of these issues has been largely ignored in the debate over what to do about rooftop solar policy and investment going forward. It is time to push them to the fore.

Why are real-time electricity retail rates no longer important in California?

The California Public Utilities Commission (CPUC) has been looking at whether and how to apply real-time electricity prices in several utility rate applications. “Real time pricing” involves directly linking the bulk wholesale market price from an exchange such as the California Independent System Operator (CAISO) to the hourly retail price paid by customers. Other charges such as for distribution and public purpose programs are added to this cost to reach the full retail rate. In Texas, many retail customers have their rates tied directly or indirectly to the ERCOT system market that operates in a manner similar to CAISO’s. A number of economists have been pushing for this change as a key solution to managing California’s reliability issues. Unfortunately, the moment may have passed where this can have a meaningful impact.

In California, the bulk power market costs are less than 20% of the total residential rate. Even if we throw in the average capacity prices, it only reaches 25%. In addition, California has a few needle peaks a year compared to the much flatter, longer, more frequent near peak loads in the East due to the differences in humidity. The CAISO market can go years without real price deviations that are consequential on bills. For example, PG&E’s system average rate is almost 24 cents per kilowatt-hour (and residential is even higher). Yet, the average price in the CAISO market has remained at 3 to 4 cents per kilowatt-hour since 2001, and the cost of capacity has actually fallen to about 2 cents. Even a sustained period of high prices such as occurred last August will increase the average price by less than a penny–that’s less than 5% of the total rate. The story in 2005 was different, when this concept was first offered with an average rate of 13 cents per kilowatt-hour (and that was after the 4 cent adder from the energy crisis). In other words, the “variable” component just isn’t important enough to make a real difference.
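The claim that a sustained spike moves bills by less than a penny can be checked with back-of-envelope arithmetic: weight the spike by its share of the year's hours. A sketch with assumed numbers (flat load, illustrative prices):

```python
def spike_impact_cents(base_price, spike_price, spike_hours, total_hours=8760):
    """Increase in the annual average energy price (cents/kWh) from a
    sustained wholesale price spike, assuming flat load across hours."""
    return (spike_price - base_price) * spike_hours / total_hours

# A 100-hour event at 30 cents/kWh against a 3.5-cent wholesale baseline:
delta = spike_impact_cents(base_price=3.5, spike_price=30.0, spike_hours=100)

# Against a 24 cent/kWh retail rate, the bill impact is a small fraction.
share_of_rate = delta / 24.0
```

Even this fairly severe assumed event moves the average by a fraction of a cent, well under 5% of the total rate, which is why the "variable" component carries so little leverage.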

Ahmad Faruqui, a long-time advocate for dynamic retail pricing, wrote in a LinkedIn comment:

“Airlines, hotels, car rentals, movie theaters, sporting events — all use time-varying rates. Even the simple parking meter has a TOU rate embedded in it.”

It’s true that these prices vary with time, and electricity prices are headed that way if not there already. Yet these industries don’t have prices that change instantly with demand and resource availability–their prices are often set months ahead based on expectations of supply and demand, much as traditional electricity TOU rates are set already. Additionally, in all of these industries, the price variations are substantially less than 100%. But for electricity, when dynamic price changes matter, they can be as large as 1,000%. I doubt any of these industries would use price variations that large, for practical reasons.

Rather than pointing out that this tool is available and that versions of it are used elsewhere, we should be asking why the tool isn’t being used here. What’s so different about electricity, and are we making the right comparisons?

Instead, we might look at a different package to incorporate customer resources and load dynamism based on what has worked so far.

  • First, have TOU pricing with predictable patterns. California largely has this in place already, and many customer groups have shown that they respond to this signal. In the Statewide Pricing Pilot on critical peak pricing, the bulk of the load shifting occurred due to the implementation of a base TOU rate, and the CPP effect was relatively small.
  • Second, to enable more distributed energy resources (DER), offer fixed-price contracts akin to generation PPAs. Everyone then understands the terms of the contract, instead of the implicit arrangement of net energy metering (NEM) that everyone now finds unsatisfactory. It also means we have to get away from the mistaken belief that short-run prices or marginal costs represent “market value” for electricity assets.
  • Third, for managing load, we should have robust demand management/response programs that target the truly manageable loads and compensate customers based on the full avoided costs they create.

Relying on short term changes diminishes the promise of energy storage


I posted this response on EDF’s blog about energy storage:

This post too easily accepts the conventional industry “wisdom” that the only valid price signals come from short-term responses and effects. In general, storage and demand response are likely to lead to increased renewables investment even if GHG emissions increase in the short run. This post hints at that possibility, but it doesn’t make the point explicitly. (The only exception might be increased viability of baseloaded coal plants in the East, but even there I think the lower cost of renewables is displacing retiring coal.)

We have two facts about the electric grid that undermine the validity of short-term electricity market functionality and pricing. First, regulatory imperatives to guarantee system reliability cause new capacity to be built prior to any evidence of capacity or energy shortages in the ISO balancing markets. Second, fossil-fueled generation is no longer the incremental new resource in much of the U.S. electricity grid. While the ISO energy markets still rely on fossil-fueled generation as the “marginal” bidder, these markets are in fact just transmission balancing markets, not sources for meeting new incremental loads. Most of that incremental load is now being met by renewables with near-zero operational costs, and those resources do not directly set the short-term prices. Combined with the first shortcoming, the total short-term price is substantially below the true marginal cost of new resources.

Storage policy and pricing should be set using long-term values and emission changes based on expected resource additions, not on tomorrow’s energy imbalance market price.

The 20-year cycle in the electricity world


The electricity industry in California seems to face a new world about every 20 years.

  • In 1960, California was in a boom of building fossil-fueled power plants to supplement the hydropower that had been a prime motive source.
  • In 1980, the state was shifting focus from rapid growth and large central generation stations to increased energy efficiency and bringing in third-party power developers.
  • That set in motion the next wave of change two decades later. Slowing demand plus exorbitant power contract prices led to restructuring, with substantial divestiture of the utilities’ role in generating power. Unfortunately, that effort ended up half-baked due to several obvious flaws, but out of the wreckage emerged a shift to third-party renewable projects. However, the state still didn’t learn its lesson about how to set appropriate contract prices, and again rates skyrocketed.
  • This has now led to yet another wave, with two paths. The first is the rapid emergence of distributed energy resources such as solar rooftops and garage batteries, and the development of complementary technologies in electric vehicles and building electrification. The second is the devolution of power resource acquisition to local entities (CCAs).

Electric industry tries the “big lie”


The Edison Electric Institute has floated the idea that demand charges should be renamed “efficiency rates.” Demand charges bill a customer’s maximum usage in a month; once that peak is set, usage below the peak adds nothing to the demand charge, making additional power effectively free. Providing power for free encourages more use, not less, which is the opposite of what “efficiency rates” should do. Apparently this proposal is part of a larger effort to relabel everything that utilities find objectionable, such as distributed energy resources.

Demand charges can have a place in rate making, but the best such tool, made feasible by the rollout of “smart meters,” is daily demand charges that reset each day.
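The contrast between the monthly demand charge criticized above and a daily-reset design can be sketched numerically. The rates and load shape here are hypothetical:

```python
def monthly_demand_bill(daily_peaks_kw, rate_per_kw):
    """Conventional design: bill the single highest peak in the month."""
    return max(daily_peaks_kw) * rate_per_kw

def daily_demand_bill(daily_peaks_kw, rate_per_kw_day):
    """Daily-reset design: bill each day's peak separately, so every day's
    usage pattern carries a price signal."""
    return sum(peak * rate_per_kw_day for peak in daily_peaks_kw)

# A 30-day month: 10 kW every day except one 50 kW spike.
peaks = [10.0] * 29 + [50.0]
monthly = monthly_demand_bill(peaks, rate_per_kw=15.0)
daily = daily_demand_bill(peaks, rate_per_kw_day=0.5)
```

Under the monthly design the single spike sets the entire charge and the other 29 days face a zero marginal price on demand; the daily reset keeps a marginal price on every day's peak.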

 

Commentary on CPUC Rate Design Workshop

cartoon

The California Public Utilities Commission (CPUC) held a two-day workshop on rate design principles for commercial and industrial customers. To the extent possible, rates in California are designed to reflect the temporal changes in underlying costs–the “marginal costs” of power production and delivery.

Professor Severin Borenstein’s opening presentation didn’t discuss a very important aspect of marginal costs that we have too long ignored in ratemaking: the issue of “putty/clay” differences. This is a matter of temporal consistency in marginal cost calculation. The “putty” costs are the short-term costs of operating existing infrastructure; the “clay” costs are the longer-term costs of adding infrastructure. Sometimes operational costs can substitute for infrastructure. However, we are now adding infrastructure (clay) in renewables that has negligible operating (putty) costs. The issue we now face is how to transition from putty costs to clay costs as the appropriate marginal cost signals.

Carl Linvill from the Regulatory Assistance Project (RAP) made a contrasting presentation that incorporated those differences in temporal perspectives for marginal costs.

Another issue, raised by Doug Ledbetter of Opterra, is that customers require certainty as well as expected returns to invest in energy-saving projects. We can give customers that certainty if the utilities vintage or grandfather rates and/or rate structures at the time the investment is made. Rates and structures for other customers can then vary to reflect the benefits created by the customers who made those investments.

Jamie Fine of EDF emphasized that rate design needs to focus on what is actionable by customers more than on the best reflection of underlying costs. As intervenor group representatives, we are constantly having this discussion with the utilities. Often when we make a suggestion about easing customer acceptance, they say “we didn’t think of that,” but then just move along with their original plan. The rise of DERs and CCAs is in part a response to that tone-deaf approach by the incumbent utilities.

Fighting the last war: Study finds solar + storage uneconomic now | Utility Dive

“A Rochester Institute of Technology study says a customer must face high electricity bills and unfavorable net metering or feed-in policies for grid defection to work.”

Yet…this study used current battery costs ($350/kWh), ignoring probable cost decreases, and then made more restrictive assumptions about how such a system might work. It’s not clear whether “defection” meant complete self-sufficiency or just reducing the generation portion (which in California is about half of the electricity bill). Regardless, the study shows that grid defection is cost-effective in Hawaii, confirming the RMI findings. Even so, RMI said it would take at least 10 years before such defection was cost-effective even in high-cost states like New York and California.

A more interesting study would be to look at the “break-even” cost thresholds for solar panels and batteries to make these competitive with utility service. Then planners and decision makers could assess the likelihood of reaching those levels within a range of time periods.
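One way to frame that break-even exercise, as a rough screen: find the installed battery cost per kWh at which lifetime delivered energy, valued at the avoided retail rate, just pays for the battery. This deliberately ignores discounting, degradation, and the solar side of the system; all inputs below are assumptions, not study figures:

```python
def breakeven_battery_cost(avoided_rate, cycles_per_year, life_years,
                           round_trip_eff=0.9):
    """Installed $/kWh of capacity at which daily-cycled storage breaks
    even against the avoided retail rate (undiscounted screen)."""
    delivered_kwh = cycles_per_year * life_years * round_trip_eff
    return avoided_rate * delivered_kwh

# Avoiding a $0.30/kWh retail rate, one cycle a day for 10 years:
threshold = breakeven_battery_cost(avoided_rate=0.30, cycles_per_year=365,
                                   life_years=10)
```

Planners could compare thresholds like this against projected battery cost curves to bracket when defection becomes plausible, which is the exercise suggested above.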

Source: A study throws cold water on residential solar-plus-storage economics | Utility Dive

And then this…Trump’s energy plan doesn’t mention solar – The Washington Post

This comes after the release of a study showing solar now employs more workers than oil, gas and coal combined.

Source: Trump’s energy plan doesn’t mention solar, an industry that just added 51,000 jobs – The Washington Post