Tag Archives: distributed energy resources

Part 1: A response to “Rooftop Solar Inequity”

Severin Borenstein at the Energy Institute at Haas has plunged into the politics of devising policies for rooftop solar systems. I respond to two of his blog posts in two parts here, with Part 1 today. I’ll start by posting a link to my earlier blog post that addresses many of his assertions in detail, and I respond to several additional issues here.

First, the claims of rooftop solar subsidies rest on two fallacious premises. The first is that they double count the stranded cost charge from poor portfolio procurement and management that I reference above and discuss at greater length in my blog post. Take out that cost and the “subsidy” falls substantially. The second is that solar hasn’t displaced load growth. In reality, utility loads and peak demand have been flat since 2006 and even declining over the last three years. Even the peak last August was 3,000 MW below the record set in 2017, which in turn was only a few hundred MW above the 2006 peak. Rooftop solar has been a significant contributor to this decline. Displaced load means displaced distribution investment and gas-fired generation (even though the IOUs have justified several billion dollars in added investment with forecasted “growth” that didn’t materialize). I have documented those phantom load growth forecasts in testimony at the CPUC since 2009. The cost of service studies supposedly showing these subsidies assume a static world in which nothing has changed with the introduction of rooftop solar. Nothing could be further from the truth.

Second, TURN and Cal Advocates have been pushing against decentralization of the grid for decades, going back to restructuring. Decentralization means that the forums at the CPUC become less important and their influence declines. They have fought against CCAs for the same reason, and they’ve been fighting rooftop solar almost since its inception as well. Yet they have failed to push for the incentives enacted in AB 57 for the IOUs to manage their portfolios, or to control the exorbitant contract terms and overabundance of early renewable contracts signed by the IOUs that are the primary reason for the exorbitant growth in rates.

Finally, there are many self-citations to studies, along with the claim that the authors have no financial interest. E3 has significant financial interests in studies paid for by utilities, including the California IOUs. While they do many good studies, they have also produced studies with key shadings of assumptions that support the IOUs’ positions. As for studies from the CPUC, commissioners frequently direct the expected outcome of these. The results from the Customer Choice Green Book in 2018 are a case in point. The CPUC knows where its political interests are and acts to satisfy those interests. (I have witnessed this firsthand while in the room.) Unfortunately, many of the academic studies I see on these cost allocation issues don’t accurately reflect the various financial and regulatory arrangements and have misleading or incorrect findings. This happens simply because academics aren’t involved in the “dirty” process of ratemaking and can’t know these things from a distance. (The best academic studies are those done by people who worked in the bowels of those agencies and then went into academia.)

We are at a point where we can start seeing the additional benefits of decentralized energy resources. The most important may be the resilience gained by integrating DERs with EVs to ride out local distribution outages (which are 15 times more likely to occur than generation and transmission outages) once the utilities agree to enable this technology, which already exists. Another may be the erosion of the political power wielded by large centralized corporate interests. (A recent paper showed how increasing market concentration has led to large wealth transfers to corporate shareholders since 1980.) And this debate has highlighted the elephant in the room: how utility shareholders have escaped cost responsibility for decades, which has led to our expensive, wasteful system. We need to be asking a fundamental question: where is the shareholders’ skin in this game? “Obligation to serve” isn’t a blank check.

Transmission: the hidden cost of generation

The cost of transmission for new generation has become a more salient issue. The CAISO found that distributed generation (DG) had displaced $2.6 billion in transmission investment by 2018. The value of displaced transmission requirements can be determined from the utilities’ filings with FERC and the accounting for new power plant capacity. Using similar methodologies for calculating this cost in California and Kentucky, the incremental cost in both independent system operators (ISOs) is about $37 per megawatt-hour (3.7 cents per kilowatt-hour). This added cost roughly doubles the cost of utility-scale renewables relative to distributed generation.

When solar rooftop displaces utility generation, particularly during peak load periods, it also displaces the associated transmission that interconnects the plant and transmits that power to the local grid. And because power plants compete with each other for space on the transmission grid, the reduction in bulk power generation opens up that grid to send power from other plants to other customers.

The incremental cost of new transmission is driven by the installation of new generation capacity, since transmission delivers power to substations before it is distributed to customers. This incremental cost represents the long-term value of displaced transmission. When setting rates for rooftop solar in the NEM tariff, this amount should be used to calculate the net benefits for net energy metered (NEM) customers, who avoid the need for additional transmission investment by providing local resources rather than remote bulk generation.

  • In California, transmission investment additions were collected from the FERC Form 1 filings for 2017 to 2020 for PG&E, SCE and SDG&E. The Wholesale Base Total Revenue Requirements submitted to FERC were collected for the three utilities for the same period. The average fixed charge rate for the Wholesale Base Total Revenue Requirements was 12.1% over that period. That fixed charge rate is applied to the average of the transmission additions to determine the average incremental revenue requirement for new transmission for the period. The plant capacity installed in California from 2017 to 2020 is calculated from the California Energy Commission’s “Annual Generation – Plant Unit” data. (This estimate is conservative because (1) it includes the entire state, while CAISO serves only 80% of the state’s load and the three utilities serve a subset of that, and (2) the list of “new” plants includes a number of repowered natural gas plants at sites with already existing transmission. A more refined analysis would find an even higher incremental transmission cost.)

Based on this analysis, the appropriate marginal transmission cost is $171.17 per kilowatt-year. Applying the average CAISO load factor of 52%, the marginal cost equals $37.54 per megawatt-hour.
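
The conversion from a dollars-per-kilowatt-year marginal cost to a dollars-per-megawatt-hour figure can be sketched as a quick check of the arithmetic above (a minimal sketch, assuming a standard 8,760-hour year; small differences from the $37.54 in the text come from rounding):

```python
# Convert a marginal transmission cost in $/kW-year to $/MWh by spreading
# the annual cost over the energy delivered at a given load factor.

HOURS_PER_YEAR = 8760

def kw_year_to_mwh(cost_per_kw_year: float, load_factor: float) -> float:
    """$/kW-year -> $/MWh: annual cost divided by the kWh delivered per kW
    of peak demand, scaled from kWh to MWh."""
    kwh_per_kw = HOURS_PER_YEAR * load_factor
    return cost_per_kw_year / kwh_per_kw * 1000

# CAISO figures from the text: $171.17/kW-year at a 52% load factor
print(round(kw_year_to_mwh(171.17, 0.52), 2))  # ≈ 37.6 $/MWh
```

The same function applied to the Kentucky figure below ($196/kW-year at PJM's higher load factor) lands near the same $37/MWh.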

  • In Kentucky, Kentucky Power is owned by American Electric Power (AEP), which operates in the PJM ISO. PJM has a market in financial transmission rights (FTRs) that values relieving congestion on the grid in the short term. AEP files network service rates each year with PJM and FERC. The rate more than doubled from 2018 to 2021, an average annual increase of 26%.

Based on the addition of 22,907 megawatts of generation capacity in PJM over that period, the incremental cost of transmission was $196 per kilowatt-year, or nearly four times the current AEP transmission rate. This equates to about $37 per megawatt-hour (3.7 cents per kilowatt-hour).

Outages highlight the need for a fundamental revision of grid planning

The salience of outages due to distribution problems, such as those that occurred with record heat in the Pacific Northwest and California’s public safety power shutoffs (PSPS), highlights a need for a change in perspective on addressing reliability. In California, customers are 15 times more likely to experience an outage due to distribution issues than due to generation (really transmission outages, as August 2020 was the first time that California experienced a true generation shortage requiring imposed rolling blackouts; the withholding of 2001 doesn’t count). Even the widespread blackouts in Texas in February 2021 are attributable in large part to problems beyond just a generation shortage.

Yet policymakers and stakeholders focus almost solely on increasing reserve margins to improve reliability. If we instead looked for the most comprehensive means of improving reliability in the manner that matters to customers, we’d probably find that distributed energy resources are a much better fit. To the extent that DERs can relieve distribution-level loads, we gain at both levels, not just at the system level with added bulk generation.

This approach first requires a change in how resource adequacy is defined and modeled, looking from the perspective of the customer meter. It will require a more extensive analysis of distribution circuits and the ability of individual circuits to island and self-supply during stressful conditions. It also requires a better assessment of the conditions that lead to local outages. Increased resource diversity should lead to improved probability of availability as well. Current modeling of the benefits of regions leaning on each other depends on largely deterministic assumptions about resource availability. Instead we should be using probability distributions for resources and loads to assess overlapping conditions. An important aspect of reliability is that 100 generators of 10 MW each, every one with a 10% probability of outage, provide much more reliability than a single 1,000 MW generator with the same 10% outage rate, thanks to diversity. This fact is generally ignored in setting the reserve margins for resource adequacy.
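
The diversity point can be illustrated with a quick calculation (a sketch, assuming outages are independent): compare the probability of losing more than 20% of capacity from a single 1,000 MW unit versus a fleet of 100 independent 10 MW units, each with a 10% forced outage rate.

```python
from math import comb

OUTAGE = 0.10  # forced outage rate per unit

# Single 1,000 MW unit: losing more than 20% of capacity means
# losing the whole unit, which happens with the full outage probability.
p_single = OUTAGE  # 0.100

# Fleet of 100 independent 10 MW units: losing more than 20% of capacity
# means more than 20 units are out at once, i.e. P(K > 20) for
# K ~ Binomial(100, 0.10).
p_fleet = sum(
    comb(100, k) * OUTAGE**k * (1 - OUTAGE)**(100 - k)
    for k in range(21, 101)
)

print(f"single 1,000 MW unit: {p_single:.3f}")  # 0.100
print(f"100 x 10 MW fleet:    {p_fleet:.6f}")   # well under 1%
```

Both portfolios have the same expected available capacity (900 MW), but the fleet's tail risk of a deep capacity loss is orders of magnitude smaller, which is exactly what matters for reserve margins.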

We also should consider shifting resource investment from bulk generation (and storage) where it has a much smaller impact on individual customer reliability to lower voltage distribution. Microgrids are an example of an alternative that better focuses on solving the real problem. Let’s start a fundamental reconsideration of our electric grid investment plan.

Microgrids could cost 10% of undergrounding PG&E’s wires

One proposed solution for reducing wildfire risk is for PG&E to put its grid underground. There are a number of problems with undergrounding, including increased maintenance costs, seismic and flooding risks, and problems with excessive heat (including exploding underground vaults). But even ignoring those issues, the costs could be exorbitant – greater than anyone has really considered. An alternative is shifting rural service to microgrids. A high-level estimate shows that using microgrids instead could cost less than 10% of undergrounding the lines in regions at risk. The CPUC is considering a policy shift to promote this type of solution and has opened a new rulemaking on promoting microgrids.

We can put this in context by estimating costs from PG&E’s data provided in its 2020 General Rate Case, and comparing that to its total revenue requirements. That will give us an estimate of the rate increase needed to fund this effort.

PG&E has about 107,000 miles of distribution-voltage wires and 18,500 miles of transmission lines. PG&E listed 25,000 miles of distribution lines as being in wildfire risk zones. If the risk is proportionate for transmission, that is another 4,300 miles. PG&E has estimated that it would cost $3 million per mile to underground distribution (ignoring the higher maintenance and replacement costs). Undergrounding transmission can cost as much as $80 million per mile. Using estimates provided to the CAISO and picking the midpoint of the four-to-ten-times cost adder for undergrounding, $25 million per mile for transmission is a reasonable estimate. Based on these figures it would cost $75 billion to underground distribution and $108 billion for transmission, for a total cost of $183 billion. Using PG&E’s current cost of capital, that translates into an annual revenue requirement of $9.1 billion.
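
The arithmetic behind these estimates can be laid out explicitly (a sketch using the figures above; the ~5% annual fixed charge rate is inferred from the $9.1 billion result, not taken from PG&E's filings):

```python
# Rough undergrounding cost build-up from the figures in the text.
DIST_MILES = 25_000         # distribution miles in wildfire risk zones
TRANS_MILES = 4_300         # transmission miles, assuming proportionate risk
DIST_COST_PER_MILE = 3e6    # $3M/mile to underground distribution
TRANS_COST_PER_MILE = 25e6  # $25M/mile, midpoint of the 4-10x adder

dist_capital = DIST_MILES * DIST_COST_PER_MILE      # $75 billion
trans_capital = TRANS_MILES * TRANS_COST_PER_MILE   # $107.5 billion
total_capital = dist_capital + trans_capital

# Assumed annual carrying charge (return on capital, taxes, depreciation);
# ~5% reproduces the $9.1B annual revenue requirement in the text.
FIXED_CHARGE_RATE = 0.05
annual_revenue_req = total_capital * FIXED_CHARGE_RATE

print(f"total capital:  ${total_capital/1e9:.1f}B")       # 182.5, ~$183B
print(f"annual revenue: ${annual_revenue_req/1e9:.1f}B")  # ≈ $9.1B
```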

PG&E’s overall annual revenue requirement is currently about $14 billion, and PG&E has asked for increases that could add another $3 billion. Adding $9.1 billion would add two-thirds (~67%) to PG&E’s overall rates, which include both distribution and generation. It would double distribution rates.

This raises two questions:

  1. Is this worth doing to protect properties in the affected wildland-urban interface (WUI)?
  2. Is there a less expensive option that can achieve the same objective?

On the first question, if we look at the assessed property value in the 15 counties most likely to be at risk (which include substantial amounts of land outside the WUI), the total assessed value is $462 billion. In other words, we would be spending 16% of the value of the property being protected on the distribution lines alone. The annual revenue required would raise effective property tax rates from 0.77% to 2.0%, more than two and a half times current levels.

That brings us to the second question. If we assume that load share is proportionate to the share of lines at risk, PG&E serves about 18,500 GWh in those areas. The equivalent cost per unit of undergrounding would be about $480 per MWh.

The average cost of a microgrid in California, based on a 2018 CEC study, is $3.5 million per megawatt. That translates to about $60 per MWh at a typical load factor. In other words, a microgrid could cost one-eighth as much as undergrounding. The total equivalent cost compared to the undergrounding scenario would be $13 billion, which translates to an 8% increase in PG&E rates.
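
The comparison can be sketched numerically (a rough check using the figures above; the 10% carrying charge and 67% "typical" load factor are assumptions I supply for illustration, not values from the CEC study):

```python
# Per-MWh comparison of undergrounding vs. microgrids for the at-risk areas.

AT_RISK_LOAD_GWH = 18_500         # annual load in at-risk areas (from the text)
UNDERGROUND_ANNUAL_COST = 9.1e9   # annual revenue requirement of undergrounding

underground_per_mwh = UNDERGROUND_ANNUAL_COST / (AT_RISK_LOAD_GWH * 1000)
print(f"undergrounding: ${underground_per_mwh:.0f}/MWh")  # ~$490; text rounds to $480

MICROGRID_COST_PER_MW = 3.5e6  # from the 2018 CEC study
FIXED_CHARGE = 0.10            # assumed annual carrying charge (not in the text)
LOAD_FACTOR = 0.67             # assumed typical load factor (not in the text)

microgrid_per_mwh = MICROGRID_COST_PER_MW * FIXED_CHARGE / (8760 * LOAD_FACTOR)
print(f"microgrid:      ${microgrid_per_mwh:.0f}/MWh")    # ≈ $60/MWh

print(f"ratio: {underground_per_mwh / microgrid_per_mwh:.1f}x")  # roughly 8x
```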

To what extent should we pursue undergrounding lines versus shifting to microgrid alternatives in the WUI areas? Should we encourage energy independence for these customers if they are on microgrids? How should we share these costs–should locals pay or should they be spread over the entire customer base? Who should own these microgrids: PG&E or CCAs or a local government?

Relying on short term changes diminishes the promise of energy storage

I posted this response on EDF’s blog about energy storage:

This post accepts too easily the conventional industry “wisdom” that the only valid price signals come from short-term responses and effects. In general, storage and demand response are likely to lead to increased renewables investment even if GHG emissions increase in the short run. The post hints at that possibility, but it doesn’t make the point explicitly. (The only exception might be increased viability of baseloaded coal plants in the East, but even then I think the lower cost of renewables is displacing retiring coal.)

We have two facts about the electric grid system that undermine the validity of short-term electricity market functionality and pricing. First, regulatory imperatives to guarantee system reliability cause new capacity to be built prior to any evidence of capacity or energy shortages in the ISO balancing markets. Second, fossil-fueled generation is no longer the incremental new resource in much of the U.S. electricity grid. While the ISO energy markets still rely on fossil-fueled generation as the “marginal” bidder, these markets are in fact just transmission balancing markets, not sources for meeting new incremental loads. Most of that incremental load is now being met by renewables with near-zero operational costs, and those resources do not directly set short-term prices. Combined with the first shortcoming, the short-term price is substantially below the true marginal cost of new resources.

Storage policy and pricing should be set using long-term values and emission changes based on expected resource additions, not on tomorrow’s energy imbalance market price.

The 20-year cycle in the electricity world

The electricity industry in California seems to face a new world about every 20 years.

  • In 1960, California was in a boom of building fossil-fueled power plants to supplement the hydropower that had been a prime motive source.
  • In 1980, the state was shifting focus from rapid growth and large central generation stations to increased energy efficiency and bringing in third-party power developers.
  • That set in motion the next wave of change two decades later. Slowing demand plus exorbitant power contract prices led to restructuring, with substantial divestiture of the utilities’ role in generating power. Unfortunately, that effort ended up half-baked due to several obvious flaws, but out of the wreckage emerged a shift to third-party renewable projects. However, the state still didn’t learn its lesson about how to set appropriate contract prices, and again rates skyrocketed.
  • This has now led to yet another wave, with two paths. The first is the rapid emergence of distributed energy resources such as solar rooftops and garage batteries, and the development of complementary technologies in electric vehicles and building electrification. The second is the devolution of power resource acquisition to local entities (CCAs).

Ahead of the tariff, U.S. imported 3 years’ worth of solar panels from China

Panel imports were up 1,200 percent in the fourth quarter of 2017. That implies that installers were banking supplies to ride out the import tariff imposed by the Trump Administration. Unfortunately, it also means that the rapid technical and cost progress for panels may stall for that three-year period.

Repost: Why utilities are more confident than ever about renewable energy growth | Utility Dive

“(O)nly 16% of respondents indicating integration is the most pressing problem. Instead, the election of Donald Trump appeared to have an impact on their fuel mix outlooks, with 35% of respondents indicating regulatory and market uncertainty are now the most pressing concern.”

Source: Why utilities are more confident than ever about renewable energy growth | Utility Dive

Fighting the last war: Study finds solar + storage uneconomic now | Utility Dive

“A Rochester Institute of Technology study says a customer must face high electricity bills and unfavorable net metering or feed-in policies for grid defection to work.”

Yet…this study used current battery costs ($350/kWh), ignored probable cost decreases, and made restrictive assumptions about how such a system might work. It’s not clear whether “defection” meant complete self-sufficiency or just reducing the generation portion (which in California is about half of the electricity bill). Regardless, the study shows that grid defection is cost-effective in Hawaii, confirming the RMI findings. Even so, RMI said it would take at least 10 years before such defection was cost-effective even in high-cost states like New York and California.

A more interesting study would be to look at the “break-even” cost thresholds for solar panels and batteries to make these competitive with utility service. Then planners and decision makers could assess the likelihood of reaching those levels within a range of time periods.

Source: A study throws cold water on residential solar-plus-storage economics | Utility Dive