
Has rooftop solar cost California ratepayers more than the alternatives?

The Energy Institute’s blog rests on an important premise: that rooftop solar customers have imposed costs on other ratepayers while providing few benefits. That premise runs counter to the empirical evidence.

First, these customers have deferred an enormous amount of utility-scale generation. In 2005 the CEC forecasted that the 2020 CAISO peak load would be 58,662 MW. The highest peak after 2006 has been 50,116 MW (in 2017, about 3,000 MW higher than in August 2020). That’s a savings of 8,546 MW. (Note that residential installations are two-thirds of the distributed solar installations.) The correlation of added distributed solar capacity with that peak reduction is 0.938. Even in 2020, the incremental solar DER was 72% of the peak reduction trend. We can calculate the avoided peak capacity investment from 2006 to today using the CEC’s 2011 Cost of Generation model inputs. Combustion turbines cost $1,366/kW (based on a survey of the 20 installed plants, a survey I managed), and at an annual fixed charge rate of 15.3% that comes to $209/kW-year. The total annual savings is $1.8 billion. The total revenue requirements for the three IOUs, plus implied generation costs for DA and CCA LSEs, were $37 billion in 2021. So the annual savings that have accrued to ALL customers are 4.9% of revenues. Given that NEM customers are about 4% of the customer base, even if those customers paid nothing, everyone else’s bill would go up by only about 4%, which is less than what rooftop solar has saved so far.

In addition, the California Independent System Operator (CAISO) calculated in 2018 that at least $2.6 billion in transmission projects had been deferred through installed distributed solar. Using the amount installed in 2017 of 6,785 MW, the avoided costs are $383/kW or $59/kW-year. This translates to an additional $400 million per year or about 1.1% of utility revenues.

The total savings to customers are over $2.2 billion per year, or about 6% of revenue requirements.
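For readers who want to check the arithmetic, here is a minimal sketch in Python that strings together the figures cited above (the CEC forecast, the recorded peak, the 2011 Cost of Generation inputs, and the CAISO transmission deferral estimate). It reproduces the roughly $2.2 billion per year and 6% of revenue requirements.

```python
# Avoided peak generation capacity
forecast_2020_peak_mw = 58_662   # CEC 2005 forecast of the 2020 CAISO peak
actual_max_peak_mw = 50_116      # highest CAISO peak after 2006 (2017)
avoided_capacity_mw = forecast_2020_peak_mw - actual_max_peak_mw   # 8,546 MW

ct_cost_per_kw = 1_366           # combustion turbine capital cost, $/kW
fixed_charge_rate = 0.153        # annual fixed charge rate
annual_cost_per_kw_yr = ct_cost_per_kw * fixed_charge_rate          # ~$209/kW-yr

gen_savings_per_yr = avoided_capacity_mw * 1_000 * annual_cost_per_kw_yr   # ~$1.8B

# Deferred transmission (CAISO 2018 estimate spread over 2017 installed DER)
deferred_transmission = 2.6e9    # $2.6 billion of deferred projects
installed_der_mw = 6_785         # distributed solar installed through 2017
tx_cost_per_kw = deferred_transmission / (installed_der_mw * 1_000)        # ~$383/kW
tx_savings_per_yr = tx_cost_per_kw * fixed_charge_rate * installed_der_mw * 1_000  # ~$0.4B

# Share of statewide revenue requirements (~$37B in 2021)
revenue_requirement = 37e9
total_savings = gen_savings_per_yr + tx_savings_per_yr
print(f"Annual savings: ${total_savings/1e9:.1f}B "
      f"({total_savings/revenue_requirement:.1%} of revenue requirements)")
```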

Second, rooftop solar isn’t the most expensive power source. My rooftop system, installed in 2017, costs 12.6 cents/kWh (financed separately from our mortgage). In comparison, PG&E’s RPS portfolio cost over 12 cents/kWh in 2019 according to the CPUC’s 2020 Padilla Report, plus there is an incremental transmission cost approaching 4 cents/kWh, so we’re looking at a total delivered cost of 16 cents/kWh for existing renewables. (Note that the system costs to integrate solar are largely the same whether the installations are utility-scale or distributed.)

Comparing the average IOU RPS portfolio cost to that of rooftop solar is appropriate from the perspective of a customer. Utility customers see average, not marginal, costs, and average cost pricing is widely prevalent in our economy. To achieve 100% renewable power, a reasonable customer will look at average utility costs for the same type of power. We use the same principle by posting on energy-efficient appliances the expected bill savings based on utility rates, not on the utilities’ marginal resource acquisition costs.

And customers who would instead choose to respond to the marginal cost of new utility power will never really see those economic savings, because the supposed savings created by that decision will be diffused across all customers. In other words, other customers will extract all of the positive rents created by that choice. We could allow for bypass pricing (which industrial customers get if they threaten to leave the service area), but currently we force other customers, rather than shareholders, to bear the costs of this type of pricing, unlike in other industries. Individual customers are the relevant decision makers for most energy use purposes, and they base those decisions on average cost pricing, so why should we carve out a single special case for a resource that is quite similar to energy efficiency?

I wrote more about whether a fixed connection cost is appropriate for NEM customers and the complexity of calculating that charge earlier this week.

Understanding core facts before moving forward with NEM reform

There is a general understanding among the most informed participants and observers that California’s net energy metering (NEM) tariff as originally conceived was not intended to be a permanent fixture. The objective of the NEM rate was to get a nascent renewable energy industry off the ground, and now California has more than 11,000 megawatts of distributed solar generation. The distributed energy resources industry is now in much less need of subsidies, but its full value also must be recognized. To this end it is important to understand some key facts that are sometimes overlooked in the debate.

The true underlying reason for high rates–rising utility revenue requirements

In California, retail electricity rates are so high for two reasons: the first is stranded generation costs, and the second is a set of “public goods charges” that constitute close to half of the distribution cost. PG&E’s rates have risen 57% since 2009. Many, if not most, NEM customers have installed solar panels as one way to avoid these rising rates. The thing is, when NEM 1.0 and 2.0 were adopted, the cost of the renewable power purchase agreement (PPA) portfolios was well over $100/MWh (even $120/MWh through 2019), and adding in the other T&D costs, this approached the average system rate as late as 2019 for SCE and PG&E before their downward trends reversed course. That the retail rate skyrocketed while renewable PPA prices fell dramatically is a subsequent development that too many people have forgotten.

California uses Ramsey pricing principles to allocate these costs (the CPUC applies “equal percent marginal costs,” or EPMC, as a derivative measure), but Ramsey pricing was conceived for one-way pricing. I don’t know what Harold Hotelling would think of using his late student’s work for two-way transactions. This is probably the fundamental problem in NEM rates: the stranded and public goods costs are incurred by one party on one side of the ledger (the utility), but the other party (the NEM customer) doesn’t have these same cost categories on the other side of the ledger; NEM customers may have their own set of costs, but those don’t fall into the same categories. So the issue is how to set two-way rates given the odd relationship of these costs between utilities and ratepayers.

This situation argues for setting aside the stranded costs and public goods charges to be paid for in some manner other than electric rates. The answer can’t be a shift of consumption charges into a large access charge (e.g., a customer charge), because customers will simply leave entirely when half of their current bill is rolled into the new access charge.

The largest nonbypassable charge (NBC), now delineated for all customers, is the power cost indifference adjustment (PCIA). The PCIA is the stranded generation asset charge for the portfolio composed of utility-scale generation. Most of this is power purchase agreements (PPAs) signed within the last decade. For PG&E in 2021 according to its 2020 General Rate Case workpapers, this exceeded 4 cents per kilowatt-hour.

Basic facts about the grid

  • The grid is not a static entity; it will change going forward, yet the cost of service analysis used in the CPUC’s recent NEM proposed decision assumes a static system. Acknowledging that the system will evolve depending on our configuration decisions is a key principle that is continually overlooked in these discussions.
  • In California, a customer is about 15 times more likely to experience an outage due to distribution system problems than due to generation or transmission issues. That means a customer who decides to rely on self-provided resources can have a setup that is 15 times less reliable than the system grid and still have better reliability than conventional service. This is even more true for customers who reside in rural areas.
  • Upstream of the individual service connection (which costs about $10 per month for residential customers, based on testimony I have submitted in all three utilities’ rate cases), customers share distribution grid capacity with other customers. They are not given shares of the grid to buy and sell with other customers; we leave that task to the utilities, which act as dealers in that marketplace, owning the capacity and selling it to customers. If we are going to impose fixed charges that essentially allocate a capacity share to each customer, those customers also should be entitled to buy and sell capacity as they need it. The end result would be a marketplace that prices distribution capacity on either a daily dollars-per-kilowatt or cents-per-kilowatt-hour basis, which would look just like our current distribution pricing system but with a bunch of unnecessary complexity.
  • This situation is even more true for transmission. There most certainly is not a fixed share of the transmission grid to be allocated to each customer. Those shares are highly fungible.

What is the objective of utility regulation: just and reasonable rates or revenue assurance?

At the core of this issue is the question of whether utility shareholders are entitled to largely guaranteed revenues to recover their investments. In a market with some level of competitiveness, producers face a degree of risk under normal conditions (more mundane than wildfire risk); that is not the case with electric utilities, at least in California. (We cataloged the disallowances for California IOUs in the 2020 cost of capital applications, and they amounted to less than one one-hundredth of a percent (0.01%) of revenues over the last decade.) When customers reduce or change their consumption patterns in a manner that reduces sales in a normal market, other customers are not required to pick up the slack; shareholders are. Bearing this risk is one of the core features of a competitive market, no matter the degree of imperfection. Neither the utilities nor the generators who sell to them under contract face these risks.

Why should we bother with “efficient” pricing if we are pushing the entire burden of achieving that efficiency on customers who have little ability to alter utilities’ investment decisions? Bottom line: if economists argue for “efficient” pricing, they need to also include in that how utility shareholders will participate directly in the outcomes of that efficient pricing without simply shifting revenue requirements to other customers.

As to the intent of the utilities, in my 30 years of on-the-ground experience, management does not make decisions based on “doing good” that go against the profit objective. There are examples of each utility choosing to pursue profits it was not entitled to. We entered into testimony in PG&E’s 1999 GRC a speech by a PG&E CEO describing how PG&E would exploit the transition period during restructuring to maintain market share. That came back to haunt the state, as it set up the conditions for the ensuing market manipulation.

Each of these issues has been largely ignored in the debate over what to do about rooftop solar policy and investment going forward. It is time to push them to the fore.

A misguided perspective on California’s rooftop solar policy

Severin Borenstein at the Energy Institute at Haas has taken another shot at rooftop solar net energy metering (NEM). He has been a continual critic of California’s energy decentralization policies, such as those on distributed energy resources (DER) and community choice aggregators (CCAs). And his viewpoints have been influential at the California Public Utilities Commission.

I read these two statements in his blog post and came to very different conclusions:

“(I)ndividuals and businesses make investments in response to those policies, and many come to believe that they have a right to see those policies continue indefinitely.”

Yes, the investor-owned utilities and certain large-scale renewable firms have come to believe that they have a right to see their subsidies continue indefinitely. California utilities are receiving subsidies amounting to $5 billion a year due to poor generation portfolio management. You can see this in your bill as the PCIA. This dwarfs the purported subsidy to rooftop solar. Why no call for reforming how we recover these costs from ratepayers and for forcing shareholders to carry their share of the burden? (And I’m not even bringing up the other big source of rate increases: excessive transmission and distribution investment.)

Why wasn’t there a similar cry against bailing out PG&E in not one but TWO bankruptcies? Both PG&E and SCE have clearly relied on the belief that they deserve subsidies to continue staying in business. (SCE has ridden along behind PG&E in both cases to gain the spoils.) The focus needs to be on ALL players here if these types of subsidies are to be called out.

“(T)he reactions have largely been about how much subsidy rooftop solar companies in California need in order to stay in business.”

We are monitoring two very different sets of media, then. I see much more about the ability of consumers to gain a modicum of energy independence from large monopolies that compel those consumers to buy their service with no viable escape. I also see reactions about how this will directly undermine our ability to reduce GHG emissions. This directly conflicts with the CEC’s Title 24 building standards, which use rooftop solar to achieve net zero energy and electrification in new homes.

Along with the effort to kill CCAs, the apparent proposed solution is to concentrate all power procurement into the hands of three large utilities who haven’t demonstrated a particularly adroit ability at managing their portfolios. Why should we put all of our eggs into one (or three) baskets?

Borenstein continues to rely on an incorrect construct for the cost savings created by rooftop solar: one that uses short-run hourly wholesale market prices instead of the long-term costs of constructing new power plants, transmission rates derived from average embedded costs instead of full incremental costs, and an assumption that distribution investment is not avoided by DER, contrary to the methods used in the utilities’ own rate filings. He also appears to ignore the benefits of co-locating generation and storage at the customer’s site, a setup that becomes much less financially viable for a customer who adds storage but remains connected to the grid.

Yes, there are problems with the current compensation model for NEM customers, but we also need to recognize our commitments to customers who made investments believing they were doing the right thing. We need to acknowledge the savings they created for all of us and the push they gave to lower technology costs. We need to recognize the full set of values these customers provide, and how the current electric market structure is too broken to properly compensate what we want customers to do next: add more storage. Yet the real first step is to start at the source of the problem, out-of-control utility costs that ratepayers are forced to bear entirely.

What “Don’t Look Up” really tells us

The movie Don’t Look Up has been getting “two thumbs up” from a certain political segment for speaking truth as they see it. An existential threat from a comet is used metaphorically to portray the resistance to taking climate change risk seriously. After watching the film, I have a somewhat different takeaway, one that speaks a different truth to the viewers who found the message most resonant. Instead of blaming our political system, we should draw a lesson that we can act on collectively.

Don’t Look Up reveals several errors and blind spots in how the scientific and activist communities communicate with the public and influence decision making. The first is a mistaken belief that the public is actually interested in scientific study beyond parlor tricks. The second is believing that people will act solely on shrill warnings from scientists acting as high priests. The third (which isn’t addressed in the film) is failing to fully acknowledge what people fear they may lose by responding to these calls for change. Instead, these communities should reconsider what they focus on and how they communicate.

The movie opens with the first error: the astronomers’ long-winded attempt to explain all of the analysis that went into their prediction. Most people don’t see how science has any direct influence on their lives; how is digging up dinosaurs or discovering the outer bounds of the universe relevant to everyday living? It’s a failure of our education system, but we can’t correct that in time to help now. Over the last several years the message on climate change has shifted to highlight the apparent effects on storms and heat waves, but someone living in Kansas doesn’t see how rising sea levels will affect them. A long explanation about the mechanics and methods just loses John Q. Public (although a small cadre is fascinated), and they tune out. It’s hard to be disciplined with a simple message when you find the deeper complexity interesting, but that’s what it will take.

Shrill warnings have never been well received, no matter the call. We see that today with the resistance to measures to suppress the COVID-19 pandemic. James Hansen at NASA first raised the alarm about climate change in the 1980s but he was largely ignored due to his righteousness and arrogance in public. He made a serious error in stepping well outside of his expertise to assert particular solutions. The public has always looked to who they view as credible, regardless of their credentials, for guidance. Academics have too often assumed that they deserve this respect simply because they have “the” credential. That much of the public views science as mysterious with little more basis than religion does not help the cause. Instead, finding the right messengers is key to being successful.

Finally, and importantly overlooked in the film, a call to action of this magnitude requires widespread changes in behaviors and investments. People generally have worked hard to achieve what they have and are averse to changes that may severely erode their financial well-being. For example, as many as 1 in 5 private sector jobs are tied to automobiles and fossil fuel production. One might extoll the economic benefits of switching to renewable electricity, but workers and investors in these sectors are uncertain about their futures, with no clear pathways to share in this new prosperity. Without a truly valid means of resolving these risks beyond the tired “retraining” shibboleth, this core and its sympathizers will resist meaningful change.

Effecting these solutions will likely require sacrifice from those who benefit from the changes. Pointing to benefit-cost analyses that rely on a “faux” hypothetical transaction to justify these solutions really is no better than the wealthy asserting that they deserve to keep most of their financial gains simply because that’s how the market works. Compensating owners of these assets and making what appear to be inefficient decisions to maintain impacted communities may seem unfair for a variety of reasons, but we need to overcome the biases embedded in our favored solutions to move forward.

The scale economy myth of electric utilities

Vibrant Clean Energy released a study showing that inclusion of large amounts of distributed energy resources (DERs) can lower the costs of achieving 100% renewable energy. Commenters here have criticized the study for several reasons, some with reference to the supposed economies of scale of the grid.

While economies of scale might hold for individual customers in the short run, the data I’ve been evaluating for the PG&E and SCE general rate cases aren’t necessarily consistent with that notion. I’ve already discussed here the analysis I conducted in both the CAISO and PJM systems showing marginal transmission costs that are twice the current transmission rates. The rapid rise in those rates over the last decade is consistent with this finding. If economies of scale held for the transmission network, those rates should be stable or falling.

On the distribution side, the added investment reported in those two utilities’ FERC Form 1 filings is not consistent with the marginal costs used in the GRC filings. For example, the added investment reported in Form 1 for final service facilities (transformers, services, and meters, or TSM) appears to be almost 10 times larger than what is implied by the marginal costs and new customers in the GRC filings. And again, the average cost of distribution is rising while energy and peak loads have been flat across the CAISO area since 2006. The utilities have repeatedly asked for $2 billion each GRC for distribution “growth,” but given that load has been flat (and even declined in 2019 and 2020), there is likely a significant amount of stranded distribution infrastructure. If that incremental investment is instead for replacement (which is not consistent with either their depreciation schedules or their assertions about the true life of their facilities and the replacement costs within their marginal cost estimates), then they are grossly underestimating the future replacement cost of facilities, which means they are underestimating the true marginal costs.
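The consistency check described above can be illustrated with a few lines of Python. The numbers below are hypothetical placeholders, not drawn from the Form 1 or GRC filings, chosen only to show the roughly tenfold gap being described.

```python
# Compare distribution additions reported in FERC Form 1 against the additions
# implied by GRC marginal costs and customer growth. All values are placeholders.

form1_tsm_additions = 900e6         # hypothetical: reported final-service (TSM) additions, $/yr
marginal_cost_per_customer = 1_200  # hypothetical: GRC marginal TSM cost, $ per new customer
new_customers_per_year = 75_000     # hypothetical: annual customer growth

implied_additions = marginal_cost_per_customer * new_customers_per_year  # $90M/yr
ratio = form1_tsm_additions / implied_additions
print(f"Recorded additions are {ratio:.0f}x the marginal-cost-implied additions")
```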

I can see a future replacement liability right outside my window. The electric poles were installed by PG&E more than 60 years ago and are likely reaching the end of their lives. I can see the next step being to underground the lines at a cost of $15,000 to $25,000 per house, based on the ongoing mobilehome conversion program and the typical Rule 20 undergrounding project. Deferring that cost is a valid DER value. We will have to replace many services over the next several decades. And that doesn’t address the higher-voltage parts of the system.

We have a counterexample to the supposed monopoly in the cable/internet system. I have at least two competing options where I live. The cell phone network also turned out not to be a natural monopoly. In an area where the PG&E and Merced ID service territories overlap, there are parallel distribution systems. The claim of a “natural monopoly” is more likely a legal fiction that protects the incumbent utility and is simpler for local officials to manage when awarding franchises.

If the claim of a natural monopoly in electricity were true, then the distribution rate components for SCE and PG&E should be much lower than those for smaller munis such as Palo Alto or Alameda. But that’s not the case. The cost advantages for SMUD and Roseville are larger than can be explained simply by differences in cost of capital. The Division/Office of Ratepayer Advocates commissioned a study by Christensen Associates for PG&E’s 1999 GRC that found the optimal utility size to be about 500,000 customers. (PG&E’s witness, a professor at UC Berkeley, inadvertently confirmed the results, and Commissioner Richard Bilas, a Ph.D. economist, noted this in his proposed decision, which was never adopted because it was short-circuited by restructuring.) That finding implies that the true marginal cost of a customer and associated infrastructure is higher than the average cost. The likely counterbalancing cause is an organizational diseconomy of scale that overwhelms the technological benefits of size.

Finally, generation no longer shows the economies of scale that once dominated the industry. The modularity of combined cycle plants and the efficiency improvements of CTs started the industry down the road toward the efficiency of “smallness.” Solar plants are similarly modular. The reason additional solar generation appears so low cost is that much of it comes from adding another set of panels to an existing plant while avoiding additional transmission interconnection costs (which are the lion’s share of the costs that create what economies of scale do exist).

The VCE analysis is a holistic, long-term analysis. It relies on long-run marginal costs, not the short-run MCs that will never converge on the LRMC given how the electricity system is regulated. The study should be evaluated in that context.

A new agricultural electricity use forecast method holds promise for water use management

Agricultural electricity demand is highly sensitive to water availability. Under “normal” conditions, the State Water Project (SWP) and Central Valley Project (CVP), as well as other surface water supplies, are key sources of irrigation water for many California farmers. Under dry conditions, these water sources can be sharply curtailed, even eliminated, at the same time irrigation requirements are heightened. Farmers then must rely more heavily on groundwater, which requires greater energy to pump than surface water, since groundwater must be lifted from deeper depths.

Over extended droughts, like between 2012 to 2016, groundwater levels decline, and must be pumped from ever deeper depths, requiring even more energy to meet crops’ water needs. As a result, even as land is fallowed in response to water scarcity, significantly more energy is required to water remaining crops and livestock. Much less pumping is necessary in years with ample surface water supply, as rivers rise, soils become saturated, and aquifers recharge, raising groundwater levels.

This surface-groundwater dynamic results in significant year-to-year variation in agricultural electricity sales. Yet PG&E has assigned the agricultural customer class a revenue responsibility based on the assumption that “normal” water conditions will prevail every year, without accounting for how inevitable variations from these conditions will affect rates and revenues for agricultural and other customers.

This assumption results in an imbalance in revenue collection from the agricultural class that does not correct itself even over long time periods, harming agricultural customers most in drought years, when they can least afford it. Analysis presented by M.Cubed on behalf of the Agricultural Energy Consumers Association (AECA) in the 2017 PG&E General Rate Case (GRC) demonstrated that overcollections can be expected to exceed $170 million over two years of typical drought conditions, with an expected overcollection of $34 million over any two-year period. This collection imbalance also increases rate instability for other customer classes.

Figure-1 compares the forecasted loads used to set rates in the annual ERRA Forecast proceedings (and in GRC Phase 2 every three years) against actual recorded sales, for both the agricultural class and the system as a whole, from 1995 to 2019. Notably, the single largest forecasting error for system-wide load was a sales overestimate of 4.5% in 2000 and a shortfall of 3.7% in 2019, while agricultural mis-forecasts range from an under-forecast of 39.2% in the midst of an extended drought in 2013 to an over-forecast of 18.2% in 1998, one of the wettest years on record. Load volatility in the agricultural sector is extreme in comparison to other customer classes.

Figure-2 shows the cumulative error caused by inadequate treatment of agricultural load volatility over the last 25 years. An unbiased forecasting approach would show a cumulative error of roughly zero over time. The error in PG&E’s system-wide forecast has largely balanced out, even though the utility’s load pattern has shifted from significant growth over the first 10 years to stagnation and even decline. PG&E apparently has been able to adapt its forecasting methods for other classes relatively well over time.

The accumulated error for agricultural sales forecasting tells a different story. Over a quarter century the cumulative error reached 182%, nearly twice the annual sales of the agricultural class. This cumulative error has consequences for the relative share of revenue collected from agricultural customers compared to other customers, with growers significantly overpaying over the period.
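The bias diagnostic behind Figure-2 is easy to reproduce for any customer class: compute each year’s percent forecast error and accumulate it. Here is a minimal sketch with made-up numbers standing in for the recorded data.

```python
import numpy as np

# Illustrative (not recorded) agricultural sales and forecasts, GWh
actual   = np.array([4200, 4900, 5600, 6100, 4800, 4300])
forecast = np.array([4400, 4300, 4200, 4400, 4600, 4500])

pct_error = (actual - forecast) / forecast      # positive = under-forecast
cumulative_error = np.cumsum(pct_error)

print(np.round(pct_error * 100, 1))             # year-by-year error, %
print(round(cumulative_error[-1] * 100, 1))     # should hover near 0 if unbiased
```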

Agricultural load forecasting can be revised to better address how variations in water supply availability drive agricultural load. Most importantly, the final forecast should be constructed from a weighted average of forecasted loads under normal, wet and dry conditions. The forecast of agricultural accounts also must be revamped to include these elements. In addition, the load forecast should include the influence of rates and a publicly available data source on agricultural income such as that provided by the USDA’s Economic Research Service.

The Forecast Model Can Use An Additional Drought Indicator and Forecasted Agricultural Rates to Improve Its Forecast Accuracy

The more direct relationship for determining agricultural class energy needs is between the allocation of surface water via the state and federal water projects and the need to pump groundwater when adequate surface water is not available from the SWP and federal CVP. The SWP and CVP are critical to California agriculture because little precipitation falls during the state’s Mediterranean-climate summer, so snow-melt runoff must be stored and delivered via aqueducts and canals. Surface water availability, therefore, is the primary determinant of agricultural energy use, while precipitation and related factors, such as drought, are secondary causes in that they are only partially responsible for surface water availability. Other factors, such as state and federal fishery protections, substantially restrict water availability and project pumping operations, greatly limiting surface water deliveries to San Joaquin Valley farms.

We found that the Palmer Drought Severity Index (PDSI) is highly correlated with contract allocations for deliveries through the SWP and CVP, reaching 0.78 for both, as shown in Figure-3. (Note that the correlation between the current and lagged PDSI is only 0.34, which indicates that both variables can be included in the regression model.) Of even greater interest and relevance to PG&E’s forecasting approach, the correlation of the previous year’s PDSI with project water deliveries is still substantial: 0.56 for the SWP and 0.53 for the CVP. This relationship can also be seen in Figure-3, as the PDSI line appears to lead changes in project water deliveries. The strong relationship with the lagged indicator is not surprising, as both the California Department of Water Resources and the U.S. Bureau of Reclamation account for remaining storage and for streamflow that is a function of soil moisture and Sierra aquifers.
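A sketch of the kind of regression this implies, with both the current and lagged PDSI as regressors, follows. The data file and column names are placeholders, and the full model would also include water deliveries, rainfall, and the other variables discussed here.

```python
import pandas as pd
import statsmodels.api as sm

# Placeholder input: annual agricultural sales, PDSI, and average ag rate
df = pd.read_csv("ag.csv")               # columns: year, ag_gwh, pdsi, ag_rate
df["pdsi_lag"] = df["pdsi"].shift(1)      # prior water year's PDSI
df = df.dropna()

# Because corr(pdsi, pdsi_lag) is only ~0.34, both terms can be included
# without serious collinearity.
X = sm.add_constant(df[["pdsi", "pdsi_lag", "ag_rate"]])
fit = sm.OLS(df["ag_gwh"], X).fit()
print(fit.summary())                      # check signs and significance
```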

Further, comparing the inverse of water delivery allocations (i.e., the undelivered contract shares) to annual agricultural sales, we can see in Figure-4 how agricultural load has risen since 1995 as the contract allocations delivered have fallen (i.e., as the undelivered amount has risen). The decline in contract allocations is only partially related to the amount of precipitation and runoff available. In 2017, which was among the wettest years on record, SWP contractors received only 85% of their allocations, while the SWP delivered 100% every year from 1996 to 1999. The CVP has reached a 100% allocation only once since 2006, while it regularly delivered above 90% prior to 2000. Changes in contract allocations dictated by regulatory actions are clearly a strong driver in the growth of agricultural pumping loads, though ongoing drought also appears to be key. The combination of the forecasted PDSI and the lagged PDSI of the just-concluded water year can be used to capture this relationship.

Finally, a “normal” water year is rare, occurring in only 20% of the last 40 years. Over time, the best representation of both surface water availability and the electrical load dependent on it is a weighted average across the probabilities of the different water year conditions.
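A minimal sketch of that weighting, using the 20% “normal” frequency noted above and illustrative (not actual) wet/dry probabilities and loads:

```python
# Weight forecasts built under each water-year condition by historical frequency
scenarios = {                 # (probability, forecast agricultural sales, GWh)
    "wet":    (0.40, 4_300),  # illustrative
    "normal": (0.20, 5_000),  # ~20% of the last 40 years
    "dry":    (0.40, 6_200),  # illustrative
}

expected_sales = sum(p * gwh for p, gwh in scenarios.values())
print(f"Probability-weighted agricultural sales: {expected_sales:,.0f} GWh")
```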

Proposed Revised Agricultural Forecast

We prepared a new agricultural load forecast for 2021 implementing the changes recommended herein. In addition, the forecasted average agricultural rate was added as an explanatory variable and proved to be statistically significant. The accounts forecast was developed using most of the same variables as the sales forecast, to reflect the similar drivers of both sales and accounts.

Figure-5 compares the performance of AECA’s proposed model to PG&E’s model filed in its 2021 General Rate Case. The backcasted values from the AECA model have a correlation coefficient of 0.973 with recorded values,[1] while PG&E’s sales forecast methodology has a correlation of only 0.742.[2] Unlike in PG&E’s model, almost all of the parameter estimates are statistically significant at the 99% confidence level, with only summer and fall rainfall being insignificant.[3]

AECA’s accounts forecast model reflects similar performance, with a correlation of 0.976. The backcast and recorded data are compared in Figure-6. For water managers, this chart shows how new groundwater wells are driven by a combination of factors such as water conditions and electricity prices.




Advanced power system modeling need not mean more complex modeling

A recent article by E3 and Form Energy in Utility Dive calls for more granular temporal modeling of the electric power system to better capture the constraints of a fully renewable portfolio and the requirements for supporting technologies such as storage. The authors have identified the correct problem: most current models use a “typical week” of loads that is an average of historic conditions, with probabilistic representations of unit availability. This approach fails to capture the “tail” conditions where renewables and currently available storage are likely to be insufficient.

But the answer is not a full-blown, hour-by-hour model of the entire year with many permutations of the many possibilities. These system production simulation models already take too long to run a single scenario due to the complexity of this giant “transmission machine.” Adding the required uncertainty will cause these models to run “in real time,” as some modelers describe it.

Instead, a separate analysis should first identify the conditions under which renewables plus current-technology storage are unlikely to meet demand. These include droughts that limit hydropower, extreme weather, and extended weather patterns that limit renewable production. These conditions can then be input into the current models to assess how the system responds, as in the screening sketch below.
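Here is a rough sketch of what such a screening step might look like, with placeholder arrays standing in for actual multi-year hourly load and renewable profiles (real data would be correlated weather-driven series, not random draws).

```python
import numpy as np

# Scan a multi-year hourly record for days where renewable output plus a
# deliverable storage budget falls short of load; only those days go to the
# detailed production simulation.
hours = 8760 * 10
rng = np.random.default_rng(0)
load_mw = 30_000 + 8_000 * rng.random(hours)     # placeholder load shape
renewables_mw = 30_000 + 25_000 * rng.random(hours)  # placeholder wind+solar+hydro
storage_mwh_per_day = 60_000                      # deliverable storage budget per day

hourly_deficit = (load_mw - renewables_mw).clip(min=0)
daily_deficit = hourly_deficit.reshape(-1, 24).sum(axis=1)   # MWh per day
tail_days = np.flatnonzero(daily_deficit > storage_mwh_per_day)
print(f"{tail_days.size} days flagged for detailed hour-by-hour study")
```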

The two important fixes, which have always been problems in these models, concern energy-limited resources and unit commitment algorithms. Both are complex problems, and these models have not done well at scheduling seasonal hydropower pondage storage or at deciding which units to commit to meet high demand several days ahead. (These problems are also why relying solely on hourly bulk power prices doesn’t give an accurate measure of the true market value of a resource.) But focusing on these two problems is much easier than trying to incorporate the full range of uncertainty for all 8,760 hours of each year for at least a decade into the future.

We should not confuse precision with accuracy. The current models can be quite precise on specific metrics, such as unit efficiency at different load points, but they can be inaccurate because they don’t capture the effect of load and fuel price variations. We should not be trying to achieve spurious precision through more granular modeling; we should be focusing on accuracy in the narrow situations that matter.

Calculating the risk reduction benefits of closing Germany’s nuclear plants

Max Aufhammer at the Energy Institute at Haas posted a discussion of this recent paper reviewing the benefits and costs of the closure of much of the German nuclear fleet after the Fukushima accident in 2011.

Reading the paper quickly, I don’t see how the risk of a nuclear accident was computed, but it looks like the value per MWh was taken from a different paper. So I did a quick back-of-the-envelope calculation of the benefit of avoiding the consequences of an accident. This paper estimates a risk of an accident once every 3,704 reactor-operating years (which is very close to a calculation I made a few years ago). (There are other estimates showing significant risk as well.) For 10 German reactors, this translates to 0.27% per year.

However, this is not a one-off risk but a cumulative risk over time, as noted in the referenced study. This is akin to the seismic risk on the Hayward Fault that threatens the Delta levees, which is estimated at 62% over the next 30 years. For the German plants, this cumulative probability over 30 years is 8.4%. Using the Fukushima damages noted in the paper, this represents $25 to $63 billion. Assuming an average annual output of 7,884 GWh, the benefit from risk reduction ranges from $11 to $27 per MWh.
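A back-of-the-envelope version of the probability calculation, using the cited accident frequency; simple compounding here yields about 8%, in the same range as the 8.4% figure above, which may reflect a slightly different assumption.

```python
# Cumulative accident probability for a fleet, from a per-reactor-year frequency
accidents_per_reactor_year = 1 / 3_704
reactors = 10
annual_prob = reactors * accidents_per_reactor_year     # ~0.27% per year

years = 30
cumulative_prob = 1 - (1 - annual_prob) ** years        # ~8% over 30 years
print(f"Annual: {annual_prob:.2%}, cumulative over {years} yrs: {cumulative_prob:.1%}")
```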

The paper appears to make a further error in using only the short-run nuclear fuel cost of $10 per MWh as the avoided cost created by closing the plants. Additional avoided costs include the capital additions that accrue with refueling, plus plant labor and O&M costs. For Diablo Canyon, I calculated in PG&E’s 2019 ERRA proceeding that these costs were close to an additional $20 per MWh. I don’t know the values for the German plants, but clearly they should be significant.

Nuclear vs. storage: which is in our future?

Two articles with contrasting views of the future showed up in Utility Dive this week. The first was an opinion piece by an MIT professor referencing a study he coauthored that compares the costs of an electricity network where renewables supply more than 40% of generation against one using advanced nuclear power. However, the report’s analysis relied on two key assumptions:

  1. Current battery storage costs are about $300/kWh and will remain static into the future.
  2. Current nuclear technology costs about $76 per MWh and advanced nuclear technology can achieve costs of $50 per MWh.

The second article immediately refuted the first assumption in the MIT study. A report from BloombergNEF found that average battery storage prices fell to $156/kWh in 2019, and projected further decreases to $100/kWh by 2024.

The reason this price drop is so important is that, as the MIT study pointed out, renewables will produce excess power at certain times and underproduce during other peak periods. MIT assumes that system operators will have to curtail renewable generation during low-load periods and run gas plants to fill in at the peaks. (MIT pointed to California curtailing about 190 GWh in April. However, that added only 0.1% to the CAISO’s total generation cost.) But if storage is so cheap, along with inexpensive solar and wind, additional renewable capacity can be built to store power for the early-evening peaks. This could free us from having to plan around system peak periods and let us focus largely on energy production.

MIT’s second assumption is not validated by recent experience. As I posted earlier, the about-to-be-completed Vogtle nuclear plant will cost ratepayers in Georgia and South Carolina about $100 per MWh, more than 30% above the assumption used by MIT. PG&E withdrew its relicensing request for Diablo Canyon because the utility projected the cost to be $100 to $120 per MWh. Another recent study found that nuclear costs worldwide exceed $100/MWh and that it takes an average of a decade to finish a plant.

Another group at MIT issued an earlier report intended to revive interest in nuclear power. I’m not sure why MIT is so focused on this issue while continuing to rely on data and projections that are clearly outdated or wrong, but it does have one of the leading departments in nuclear science and engineering. It’s sad to see such a prestigious institution allowing its economic self-interest to cloud its vision of the future.

What do you see in the future of relying on renewables? Is it economically feasible to build excess renewable capacity that can supply enough storage to run the system the rest of the day? How would the costs of this system compare to nuclear power at actual current costs? Will advanced nuclear power drop costs by 50%? Let us know your thoughts and add any useful references.

Our responsibility to our children


Greta Thunberg’s speech at the UN has sparked a discussion about our deeper responsibilities to future generations. When we made the huge effort to fight World War II, did we ask “how much will this cost?” We face the same kind of existential threat and should make the same commitment. We can do this cost-effectively, and avoid the most stupid decisions, but asking whether the effort is worth it should now be beyond question. We will have to consider how to compensate those who have invested their money or their livelihoods in activities that we now recognize as damaging to the climate, and that will be an added cost to the rest of us. (And we may see this as unfair.) But we really have no choice.

J. Frank Bullit posted on “Fox and Hounds” a sentiment that reflects the core of opposition to such actions:

“What if the alarmists are wrong, yet there is no counter to the demands of enacting economic and energy policies we might regret?”

So our energy costs might be a bit higher than they would have been otherwise, but we get a cleaner environment in exchange. And even now, renewable energy sources are competing well on a dollar-for-dollar basis.

On the other hand, if the “alarmists” are correct, the consequences have a significant probability of being catastrophic for our civilization as well as our environment. We all carry insurance on our houses for events that we consider highly unlikely. We pay that extra cost to gain assurance that we will recover our investment if such unlikely events occur. These are costs we willingly accept because we know the “alarmists” have a point about the risk of house fires. We should take the same attitude toward climate change assessments. It’s not possible to prove that there is no risk, or even that the risk is tiny. And the data trends are sufficiently consistent with the forecasts to date that the probabilities weigh more toward likelihood than not.

Unless opponents can show that the consequences of the alarmists being wrong are worse than the climate change threat, we have to act to mitigate that risk in much the same way as we do when we buy house insurance. (And by the way, we don’t have another “house” to move to…)