Category Archives: Energy innovation

Emerging technologies and institutional change to meet new challenges while satisfying consumer tastes

Electric vehicles as the next smartphone

In 2006 a cell phone was a portable phone that could send text messages. It was convenient but not transformative. No one seriously thought about dropping their landlines.

And then the iPhone arrived. Almost overnight consumers began to use it like their computer. They emailed, took pictures and sent them to their friends, then searched the web, then played complex games and watched videos. Social media exploded and multiple means of communicating and sharing proliferated. Landlines (and cable) started to disappear, and personal computer sales slowed. (And as a funny side effect, the younger generation seemed to quit talking on the phone.) The cell phone went from a means of one-on-one communication to a multi-faceted electronic tool that has become our pocket computer.

The share of the U.S. population owning a smartphone has grown from 35% to 85% over the last decade. We could achieve similar penetration rates for electric vehicles (EVs) if we rethink and repackage how we market EVs, positioning them as our indispensable “energy management tool.” EVs can offer much more than conventional cars, and we need to facilitate and market these advantages to sell them much faster.

EV pickups with spectacular features are about to be offered. These EVs may be a game changer for a reason different from what those focused on transportation policy have in mind–they offer households the opportunity for near-complete energy independence. These pickups have enough storage capacity to power a house for several days and are designed to supply power to many uses beyond driving. Combined with solar panels installed both at home and in business lots, the trucks can carry energy back and forth between locations. This has the added benefit of increasing reliability (local distribution outages are 15 times more likely than system-level ones) and resilience in the face of increasingly extreme events.

This all can happen because cars are parked 90-95% of the time. That offers power-source availability in the same range as conventional generation, and the dispersion created by a portfolio of smaller sources further enhances that availability. Another important fact is that the total power capacity of the autos on California’s roads is over 2,000 gigawatts. Compared to California’s peak load of about 63 gigawatts, this is more than 30 times the capacity we need. If we simply get to 20% EV penetration, of which half have interconnective control abilities, we’ll have three times the capacity needed to meet our highest demands. There are other energy management issues, but solving them is feasible once we realize there is no real physical constraint.
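The arithmetic behind that claim can be sketched in a few lines. The fleet capacity and peak load figures come from the text above; the penetration and control-share values are the stated assumptions, not measured data:

```python
# Back-of-envelope check on EV fleet capacity vs. California peak load.
fleet_capacity_gw = 2000      # total power capacity of autos on CA roads (from text)
peak_load_gw = 63             # approximate CAISO peak load (from text)

ev_penetration = 0.20         # assume 20% of the fleet is electric
v2g_share = 0.50              # assume half of EVs have interconnective controls

available_gw = fleet_capacity_gw * ev_penetration * v2g_share
print(f"Dispatchable EV capacity: {available_gw:.0f} GW")
print(f"Multiple of peak load: {available_gw / peak_load_gw:.1f}x")
```

Even under these modest assumptions, the controllable EV fleet alone exceeds three times the state's highest demand.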

Further, used EV batteries can serve as stationary storage, either in homes or at renewable generation sites, to defer transmission investments. EVs can also transport energy from solar panels between work and home.

The difference between these EVs and the current models is akin to the difference between flip phones and smartphones. One is a single-function device; the latter we use to manage our lives. The marketing of EVs should shift course to emphasize these added benefits that are not possible with a conventional vehicle. The barriers are not technological, but regulatory (battery warranties and utility interconnection rules).

As part of this EV marketing focus, automakers should follow two strategies, both drawn from smartphones. The first is that EV pickups should be leased as a means of keeping model features current. Leasing facilitates rolling out industry standards quickly (like installing the latest Android update) and adding other, yet more attractive features. It also allows for more environmentally friendly disposal of obsolete EVs: materials can be more easily recycled, and batteries no longer usable for driving (generally below 70% capacity) can be repurposed for stand-alone storage.

The second is to offer add-on services. Smartphone companies offer media streaming, data management and all sorts of other features beyond simple communication. Automakers can offer demand management to lower, or even eliminate, utility bills, and onboard appliance and space-conditioning management so a homeowner need not install a separate system that is not easily updated.

Part 2: A response to “Is Rooftop Solar Just Like Energy Efficiency?”

Severin Borenstein at the Energy Institute at Haas has written another blog post asserting that solar rooftop rates are inefficient and must be changed radically. (I previously responded to an earlier post.) When looking at the efficiency of NEM rates, we need to look carefully at several elements of the electricity market and the overall efficiency of utility ratemaking. When we do, we can come to a very different conclusion.

I filed testimony in the NEM 3.0 rulemaking last month where I calculated the incremental cost of transmission investment for new generation and the reduction in the CAISO peak load that looks to be attributable to solar rooftop.

  • Using FERC Form 1 and CEC powerplant data, I calculated that the incremental cost of transmission is $37/MWH. (And this is conservative due to a couple of assumptions I made.) Interestingly, I had done a similar calculation for AEP in the PJM interconnect and also came up with $37/MWH. This seems to be a robust value in the right neighborhood.
  • Load growth in California took a distinct change in trend in 2006, just as solar rooftop installations gained momentum. I found a 0.93 correlation between this change in trend and the amount of rooftop capacity installed. Using a simple trend, I calculated that the CAISO load decreased 6,000 MW with the installation of 9,000 MW of rooftop solar. Looking at the 2005 CEC IEPR forecast, the peak reduction could be as large as 11,000 MW. CAISO also estimated in 2018 that rooftop solar displaced $2.6 billion in transmission investment.

When we look at the utilities’ cost to acquire renewables and add in the cost of transmission, we see that the claim that grid-scale solar is so much cheaper than residential rooftop isn’t valid. The “green” market price benchmark used to set the PCIA shows that the average new RPS contract price was still $92/MWH in 2016 and $74/MWH in 2017. These prices generally were for 30-year contracts, so the appropriate metric for a NEM investment is a comparison against the vintage of RPS contracts signed in the year the rooftop project was installed. For 2016, adding in the transmission cost of $37/MWH, the comparable value is $129/MWH; for 2017, $111/MWH. In 2016, the average retail rates were $149/MWH for SCE, $183/MWH for PG&E and $205/MWH for SDG&E. (Note that PG&E’s rate had jumped $20/MWH in 2 years, while SCE’s had fallen $20/MWH.) In a “rough justice” way, the value of the energy displaced by rooftop solar was comparable to the retail rates, which reflect the value of power to a customer, at least for NEM 1.0 and 2.0 customers. Rooftop solar was not “multiples” of grid-scale solar.
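The comparison above reduces to simple arithmetic. All figures are in $/MWh and taken from the text:

```python
# Vintage RPS contract prices plus the incremental transmission adder,
# compared against average retail rates. All values in $/MWh, from the text.
transmission_adder = 37

rps_price = {2016: 92, 2017: 74}
retail_2016 = {"SCE": 149, "PG&E": 183, "SDG&E": 205}

for year, price in rps_price.items():
    comparable = price + transmission_adder
    print(f"{year}: grid-scale comparable value = ${comparable}/MWh")

# The 2016 comparable value falls within the range of 2016 retail rates above.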

These customers also took on investment risk. I calculated the payback period for a couple of customers around 2016 and found that a positive payback depended on utility rates rising at least 3% a year. This was not a foregone conclusion at the time, because retail rates had actually been falling up to 2013 and new RPS contract prices were falling as well. No one was proposing to guarantee that these customers recover their investments if they made a mistake. Calling their current benefits unwarranted is hubris that ignores the flip side of investment risk–that investors who make a good, efficient decision should reap the rewards. (We can discuss whether the magnitude of those benefits is fully warranted, but that’s a different discussion, about distribution of income and wealth, not efficiency.)
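A minimal sketch of that payback sensitivity is below. The installation cost and first-year savings are purely illustrative placeholders, not the figures from the actual customer calculations; the point is only that the payback year moves materially with the assumed rate escalation:

```python
# Hypothetical payback sketch: cumulative bill savings vs. install cost,
# under different assumed retail-rate escalation rates. All dollar inputs
# are illustrative, not the values from the testimony described above.
def years_to_payback(install_cost, first_year_savings, escalation, horizon=25):
    """Return the year cumulative savings first exceed cost, or None."""
    cumulative = 0.0
    savings = first_year_savings
    for year in range(1, horizon + 1):
        cumulative += savings
        if cumulative >= install_cost:
            return year
        savings *= 1 + escalation   # savings grow with the retail rate
    return None

for esc in (0.00, 0.03, 0.05):
    print(f"escalation {esc:.0%}: payback in year {years_to_payback(20000, 1200, esc)}")
```

With flat rates the hypothetical system takes years longer to pay back than with 3% annual escalation, which is the risk the 2016 customers bore.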

Claiming that grid costs are a fixed, immutable amount simply isn’t valid. SCE has been trying unsuccessfully to enact a “grid charge” on this claim since at least 2006. Intervening parties have successfully shown that grid costs in fact are responsive to reductions in demand. In addition, moving to a grid charge creates a “ratchet effect” in revenue requirements: once a utility puts infrastructure in place, it faces no risk for poor investment decisions. Meanwhile the utility can place its costs into ratebase and raise rates, which then raises the ratchet level on the fixed charge. One of the most important elements of a market economy that leads to efficient investment is that investors face the risk of not earning a return; that forces them to make prudent decisions. A “ratcheted” grid charge removes this risk even further for utilities. If we claim to be creating an “efficient” pricing policy, then we need to consider all sides of the equation.

The point that 50% of rooftop solar generation is used to offset internal use is important–while it may not be exactly like energy efficiency, it has the most critical element of energy efficiency. That additional requirements are needed to implement it is of second-order importance. Otherwise we would have to treat demand response that uses dispatch controls as similarly distinct from EE. Those programs also require additional equipment and different rates, yet we sum those energy savings with LED bulbs and refrigerators.

An important element of the remaining 50% that is exported is that almost all of it is absorbed by neighboring houses and businesses on the same local circuit. Little of the power flows past the transformer at the top of the circuit, so the primary voltage and transmission systems are largely unused. The excess capacity that remains on the system is now available for other customers to use. Whether investors should be able to recover their investment at the same annual rate in the face of excess capacity is an important question–in a competitive industry, the effective recovery rate would slow.

Finally, public purpose program (PPP) and wildfire mitigation costs are special cases that cannot simply be rolled up with other utility costs.

  • The majority of PPP charges are a form of tax intended for income redistribution. That function is admirable, but it illustrates the standard problem of relying on a form of sales tax to finance such programs: a sales tax discourages purchases, which reduces the revenues available for income transfers, which then forces an increase in the sales tax. It’s time to stop financing the CARE and FERA programs from utility rates.
  • Wildfire costs are created by a very specific subclass of customers who live in certain rural and wildlands-urban interface (WUI) areas. Those customers already received largely subsidized line extensions to install service and now we are unwilling to charge them the full cost of protecting their buildings. Once the state made the decision to socialize those costs instead, the costs became the responsibility of everyone, not just electricity customers. That means that these costs should be financed through taxes, not rates.

Again, if we are trying to make efficient policy, we need to look at the whole. It is inefficient to finance these public costs through rates, and it is incorrect to assert that an inefficient subsidy is created when a set of customers avoids paying these rate components.

Part 1: A response to “Rooftop Solar Inequity”

Severin Borenstein at the Energy Institute at Haas has plunged into the politics of devising policies for rooftop solar systems. I respond to two of his blog posts in two parts here, with Part 1 today. I’ll start by posting a link to my earlier blog post that addresses many of the assertions here in detail. I then respond to several additional issues.

First, the claim of rooftop solar subsidies rests on two fallacious premises. The first is that it double counts the stranded cost charge from poor portfolio procurement and management that I reference above and discussed at greater length in my blog post. Take out that cost and the “subsidy” falls substantially. The second is the assertion that solar hasn’t displaced load growth. In reality, utility loads and peak demand have been flat since 2006 and even declining over the last three years. Even the peak last August was 3,000 MW below the 2017 record, which in turn was only a few hundred MW above the 2006 peak. Rooftop solar has been a significant contributor to this decline. Displaced load means displaced distribution investment and gas-fired generation (even though the IOUs have justified several billion dollars in added investment with forecasted “growth” that didn’t materialize). I have documented those phantom load growth forecasts in testimony at the CPUC since 2009. The cost-of-service studies supposedly showing these subsidies assume a static world in which nothing has changed with the introduction of rooftop solar. Nothing could be further from the truth.

Second, TURN and Cal Advocates have been pushing against decentralization of the grid for decades, going back to restructuring. Decentralization means that the forums at the CPUC become less important and their influence declines. They have fought against CCAs for the same reason. They’ve been fighting rooftop solar almost since its inception as well. Yet they have failed to push for the incentives enacted in AB57 for the IOUs to manage their portfolios, or to control the exorbitant contract terms and overabundance of early renewable contracts signed by the IOUs that are the primary reason for the exorbitant growth in rates.

Finally, there are many self-citations to studies, along with the claim that the authors have no financial interest. E3 has significant financial interests in studies paid for by utilities, including the California IOUs. While they do many good studies, they also have produced studies with certain key shadings of assumptions that support IOUs’ positions. As for studies from the CPUC, commissioners frequently direct the expected outcome of these. The results from the Customer Choice Green Book in 2018 are a case in point. The CPUC knows where its political interests are and acts to satisfy those interests. (I have witnessed this first hand while being in the room.) Unfortunately, many of the academic studies I see on these cost allocation issues don’t accurately reflect the various financial and regulatory arrangements and have misleading or incorrect findings. This happens simply because academics aren’t involved in the “dirty” process of ratemaking and can’t know these things from a distance. (The best academic studies are those done by people who worked in the bowels of those agencies and then went into academia.)

We are at a point where we can start seeing the additional benefits of decentralized energy resources. The most important may be the resilience to be gained by integrating DERs with EVs to ride out local distribution outages (which are 15 times more likely to occur than generation and transmission outages), once the utilities agree to enable this technology, which already exists. Another may be the erosion of the political power wielded by large centralized corporate interests. (There was a recent paper showing how increasing market concentration has led to large wealth transfers to corporate shareholders since 1980.) And this debate has highlighted the elephant in the room–how utility shareholders have escaped cost responsibility for decades, which has led to our expensive, wasteful system. We need to be asking this fundamental question–where is the shareholders’ skin in this game? “Obligation to serve” isn’t a blank check.

Transmission: the hidden cost of generation

The cost of transmission for new generation has become a more salient issue. The CAISO found that distributed generation (DG) had displaced $2.6 billion in transmission investment by 2018. The value of displaced transmission can be determined from the utilities’ filings with FERC and the accounting for new power plant capacity. Using similar methodologies for California and Kentucky, I calculate an incremental cost of $37 per megawatt-hour (3.7 cents per kilowatt-hour) in both independent system operators (ISOs). This added cost roughly doubles the cost of utility-scale renewables compared to distributed generation.

When solar rooftop displaces utility generation, particularly during peak load periods, it also displaces the associated transmission that interconnects the plant and transmits that power to the local grid. And because power plants compete with each other for space on the transmission grid, the reduction in bulk power generation opens up that grid to send power from other plants to other customers.

The incremental cost of new transmission is determined by the installation of new generation capacity, as transmission delivers power to substations before it is distributed to customers. This incremental cost represents the long-term value of displaced transmission. When setting rates for rooftop solar in the NEM tariff, this amount should be used to calculate the net benefits for net energy metered (NEM) customers, who avoid the need for additional transmission investment by providing local resources rather than remote bulk generation.

  • In California, transmission investment additions were collected from the FERC Form 1 filings for 2017 to 2020 for PG&E, SCE and SDG&E. The Wholesale Base Total Revenue Requirements submitted to FERC were collected for the three utilities for the same period. The average fixed charge rate for the Wholesale Base Total Revenue Requirements was 12.1% over that period. That fixed charge rate is applied to the average of the transmission additions to determine the average incremental revenue requirement for new transmission. The plant capacity installed in California from 2017 to 2020 is calculated from the California Energy Commission’s “Annual Generation – Plant Unit” data set. (This metric is conservative because (1) it includes the entire state, while CAISO serves only 80% of the state’s load and the three utilities serve a subset of that, and (2) the list of “new” plants includes a number of repowered natural gas plants at sites with existing transmission. A more refined analysis would find an even higher incremental transmission cost.)

Based on this analysis, the appropriate marginal transmission cost is $171.17 per kilowatt-year. Applying the average CAISO load factor of 52%, the marginal cost equals $37.54 per megawatt-hour.
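The conversion from a $/kW-year marginal cost to a $/MWh energy value uses the system load factor, as described above. The small difference from the $37.54 figure in the text comes from rounding of the inputs:

```python
# Convert a marginal transmission cost in $/kW-year to $/MWh via load factor.
marginal_cost_kw_yr = 171.17   # $/kW-year, from the FERC Form 1 analysis above
load_factor = 0.52             # average CAISO load factor, from the text

hours_per_year = 8760
mwh_per_mw_year = hours_per_year * load_factor   # energy per MW of peak demand
cost_per_mwh = marginal_cost_kw_yr * 1000 / mwh_per_mw_year
print(f"${cost_per_mwh:.2f}/MWh")
```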

  • In Kentucky, Kentucky Power is owned by American Electric Power (AEP), which operates in the PJM ISO. PJM has a market in financial transmission rights (FTRs) that values relieving grid congestion in the short term. AEP files network service rates each year with PJM and FERC. The rate more than doubled from 2018 to 2021, an average annual increase of 26%.

Based on the addition of 22,907 megawatts of generation capacity in PJM over that period, the incremental cost of transmission was $196 per kilowatt-year, nearly four times the current AEP transmission rate. This equates to about $37 per megawatt-hour (3.7 cents per kilowatt-hour).

Outages highlight the need for a fundamental revision of grid planning

The salience of outages due to distribution problems, such as those that occurred with record heat in the Pacific Northwest and California’s public safety power shutoffs (PSPS), highlights a need for a change in perspective on addressing reliability. In California, customers are 15 times more likely to experience an outage due to distribution issues than generation issues (really, transmission outages: August 2020 was the first time California experienced a true generation shortage requiring imposed rolling blackouts, as withholding in 2001 doesn’t count). Even the widespread blackouts in Texas in February 2021 are attributable in large part to problems beyond a generation shortage.

Yet policymakers and stakeholders focus almost solely on increasing reserve margins to improve reliability. If we instead looked at the most comprehensive means of improving reliability in the way that matters to customers, we’d probably find that distributed energy resources are a much better fit. To the extent that DERs can relieve distribution-level loads, we gain at both levels, not just at the system level with added bulk generation.

This approach first requires a change in how resource adequacy is defined and modeled, looking from the perspective of the customer meter. It will require a more extensive analysis of distribution circuits and of the ability of individual circuits to island and self-supply during stressful conditions. It also requires a better assessment of the conditions that lead to local outages. Increased resource diversity should improve the probability of availability as well. Current modeling of the benefits of regions leaning on each other depends largely on deterministic assumptions about resource availability. Instead we should be using probability distributions for resources and loads to assess overlapping conditions. An important aspect of reliability is that one hundred 10 MW generators, each with a 10% probability of outage, provide much more reliability than a single 1,000 MW generator with the same 10% outage rate, due to diversity. This fact is generally ignored in setting reserve margins for resource adequacy.
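The diversity point can be made precise with a quick probability calculation: compare the chance of losing more than 20% of capacity in a portfolio of one hundred independent 10 MW units against a single 1,000 MW unit, both with a 10% forced-outage rate:

```python
# Diversity benefit of many small units vs. one large unit.
from math import comb

n, p_out = 100, 0.10   # 100 independent 10 MW units, 10% outage rate each

# P(more than 20 of 100 units out simultaneously): binomial tail
p_big_loss_portfolio = sum(
    comb(n, k) * p_out**k * (1 - p_out)**(n - k) for k in range(21, n + 1)
)

# The single 1,000 MW unit loses all its capacity with probability 0.10
p_big_loss_single = p_out

print(f"Portfolio P(>20% capacity lost): {p_big_loss_portfolio:.5f}")
print(f"Single unit P(>20% capacity lost): {p_big_loss_single:.2f}")
```

The portfolio's chance of a deep capacity loss is orders of magnitude smaller, which is exactly the effect conventional reserve-margin calculations ignore.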

We also should consider shifting resource investment from bulk generation (and storage) where it has a much smaller impact on individual customer reliability to lower voltage distribution. Microgrids are an example of an alternative that better focuses on solving the real problem. Let’s start a fundamental reconsideration of our electric grid investment plan.

A new agricultural electricity use forecast method holds promise for water use management

Agricultural electricity demand is highly sensitive to water availability. Under “normal” conditions, the State Water Project (SWP) and Central Valley Project (CVP), as well as other surface water supplies, are key sources of irrigation water for many California farmers. Under dry conditions, these water sources can be sharply curtailed, even eliminated, at the same time irrigation requirements are heightened. Farmers then must rely more heavily on groundwater, which requires greater energy to pump than surface water, since groundwater must be lifted from deeper depths.

Over extended droughts, like between 2012 to 2016, groundwater levels decline, and must be pumped from ever deeper depths, requiring even more energy to meet crops’ water needs. As a result, even as land is fallowed in response to water scarcity, significantly more energy is required to water remaining crops and livestock. Much less pumping is necessary in years with ample surface water supply, as rivers rise, soils become saturated, and aquifers recharge, raising groundwater levels.

The surface-groundwater dynamic results in significant variations in year-to-year agricultural electricity sales. Yet, PG&E has assigned the agricultural customer class a revenue responsibility based on the assumption that “normal” water conditions will prevail every year, without accounting for how inevitable variations from these circumstances will affect rates and revenues for agricultural and other customers.

This assumption results in an imbalance in revenue collection from the agricultural class that does not correct itself even over long time periods, harming agricultural customers most in drought years, when they can least afford it. Analysis presented by M.Cubed on behalf of the Agricultural Energy Consumers Association (AECA) in the 2017 PG&E General Rate Case (GRC) demonstrated that overcollections can be expected to exceed $170 million over two years of typical drought conditions, with an expected overcollection of $34 million in a two-year period. This collection imbalance also increases rate instability for other customer classes.

Figure-1 compares the difference between the forecasted loads for agriculture and system-wide used to set rates in the annual ERRA Forecast proceedings (and in GRC Phase 2 every three years) and actual recorded sales for 1995 to 2019. Notably, the largest forecasting errors for system-wide load were an overestimate of 4.5% in 2000 and a shortfall of 3.7% in 2019, while agricultural mis-forecasts ranged from an under-forecast of 39.2% in the midst of an extended drought in 2013 to an over-forecast of 18.2% in 1998, one of the wettest years on record. Load volatility in the agricultural sector is extreme in comparison to other customer classes.

Figure-2 shows the cumulative error caused by inadequate treatment of agricultural load volatility over the last 25 years. An unbiased forecasting approach would reflect a cumulative error of zero over time. The error in PG&E’s system-wide forecast has largely balanced out, even though the utility’s load pattern has shifted from significant growth over the first 10 years to stagnation and even decline. PG&E apparently has been able to adapt its forecasting methods for other classes relatively well over time.

The accumulated error for agricultural sales forecasting tells a different story. Over a quarter century the cumulative error reached 182%, nearly twice the annual sales for the Agricultural class. This cumulative error has consequences for the relative share of revenue collected from agricultural customers compared to other customers, with growers significantly overpaying during the period.
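The cumulative-error diagnostic behind Figure-2 is simply a running sum of annual percentage errors. The series below is made up for illustration (only the two extreme values come from the text); the point is that an unbiased forecast's running total hovers near zero, while a one-sided bias accumulates:

```python
# Illustrative cumulative forecast-error diagnostic. Negative values are
# under-forecasts. These annual errors are synthetic, not PG&E's record.
forecast_errors_pct = [-39.2, 18.2, -12.0, -8.5, 4.1, -15.3, -22.7, 6.0]

cumulative = []
total = 0.0
for e in forecast_errors_pct:
    total += e
    cumulative.append(total)

print(f"Final cumulative error: {cumulative[-1]:.1f}%")
```

Applied to the actual 25-year record, this running sum reaches the 182% figure cited above.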

Agricultural load forecasting can be revised to better address how variations in water supply availability drive agricultural load. Most importantly, the final forecast should be constructed from a weighted average of forecasted loads under normal, wet and dry conditions. The forecast of agricultural accounts also must be revamped to include these elements. In addition, the load forecast should include the influence of rates and a publicly available data source on agricultural income such as that provided by the USDA’s Economic Research Service.

The Forecast Model Can Use An Additional Drought Indicator and Forecasted Agricultural Rates to Improve Its Forecast Accuracy

The more direct relationship to determine agricultural class energy needs is between the allocation of surface water via state and federal water projects and the need to pump groundwater when adequate surface water is not available from the SWP and federal CVP. The SWP and CVP are critical to California agriculture because little precipitation falls during the state’s Mediterranean-climate summer and snow-melt runoff must be stored and delivered via aqueducts and canals. Surface water availability, therefore, is the primary determinant of agricultural energy use, while precipitation and related factors, such as drought, are secondary causes in that they are only partially responsible for surface water availability. Other factors such as state and federal fishery protections substantially restrict water availability and project pumping operations greatly limiting surface water deliveries to San Joaquin Valley farms.

We found that the Palmer Drought Stress Index (PDSI) is highly correlated with contract allocations for deliveries through the SWP and CVP, reaching 0.78 for both of them, as shown in Figure-3. (Note that the correlation between the current and lagged PDSI is only 0.34, which indicates that both variables can be included in the regression model.) Of even greater interest and relevance to PG&E’s forecasting approach, the correlation of the previous year’s PDSI with project water deliveries is almost as strong: 0.56 for the SWP and 0.53 for the CVP. This relationship can also be seen in Figure-3, as the PDSI line appears to lead changes in project water deliveries. The strength of this lagged indicator is not surprising, as both the California Department of Water Resources and the U.S. Bureau of Reclamation account for remaining storage and streamflow, which are functions of soil moisture and aquifers in the Sierras.

Further, comparing the inverse of the water delivery allocations (i.e., the undelivered contract shares) to annual agricultural sales, we can see in Figure-4 how agricultural load has risen since 1995 as the contract allocations delivered have fallen (i.e., the undelivered amount has risen). The decline in contract allocations is only partially related to the amount of precipitation and runoff available. In 2017, which was among the wettest years on record, SWP contractors received only 85% of their allocations, while the SWP provided 100% every year from 1996 to 1999. The CVP has reached a 100% allocation only once since 2006, while it regularly delivered above 90% prior to 2000. Changes in contract allocations dictated by regulatory actions are clearly a strong driver in the growth of agricultural pumping loads, though ongoing drought also appears to be key. The combination of the forecasted PDSI and the lagged PDSI of the just-concluded water year can be used to capture this relationship.

Finally, a “normal” water year is in fact rare, occurring in only 20% of the last 40 years. Over time, the best representation of both surface water availability and the electrical load dependent on it is a weighted average across the probabilities of different water-year conditions.
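The weighted-average construction is straightforward. The probabilities and condition-specific loads below are illustrative placeholders (only the 20% frequency of "normal" years comes from the text); the real weights would come from the historical frequency of water-year types:

```python
# Probability-weighted agricultural load forecast across water-year types.
# The "normal" weight of 0.20 matches the historical frequency cited above;
# the other weights and the GWh forecasts are illustrative assumptions.
water_year_prob = {"wet": 0.35, "normal": 0.20, "dry": 0.45}
load_forecast_gwh = {"wet": 3800, "normal": 4300, "dry": 5200}

expected_load = sum(water_year_prob[w] * load_forecast_gwh[w]
                    for w in water_year_prob)
print(f"Probability-weighted agricultural load: {expected_load:.0f} GWh")
```

Because dry years carry both high probability and high load, the weighted forecast sits well above the "normal"-conditions forecast, which is exactly the bias the current method misses.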

Proposed Revised Agricultural Forecast

We prepared a new agricultural load forecast for 2021 implementing the changes recommended herein. In addition, the forecasted average agricultural rate was added as a variable, which proved statistically significant. The accounts forecast was developed using most of the same variables as the sales forecast, reflecting the similar drivers of both sales and accounts.

Figure-5 compares the performance of AECA’s proposed model to PG&E’s model filed in its 2021 General Rate Case. The backcasted values from the AECA model have a correlation coefficient of 0.973 with recorded values,[1] while PG&E’s sales forecast methodology has a correlation of only 0.742.[2] Unlike PG&E’s model, almost all of the parameter estimates are statistically significant at the 99% confidence level, with only summer and fall rainfall being insignificant.[3]

AECA’s accounts forecast model reflects similar performance, with a correlation of 0.976. The backcast and recorded data are compared in Figure-6. For water managers, this chart shows how new groundwater wells are driven by a combination of factors such as water conditions and electricity prices.




Why are real-time electricity retail rates no longer important in California?

The California Public Utilities Commission (CPUC) has been looking at whether and how to apply real-time electricity prices in several utility rate applications. “Real time pricing” involves directly linking the bulk wholesale market price from an exchange such as the California Independent System Operator (CAISO) to the hourly retail price paid by customers. Other charges, such as for distribution and public purpose programs, are added to this cost to reach the full retail rate. In Texas, many retail customers have their rates tied directly or indirectly to the ERCOT system market, which operates in a manner similar to CAISO’s. A number of economists have been pushing for this change as a key solution to managing California’s reliability issues. Unfortunately, the moment when it could have a meaningful impact may have passed.

In California, bulk power market costs are less than 20% of the total residential rate. Even if we throw in average capacity prices, the share only reaches 25%. In addition, California has a few needle peaks a year, compared to the much flatter, longer, more frequent near-peak loads in the East that result from differences in humidity. The CAISO market can go years without price deviations that are consequential on bills. For example, PG&E’s system average rate is almost 24 cents per kilowatt-hour (and residential rates are even higher). Yet the average price in the CAISO market has remained at 3 to 4 cents per kilowatt-hour since 2001, and the cost of capacity has actually fallen to about 2 cents. Even a sustained period of high prices such as occurred last August increases the average price by less than a penny–less than 5% of the total rate. The story was different in 2005, when this concept was first offered: the average rate was 13 cents per kilowatt-hour (and that was after the 4-cent adder from the energy crisis). In other words, the “variable” component just isn’t important enough to make a real difference.
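The decomposition is easy to verify with the figures in the text (all in cents per kWh). The point is that even a large swing in the wholesale energy price moves the all-in retail rate only modestly:

```python
# Rough decomposition of the retail rate, cents/kWh, from the text.
retail_rate = 24.0          # PG&E system average rate
wholesale_energy = 3.5      # typical CAISO average (3-4 cents)
capacity = 2.0              # approximate cost of capacity

variable_share = (wholesale_energy + capacity) / retail_rate
print(f"Bulk power share of retail rate: {variable_share:.0%}")

# A sustained high-price event adding ~1 cent to the average energy price:
bump_share = 1.0 / retail_rate
print(f"Impact of a 1-cent sustained price spike: {bump_share:.1%} of the rate")
```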

Ahmad Faruqui, a long-time advocate for dynamic retail pricing, wrote in a LinkedIn comment:

“Airlines, hotels, car rentals, movie theaters, sporting events — all use time-varying rates. Even the simple parking meter has a TOU rate embedded in it.”

It’s true that these prices vary with time, and electricity prices are headed that way if not there already. Yet these industries don’t have prices that change instantly with changes in demand and resource availability–the prices are often set months ahead based on expectations of supply and demand, much as traditional electricity TOU rates are set already. Additionally, in all of these industries, the price variations are substantially less than 100%. But for electricity, when the dynamic price changes matter, they can be up to 1,000%. I doubt any of these industries would use price variations that large, for practical reasons.

Rather than pointing out that this tool is available and that similar tools are used elsewhere, we should be asking why the tool isn’t being used. What’s so different about electricity, and are we making the right comparisons?

Instead, we might look at a different package to incorporate customer resources and load dynamism based on what has worked so far.

  • First is to have TOU pricing with predictable patterns. California largely already has this in place, and many customer groups have shown that they respond to this signal. In the Statewide Pilot on critical peak pricing (CPP), the bulk of the load shifting occurred due to the implementation of a base TOU rate, and the incremental CPP effect was relatively small.
  • Second, to enable more distributed energy resources (DER), offer fixed-price contracts akin to generation PPAs. Everyone then understands the terms of the contract, instead of the implicit arrangement of net energy metering (NEM) that now satisfies no one. It also means that we have to get away from the mistaken belief that short-run prices or marginal costs represent “market value” for electricity assets.
  • Third, for managing load, we should have robust demand management/response programs that target the truly manageable loads, and we should compensate customers based on the full avoided costs they create.

Can Net Metering Reform Fix the Rooftop Solar Cost Shift?: A Response

A response to Severin Borenstein’s post at UC Energy Institute where he posits a large subsidy flowing to NEM customers and proposes an income-based fixed charge as the remedy. Borenstein made the same proposal at a later CPUC hearing.

The CPUC is now considering reforming the current net energy metering (NEM) tariffs in the NEM 3.0 proceeding. And the State Legislature is considering imposing a change by fiat in AB 1139.

First, to frame this discussion: economists are universally guilty of a status quo bias in which we (since I’m one) too often assume that changing from the current physical and institutional arrangement is a “cost,” implicitly assuming that the current situation was arrived at via a relatively benign economic process. (The debate over reparations for slavery revolves around this issue.) The same is true for those who claim that NEM customers are imposing exorbitant costs on other customers.

There are several issues to be considered in this analysis.

1) Looking at the history of the NEM rate, the misalignment between the retail rates that compensate solar customers and the true marginal costs of providing service (which are much more than the hourly wholesale price–more on that later) is a recent development. When NEM 1.0 was established, residential rates were on the order of 15 cents/kWh and renewable power contracts were being signed at 12 to 15 cents/kWh, with transmission costs adding another 2 to 4 cents/kWh. This was the case through 2015; NEM 1.0 expired in 2016. NEM 2.0 customers were put on TOU rates with evening peaks, so their midday output is priced at off-peak rates while they pay higher on-peak rates for their evening usage. This despite the fact that the difference between peak and off-peak wholesale costs is generally on the order of a penny per kWh. (PG&E NEM customers also pay a $10/month fixed charge that is close to the service connection cost.) Calculating the net financial flows is more complicated and deserves a more careful look than a simple back-of-the-envelope calculation can capture.
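A simple hypothetical illustrates the TOU asymmetry. The rates and volumes below are made up for the sketch (they are not actual PG&E tariffs); only the $10 fixed charge comes from the post:

```python
# Hypothetical NEM 2.0 monthly bill: midday solar exports are credited
# at the off-peak rate while evening usage is billed at the on-peak
# rate. All rates and volumes are assumed for illustration.

off_peak_rate = 0.25   # $/kWh credit for midday exports (assumed)
on_peak_rate = 0.40    # $/kWh charged for evening usage (assumed)
fixed_charge = 10.0    # $/month, per the PG&E NEM fixed charge

export_kwh = 300       # midday solar exports in a month (assumed)
import_kwh = 350       # evening grid imports in a month (assumed)

bill = import_kwh * on_peak_rate - export_kwh * off_peak_rate + fixed_charge
net_kwh = import_kwh - export_kwh

print(f"monthly net bill: ${bill:.2f} for {net_kwh} kWh of net usage")
# The customer nets only 50 kWh of imports but pays far above the
# average rate for them, because exports earn the lower TOU price.
```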

2) If we’re going to dig into subsidies, the first place to start is with utility and power plant shareholders. If we use the current set of “market price benchmarks” (which are problematic, as I’ll discuss), over $2 billion of PG&E’s $5.2 billion in annual generation costs–about 40%–are “stranded costs” that are subsidies to shareholders for bad investments. In an efficient marketplace those shareholders would have to recover those costs through competitively set prices, as Jim Lazar of the Regulatory Assistance Project has pointed out. One might counter that those long-term contracts were signed on behalf of these customers, who now must pay for them. Of course, setting aside whether those contracts were properly evaluated, that’s also true for customers who have taken energy efficiency measures and for Elon Musk as he moves to Texas–yet we aren’t discussing whether they also deserve a surcharge to cover these costs. Beyond this, on an equity basis, NEM 1.0 customers at least made investments based on an expectation that the CPUC did not dissuade them of (we have documentation of how at least one county government was misled by PG&E on this issue in 2016). If IOUs are entitled to financial protection (and the CPUC has failed to enact the portfolio management incentive specified in AB 57 in 2002), then so are those NEM customers. If, on the other hand, we can reopen cost recovery of the poor portfolio management decisions that have created the incentive for retail customers to try to exit, THEN we can revisit those NEM investments. But until then, those NEM customers are no more subsidized than the shareholders.

3) What is the true “marginal cost”? First, we have the problem of temporal consistency between generation costs and transmission and distribution (T&D) grid costs. Economists love looking at generation because there’s an hourly (or subhourly) “short run” price that coincides nicely with economic theory and calculus. On the other hand, those darn T&D costs are lumpy and discontinuous. The “hourly” cost for T&D is basically zero, and the annual cost is not a whole lot better. The current methods debated in the General Rate Cases (GRCs) rely on aggregating piecemeal investments without looking at changing costs as a whole. Probably the most appropriate metric for T&D is the incremental change in total costs divided by the number of new customers. Given how fast utility rates have risen over the last decade, I’m pretty sure that the marginal cost per customer is higher than the average cost–indeed, when average cost is rising, marginal cost must by definition be higher. (And with static and falling loads, I’m not even sure how we would calculate the marginal cost per kWh. We can derive the marginal cost this way from FERC Form 1 data.) So how do we meld one marginal cost that might be on a 5-minute basis with another on a multi-year timeframe? There isn’t an easy answer, and “rough justice” can cut either way on what’s the truly appropriate approximation.
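The suggested T&D metric can be sketched with year-over-year figures of the kind reported in FERC Form 1. The numbers below are hypothetical placeholders, chosen only to show the mechanics:

```python
# Sketch: marginal T&D cost per customer as the incremental change in
# total cost divided by the change in customer count. Both the cost
# and customer figures here are hypothetical, not any utility's data.

costs = {2015: 4.0e9, 2020: 5.5e9}       # total T&D revenue requirement, $/yr
customers = {2015: 5.2e6, 2020: 5.4e6}   # customer counts

mc_per_customer = (costs[2020] - costs[2015]) / (customers[2020] - customers[2015])
avg_cost = costs[2020] / customers[2020]

print(f"marginal cost per added customer: ${mc_per_customer:,.0f}/yr")
print(f"average cost per customer:        ${avg_cost:,.0f}/yr")
# When total cost grows much faster than the customer base, the
# marginal cost per customer far exceeds the average cost.
```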

4) Even if the generation cost is measured subhourly, the current wholesale markets are poor reflections of those costs. Significant market distortions prevent prices from fully reflecting them. Unit commitment costs are often subsidized through out-of-market payments; reliability regulation forces investment that pushes capacity costs out of the hourly market; added incremental resources–whether for added load such as electrification or to meet regulatory requirements–are largely zero-operating-cost renewables, none of which rely on hourly market revenues for financial solvency; in California, generators face little or no bankruptcy risk, which allows them to underprice their bids; and on the flip side, capacity price adders such as ERCOT’s ORDC overprice the value of reliability to customers as a backdoor way to allow generators to recover investments through the hourly market. So what is the true marginal cost of generation? Pulling down CAISO prices doesn’t look like a good primary source of data.

We’re left with the question of what is the appropriate benchmark for measuring a “subsidy”? Should we also include the other subsidies that created the problem in the first place?

AB1139 would undermine California’s efforts on climate change

Assembly Bill 1139 is offered as a supposed solution to unaffordable electricity rates for Californians. Unfortunately, the bill would undermine the state’s efforts to reduce greenhouse gas emissions by crippling several key initiatives that rely on wider deployment of rooftop solar and other distributed energy resources.

  • It will make complying with the Title 24 building code, which requires solar panels on new houses, prohibitively expensive. The new code pushes new houses to net zero electricity usage. AB 1139 would create a conflict with existing state laws and regulations.
  • The state’s initiative to increase housing and improve affordability will be dealt a blow if new homeowners have to pay for panels that won’t save them money.
  • It will make transportation electrification and the Governor’s executive order requiring that all new vehicles sold by 2035 be EVs much more expensive, because it will make using EVs for grid charging much less economic and will reduce the amount of direct solar panel charging.
  • Rooftop solar was installed as a long-term resource based on a contractual commitment by the utilities to maintain pricing terms for at least the life of the panels. Undermining that investment will undermine the incentive for consumers to participate in any state-directed conservation program to reduce energy or water use.

If the State Legislature wants to reduce ratepayer costs by revising contractual agreements, the more direct solution is to direct renegotiation of RPS PPAs. For PG&E, these contracts represent more than $1 billion a year in excess costs, which dwarfs any actual subsidies to NEM customers. The fact is that rooftop solar displaced the very expensive renewables that the IOUs signed, and probably led to the cancellation of auctions around 2015 that would have further encumbered us.

The bill would force net energy metered (NEM) customers to pay twice for their power, once for the solar panels and again for the poor portfolio management decisions by the utilities. The utilities claim that $3 billion is being transferred from customers without solar to NEM customers. In SDG&E’s service territory, the claim is that the subsidy costs other ratepayers $230 per year, which translates to $1,438 per year for each NEM customer. But based on an average usage of 500 kWh per month, that implies each NEM customer is receiving a subsidy of $0.24/kWh compared to an average rate of $0.27 per kWh. In simple terms, SDG&E is claiming that rooftop solar saves almost nothing in avoided energy purchases and system investment. This contrasts with the presumption that energy efficiency improvements save utilities in avoided energy purchases and system investments. The math only works if one agrees with the utilities’ premise that they are entitled to sell power to serve an entire customer’s demand–in other words, solar rooftops shouldn’t exist.
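The back-of-the-envelope check on SDG&E’s claim takes three lines. All inputs come straight from the figures above:

```python
# Checking the implied per-kWh subsidy in SDG&E's cost-shift claim,
# using the figures cited in the post.

subsidy_per_nem_customer = 1438.0   # $/yr, the claimed transfer per NEM customer
monthly_usage_kwh = 500             # assumed average usage
average_rate = 0.27                 # $/kWh, average retail rate

annual_kwh = monthly_usage_kwh * 12
implied_subsidy_per_kwh = subsidy_per_nem_customer / annual_kwh

print(f"implied subsidy: ${implied_subsidy_per_kwh:.2f}/kWh "
      f"vs. average rate ${average_rate:.2f}/kWh")
# An implied ~$0.24/kWh subsidy against a $0.27/kWh rate says rooftop
# solar avoids almost no energy purchases or system investment.
```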

Finally, this initiative would squash a key motivator that has driven enthusiasm in the public for growing environmental awareness. The message from the state would be that we can only rely on corporate America to solve our climate problems and that we can no longer take individual responsibility. That may be the biggest threat to achieving our climate management goals.

ERCOT has the peak period scarcity price too high

The freeze and resulting rolling outages in Texas in February highlighted the unique structure of the power market there. Customers and businesses were left with huge bills that have little to do with actual generation expenses. This is a consequence of Texas’s attempt to fit an arcane interpretation of an economic principle: that generators should be able to recover their investments from sales in just a few hours of the year. The problem is that a basic accounting of those cash flows does not match the true value of the power in those hours.

The Electric Reliability Council of Texas (ERCOT) runs an unusual wholesale electricity market that supposedly relies solely on hourly energy prices to provide the incentive for new generation investment. In practice, however, ERCOT uses administratively set subsidies to create enough potential revenue to cover investment costs. Further, a closer examination reveals that this price adder is set too high relative to the actual consumer value of peak-period power. All of this leads to the conclusion that relying solely on short-run hourly prices as a proxy for the market value that accrues to new entrants is a misplaced metric.

The ERCOT market first relies on side payments to cover commitment costs (which create barriers to entry, but that’s a separate issue) and second, it transfers consumer value through the Operating Reserve Demand Curve (ORDC), which uses a fixed value of lost load (VOLL) in an arbitrary manner to create “opportunity costs” (more on that definition at a later time) so the market can have sufficient scarcity rents. This second price adder is at the core of ERCOT’s incentive system–energy prices alone are insufficient to support new generation investment. Yet ERCOT has ignored basic economics and set this value too high, based both on the alternatives available to consumers and on basic regional budget constraints.

I started with an estimate of the number of hours at which prices need the ORDC to be at the full VOLL of $9,000/MWh to recover the annual revenue requirement of a combustion turbine (CT) investment, based on the parameters we collected for the California Energy Commission. It turns out to be about 20 to 30 hours per year. Even if the cost in Texas is 30% less, that is still more than 15 hours annually, every single year on average. (That has not been happening in Texas to date.) Note that for other independent system operators (ISOs) such as the California ISO (CAISO), the price cap is $1,000 to $2,000/MWh.
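The mechanics of that estimate can be sketched as follows. The annualized CT fixed cost is an assumed round number in the general range of published CT cost estimates, not the actual CEC study parameter:

```python
# Sketch: hours per year at the full $9,000/MWh VOLL needed for a
# combustion turbine to recover its fixed costs. The CT cost figure
# is an assumption for illustration, not the CEC study's parameter.

voll = 9000.0          # $/MWh, the ERCOT ORDC cap
ct_fixed_cost = 220.0  # $/kW-yr annualized CT cost (assumed)

# Convert $/MWh to $/kWh earned per scarcity hour: voll / 1000
hours_needed = ct_fixed_cost / (voll / 1000.0)
print(f"scarcity hours needed per year: {hours_needed:.0f}")

# With a 30% lower Texas cost, the requirement is still well above 15.
hours_needed_tx = (ct_fixed_cost * 0.7) / (voll / 1000.0)
print(f"with 30% lower cost: {hours_needed_tx:.0f}")
```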

I then calculated the cost to a customer of instead using a home generator to meet load during those hours, assuming a generator life of 10 to 20 years. That cost should cap the VOLL for residential customers, since it represents their opportunity cost. The average unit costs about $200/kW and an expensive one about $500/kW. The resulting cost ranges from $3 to $5 per kWh, or $3,000 to $5,000/MWh. (If storage becomes more prevalent, this cost will drop significantly.) And that’s for customers who care about periodic outages–most just ride out a distribution system outage of a few hours with no backup. (Of course, if I experienced 20 hours a year of outages, I would get a generator too.) This calculation also ignores the added value of using the generator during distribution system outages created by events like the hurricanes that hit Texas every few years. That added use drives the cost down even further, making the $9,000/MWh ORDC adder appear even more distorted.
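The amortization behind that range can be sketched simply. The hours of use per year are my assumption, and fuel and O&M are ignored for simplicity:

```python
# Sketch: backup-generator opportunity cost per kWh, amortizing the
# capital cost over total hours of scarcity-period use. Hours per
# year are assumed; fuel and O&M costs are ignored for simplicity.

def backstop_cost_per_kwh(capex_per_kw, life_years, hours_per_year):
    """Capital cost spread over the generator's lifetime scarcity hours."""
    return capex_per_kw / (life_years * hours_per_year)

# A cheap unit with few run hours vs. an expensive unit with more use
cheap = backstop_cost_per_kwh(200.0, life_years=10, hours_per_year=7)
dear = backstop_cost_per_kwh(500.0, life_years=10, hours_per_year=10)

print(f"range: ${cheap:.2f} to ${dear:.2f} per kWh")
# Note the lever: running the generator for MORE hours (e.g. covering
# storm outages too) spreads the capital further and lowers the cost.
```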

The second calculation I did was to look at the cost of an extended outage, using the outages during Hurricane Harvey in 2017 as a benchmark event. Based on ERCOT and U.S. Energy Information Administration reports, it looks like 1.67 million customers were without power for 4.5 days. Using the Texas gross state product (GSP) of $1.9 trillion as reported by the St. Louis Federal Reserve Bank, I calculated the economic value lost over 4.5 days, assuming a 100% loss, at $1.5 billion. If we assume that the electricity outage is 100% responsible for that loss, the lost economic value is just under $5,000/MWh. This represents the budget constraint on willingness to pay to avoid an outage. In other words, the Texas economy can’t afford to pay $9,000/MWh.
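The last step of that calculation can be sketched by taking the $1.5 billion lost-output figure as given and dividing by the unserved energy. The average per-customer demand is my assumption, chosen for illustration:

```python
# Sketch: implied willingness to pay from the Harvey outage, taking
# the post's $1.5 billion lost-output figure as given. The average
# demand per customer is an assumption for illustration.

lost_value = 1.5e9           # $ of economic value lost over the outage
outage_customers = 1.67e6    # customers without power
outage_days = 4.5
avg_demand_kw = 1.7          # assumed average demand per customer, kW

# Unserved energy over the outage, in MWh
lost_mwh = outage_customers * avg_demand_kw * outage_days * 24 / 1000.0
voll_implied = lost_value / lost_mwh

print(f"unserved energy: {lost_mwh:,.0f} MWh")
print(f"implied willingness to pay: ${voll_implied:,.0f}/MWh")
# The result lands just under $5,000/MWh, well below the $9,000/MWh cap.
```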

The recent set of rolling blackouts in Texas provides another opportunity to update this budget constraint calculation in a different circumstance. This can be done by determining the reduction in electricity sales and the decrease in state gross product in the period.

Using two independent methods, I come up with an upper bound of $5,000/MWh, and likely much less. One commentator pointed out that ERCOT would not be able to achieve a sufficient planning reserve level at this price, but that statement rests on the premises that short-run hourly prices reflect full market values and will deliver the “optimal” resource mix. Neither is true.

This type of hourly pricing overemphasizes peak load reliability value and undervalues other attributes such as sustainability and resilience. These prices do not reflect the full incremental cost of adding new resources that deliver additional benefits during non-peak periods such as green energy, nor the true opportunity cost that is exercised when a generator is interconnected rather than during later operations. Texas has overbuilt its fossil-fueled generation thanks to this paradigm. It needs an external market based on long-run incremental costs to achieve the necessary environmental goals.