Why wasn’t there a similar cry against bailing out PG&E in not one but TWO bankruptcies? Both PG&E and SCE have clearly relied on the belief that they deserve subsidies to continue staying in business. (SCE has ridden along behind PG&E in both cases to gain the spoils.) The focus needs to be on ALL players here if these types of subsidies are to be called out.
“(T)he reactions have largely been about how much subsidy rooftop solar companies in California need in order to stay in business.”
We are monitoring two very different sets of media, then. I see much more about the ability of consumers to gain a modicum of energy independence from large monopolies that compel those consumers to buy their service with no viable escape. I also see reactions about how this will directly undermine our ability to reduce GHG emissions. This directly conflicts with the CEC’s Title 24 building standards that use rooftop solar to achieve net zero energy and electrification in new homes.
Yes, there are problems with the current compensation model for NEM customers, but we also need to recognize our commitments to customers who made investments believing they were doing the right thing. We need to acknowledge the savings that they created for all of us and the push they gave to lower technology costs. We need to recognize the full set of values that these customers provide and how the current electric market structure is too broken to properly compensate for what we want customers to do next – add more storage. Yet the real first step is to start at the source of the problem – out-of-control utility costs that ratepayers are forced to bear entirely.
The saying goes “No good deed goes unpunished.” The California Public Utilities Commission seems to have taken that motto to heart recently, and stands ready to penalize yet another group of customers who answered the clarion call to help solve the state’s problems by radically altering the rules for rooftop solar. Here are three case studies of recent CPUC actions that undermine incentives for customers to respond to future state initiatives: (1) farmers who invested in response to price incentives, (2) communities that pursued renewables more assertively, and (3) customers who installed solar panels.
Agriculture: Farmers have responded to past time of use (TOU) rate incentives more consistently and enthusiastically than any other customer class. Instead of being rewarded for their consistency, growers saw their peak price periods shift from the afternoon to the early evening. Growers face much more difficulty in avoiding pumping during that latter period.
Since TOU rates were introduced to agricultural customers in the late 1970s, growers have made significant operational changes in response to the differentials between peak and off-peak energy prices to minimize their on-peak consumption. These include significant investments in irrigation equipment, storage and conveyance infrastructure, and rescheduled labor deployment. The results of these expenditures are illustrated in the figure below, which shows how agricultural loads compare with system-wide load on a peak summer weekday in 2015, contrasting hourly loads to the load at the coincident peak hour. Both the smaller and larger agricultural accounts perform better than a range of representative rate schedules. Most notably, agriculture’s aggregate load shape on a summer weekday is inverted relative to system peak, i.e., the highest agricultural loads occur during the lowest system load periods, in contrast with other rate classes.
All other rate schedules shown in the graphic hit their annual peak on the same peak day within the then-applicable peak hours of noon to 6 p.m. In contrast, agricultural electricity demand is less than 80% of its annual peak during those high-load hours, with its daily peak falling outside the peak period. Agriculture’s avoidance of peak hours occurred during the summer growing season, which coincided with peak system demand – just as the Commission asked customers to do. The Commission could not ask for a better aggregate response to system needs; in contrast to the profiles for all of the other customer groups, agriculture has significantly contributed to shifting the peak to a lower-cost evening period.
The significant changes in peak-period timing and price differentials that the CPUC adopted increase uncertainty over whether large investments in high water-use efficiency microdrip systems – which typically cost $2,000 per acre – will be financially viable. Microdrip systems have been adopted widely by growers over the last several years; one recent study of tomato irrigation rates in Fresno County could not find any significant quantity of other types of irrigation systems. Such systems can be subject to blockages and leaks that are only detectable at start-up in daylight. Under the legacy TOU periods, growers could start overnight irrigation at 6 p.m. and still avoid peak energy use. In addition, workers could end their day shortly after 6 p.m. and avoid nighttime accidents. Shifting that load out of the peak period will be much more difficult with the peak period now ending after sunset.
Contrary to strong Commission direction to incent customers to avoid peak power usage, the shift in TOU periods has served to penalize, and reverse, the great strides the agricultural class has made benefiting the utility system over the last four decades.
Community choice aggregators: CCAs were created, among other reasons, to develop more renewable or “green” power. The state achieved its 2020 target of 33% in large part because of the efforts of CCAs fostered through offerings of 50% and 100% green power to retail customers. CCAs also have offered a range of innovative programs that go beyond the offerings of PG&E, SCE and SDG&E.
Nevertheless, reaching clean energy goals is made more difficult by the current structure of the Power Charge Indifference Adjustment (PCIA). The PCIA varies inversely with market prices – as market prices rise, the PCIA charged to CCAs and direct access (DA) customers decreases. For these customers, their overall retail rate is largely hedged against variation and risk through this inverse relationship.
The portfolios of the incumbent utilities are dominated by long-term contracts with renewables and capital-intensive utility-owned generation. For example, PG&E is paying a risk premium of nearly 2 cents per kilowatt-hour for its investment in these resources. These portfolios are now largely impervious to market price swings, but at a significant cost. The PCIA passes this hedge along to CCAs and DA customers, which discourages those customers from making their own long-term investments. (I wrote earlier about how this mechanism discouraged investment in new capacity for reliability purposes to provide resource adequacy.)
The legacy utilities are not in a position to acquire new renewables – they are forecasting falling loads and shrinking customer bases as CCAs grow. So the state cannot look to those utilities to meet California’s ambitious goals – it must entrust CCAs with that task. The CCAs are already game, with many of them offering much more aggressive “green power” options to their customers than PG&E, SCE or SDG&E.
But CCAs place themselves at greater financial risk under the current rules if they sign more long-term contracts. If market prices fall, they must bear the risk of overpaying for both the legacy utility’s portfolio and their own.
Solar net energy metered customers: Distributed solar generation installed under California’s net energy metering (NEM/NEMA) programs has mitigated and even eliminated load and demand growth in areas with established customers. This benefit supports protecting the investments that have been made by existing NEM/NEMA customers. Similarly, NEM/NEMA customers can displace investment in distribution assets. That distribution planners are not considering this impact appropriately is not an excuse for failing to value this benefit. For example, PG&E’s sales fell by 5% from 2010 to 2018, and other utilities had similar declines. Peak loads in the CAISO balancing authority reached their highest point in 2006, and the peak in August 2020 was 6% below that level.
Much of that decrease appears to have been driven by the installation of rooftop solar. The figure above illustrates the trends in CAISO peak loads in the set of top lines and the relationship to added NEM/NEMA installations in the lower corner. It also shows the CEC’s forecast from its 2005 Integrated Energy Policy Report as the top line. Prior to 2006, the CAISO peak was growing at an annual rate of 0.97%; after 2006, peak loads have declined on a 0.28% annual trend. Over the same period, solar NEM capacity grew by over 9,200 megawatts. The correlation factor or “R-squared” between the decline in peak load after 2006 and the incremental NEM additions is 0.93, with 1.0 being perfect correlation. Based on these calculations, NEM capacity has deferred 6,500 megawatts of capacity additions over this period. Comparing the “extreme” 2020 peak to the average conditions load forecast from 2005, the load reduction is over 11,500 megawatts. The obvious conclusion is that these investments by NEM customers have saved all ratepayers both reliability and energy costs while delivering zero-carbon energy.
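As a rough illustration of how a deferred-capacity figure like this can be derived, the sketch below extrapolates the pre-2006 growth trend and compares it to the actual 2020 peak. The 2006 peak value is an assumed round number for illustration, and treating the pre-2006 trend as the no-solar counterfactual is itself an assumption, so the result lands between the 6,500 and 11,500 megawatt estimates above rather than on either one.

```python
# Sketch: rough counterfactual CAISO peak absent rooftop solar.
# The trend rates and the "6% below 2006" figure come from the text;
# the 2006 peak level is an assumed illustrative value.
peak_2006_mw = 50_000                         # assumed 2006 record peak (MW)
pre_2006_growth = 0.0097                      # 0.97%/yr trend before 2006
actual_2020_mw = peak_2006_mw * (1 - 0.06)    # 2020 peak ~6% below 2006

years = 2020 - 2006
counterfactual_2020_mw = peak_2006_mw * (1 + pre_2006_growth) ** years
deferred_mw = counterfactual_2020_mw - actual_2020_mw

print(f"counterfactual 2020 peak: {counterfactual_2020_mw:,.0f} MW")
print(f"implied deferred capacity: {deferred_mw:,.0f} MW")
```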
The CPUC now has before it a rulemaking in which the utilities and some ratepayer advocates are proposing to not only radically reduce the compensation to new NEM/NEMA customers but also to change the terms of the agreements for existing ones.
One of the key principles of providing financial stability is setting prices and rates for long-lived assets such as solar panels and generation plants at the economic value when the investment decision was made, to reflect the full value of the assets that would have been acquired otherwise. If that new resource had not been built, either a ratebased generation asset would have been constructed by the utility, at a cost recovered over a standard 30-year period, or, more likely, additional PPAs would have been signed. Additionally, the utilities’ investments and procurement costs are shielded by the rule against retroactive ratemaking and by Public Utilities Code Section 728, protecting shareholders from any risk of future changes in state or Commission policies.
Utility customers who similarly invest in generation should be afforded at least the same assurances as the utilities with respect to protection from future Commission decisions that may diminish the value of those investments. Moreover, customers do not have the additional assurances of achieving a certain net income so they already face higher risks than utility shareholders for their investments.
Generators are almost universally afforded the ability to recover capital investments based on prices set for multiple years, and often for the economic life of their assets. Utilities are able to put investments in ratebase to be recovered at a fixed rate of return plus depreciation over several decades. Third-party generators are able to sign fixed price contracts for 10, 20, and even 40 years. Some merchant generators may choose to sell only into the short-term “hourly” market, but those plants are not committed to selling whenever the CAISO demands it. Generators are only required to do so when they sign a PPA with an assured payment toward investment recovery.
Ratepayers who make investments that benefit all ratepayers over the long term should be offered tariffs that provide a reasonable assurance of recovery of those investments, similar to the PPAs offered to generators. Ratepayers should be able to gain the same assurances as generators who sign long-term PPAs, or even utilities that ratebase their generation assets, that they will not be forced to bear all of the risk of investing in clean self-generation. These ratepayers should have some assurance over the 20-plus year expected life of their generation investment.
Pacific Gas & Electric has proposed to underground 10,000 miles of distribution lines to reduce wildfire risk, at an estimated cost of $1.5 to $2 million per mile. Meanwhile PG&E has installed fast-trip circuit breakers in certain regions to mitigate fire risks from line shorts and breaks, but the new settings have resulted in a vast increase in customer outages. CPUC President Batjer wrote in an October 25 letter to PG&E, “[s]ince PG&E initiated the Fast Trip setting practice on 11,500 miles of lines in High Fire Threat Districts in late July, it has caused over 500 unplanned power outages impacting over 560,000 customers.” She then ordered a series of compliance reports and steps. The question is whether undergrounding is the most cost-effective solution that can be implemented in a timely manner.
A viable alternative is microgrids, installed at either individual-customer or community scale. The microgrids could be operated to island customers or communities during high-risk periods, or to provide backup when circuit breakers cut power. Customers could continue to be served outside of either those periods of risk or weather-caused outages.
Because microgrids would be installed solely for the purpose of displacing undergrounding, the relative costs should be compared without considering any other services such as energy delivered outside of periods of fire risk or outages or increased green power.
I previously analyzed this question, but this updated assessment uses new data and presents a threshold at which either undergrounding or microgrids is preferred depending on the range of relative costs.
We start with the estimates of undergrounding costs. Along with PG&E’s stated estimate, PG&E’s 2020 General Rate Case includes a settlement agreement with a cost of $4.8 million per mile. That leads to an estimate of $15 to $48 billion. Adding in maintenance costs of about $400 million annually, the resulting revenue requirement translates to a rate increase of 3.2 to 9.3 cents per kilowatt-hour.
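The arithmetic behind that range can be sketched as follows. The 10% annual carrying rate and the ~55 TWh of retail sales are my illustrative assumptions, not PG&E filings, so the per-kWh results land near, but not exactly on, the 3.2 to 9.3 cent range above.

```python
# Back-of-envelope: annualized cost of undergrounding 10,000 distribution
# miles, spread over retail sales. The carrying rate and sales volume are
# illustrative assumptions, not PG&E filings.
miles = 10_000
cost_per_mile = (1.5e6, 4.8e6)   # $/mile, low and high ends (from text)
fixed_charge_rate = 0.10         # assumed annual carrying cost of capital
maintenance = 0.4e9              # ~$400M/yr added maintenance (from text)
sales_kwh = 55e9                 # assumed ~55 TWh/yr of retail sales

cents_per_kwh = []
for c in cost_per_mile:
    capital = miles * c                                  # total capital cost
    annual = capital * fixed_charge_rate + maintenance   # annual revenue req.
    cents_per_kwh.append(100 * annual / sales_kwh)
    print(f"capital ${capital / 1e9:.0f}B -> {cents_per_kwh[-1]:.1f} cents/kWh")
```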
For microgrid costs, the National Renewable Energy Laboratory published estimated ranges for both (1) commercial- or community-scale projects of 1 megawatt with 2.4 megawatt-hours of storage and (2) residential-scale systems of 7 kilowatts with 20 kilowatt-hours of storage. For larger projects, NREL shows a range of $2.07 to $2.13 million; we include an upper-end estimate of double NREL’s top figure. For residential, the range is $36,000 to $38,000.
Using this information, we can make comparisons based on the density of customers or energy use per mile of targeted distribution lines. In other words, we can determine whether it’s more cost-effective to underground distribution lines or to install microgrids based on how many customers or how much load is served on a line.
As a benchmark, PG&E’s average system density per mile of distribution line is 50.6 customers and 166 kW (or 0.166 MW).
The table below shows the relative cost effectiveness for undergrounding compared to community/commercial microgrids. If the load density falls below the value shown, microgrids are more cost effective. Note that the average density across the PG&E service area is 0.166 MW which is below any of the thresholds. That indicates that such microgrids should be cost-effective in most rural areas.
The next table shows the relative cost effectiveness for individual residential microgrids; again, if the customer density falls below the threshold shown, then microgrids save more costs. The average density for the PG&E service area is 51 customers per line-mile, which reflects the concentration of population in the Bay Area. At the highest undergrounding costs, microgrids are almost universally favored. In rural areas where density falls below 30 customers per line-mile, microgrids are less costly even at the lower undergrounding costs.
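The break-even logic in both tables reduces to a simple ratio: a microgrid wins when the customers (or megawatts) per line-mile fall below the undergrounding cost per mile divided by the microgrid cost per customer (or per megawatt). A linear sketch that ignores O&M and financing differences:

```python
# Break-even line density for microgrids vs. undergrounding: microgrids
# are cheaper when density falls below underground-$/mile divided by
# microgrid-$/customer (or $/MW). Linear sketch, no O&M or financing.
underground_costs = (1.5e6, 3.0e6, 4.8e6)   # $/mile (range from text)
res_microgrid = 37_000        # $/customer: ~7 kW + 20 kWh NREL estimate
community_microgrid = 2.1e6   # $/MW: 1 MW + 2.4 MWh NREL estimate

thresholds = {u: (u / res_microgrid, u / community_microgrid)
              for u in underground_costs}
for u, (cust, mw) in thresholds.items():
    print(f"${u / 1e6:.1f}M/mile -> break-even {cust:.0f} customers/mile "
          f"or {mw:.2f} MW/mile")
```

PG&E’s system averages of 51 customers and 0.166 MW per line-mile fall below most of these thresholds, consistent with the comparisons above.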
PG&E has installed two community-scale microgrids in remote locations so far, and is reportedly considering 20 such projects. However, PG&E fell behind on those projects, prompting the CPUC to reopen its procurement process in its Emergency Reliability rulemaking. In addition, PG&E has relied heavily on natural gas generation for these.
PG&E simply may not have the capacity to construct microgrids or install undergrounded lines in a timely manner solely through its own organization. PG&E already is struggling to meet its targets for converting privately-owned mobilehome park utility systems to utility ownership. A likely better choice is to rely on local governments, working in partnership with PG&E to identify the most vulnerable lines, to construct and manage these microgrids. The residential microgrids would be operated remotely. The community microgrids could be run under several different models, including either PG&E or municipal ownership.
Vibrant Clean Energy released a study showing that inclusion of large amounts of distributed energy resources (DERs) can lower the costs of achieving 100% renewable energy. Commenters here have criticized the study for several reasons, some with reference to the supposed economies of scale of the grid.
While economies of scale might hold for individual customers in the short run, the data I’ve been evaluating for the PG&E and SCE general rate cases aren’t necessarily consistent with that notion. I’ve already discussed here the analysis I conducted in both the CAISO and PJM systems showing marginal transmission costs that are twice the current transmission rates. The rapid rise in those rates over the last decade is consistent with this finding. If economies of scale did hold for the transmission network, those rates should be stable or falling.
On the distribution side, the added investment reported in those two utilities’ FERC Form 1 filings is not consistent with the marginal costs used in the GRC filings. For example, the added investment reported in Form 1 for final service lines (transformers, services and meters, or TSM) appears to be almost 10 times larger than what is implied by the marginal costs and new customer counts in the GRC filings. And again, the average cost of distribution is rising while energy and peak loads have been flat across the CAISO area since 2006. The utilities have repeatedly asked for $2 billion each GRC for “growth” in distribution, but given that load has been flat (and even declined in 2019 and 2020), there is likely a significant amount of stranded distribution infrastructure. If that incremental investment is instead for replacement – which is not consistent with their depreciation schedules, their assertions about the true life of their facilities, or the replacement costs within their marginal cost estimates – then they are grossly underestimating the future replacement cost of facilities, which means they are underestimating the true marginal costs.
I can see a future replacement liability right outside my window. The electric poles were installed by PG&E 60+ years ago and the poles are likely reaching the end of their lives. I can see the next step moving to undergrounding the lines at a cost of $15,000 to $25,000 per house based on the ongoing mobilehome conversion program and the typical Rule 20 undergrounding project. Deferring that cost is a valid DER value. We will have to replace many services over the next several decades. And that doesn’t address the higher voltage parts of the system.
We have a counterexample of a supposed monopoly in the cable/internet system. I have at least two competing options where I live. The cell phone network also turned out not to be a natural monopoly. In an area where the PG&E and Merced ID service territories overlap, there are parallel distribution systems. The claim of a “natural monopoly” more likely is a legal fiction that protects the incumbent utility and is simpler for local officials to manage when awarding franchises.
If the claim of natural monopolies in electricity were true, then the distribution rate components for SCE and PG&E should be much lower than for smaller munis such as Palo Alto or Alameda. But that’s not the case. The cost advantages for SMUD and Roseville are larger than can be explained simply by differences in cost of capital. The Division/Office of Ratepayer Advocates commissioned a study by Christensen Associates for PG&E’s 1999 GRC that showed that the optimal utility size was about 500,000 customers. (PG&E’s witness, a UC Berkeley professor, inadvertently confirmed the results, and Commissioner Richard Bilas, a Ph.D. economist, noted this in his proposed decision, which was never adopted because it was short-circuited by restructuring.) That finding means that the true marginal cost of a customer and associated infrastructure is higher than the average cost. The likely counterbalancing cause is an organizational diseconomy of scale that overwhelms the technological benefits of size.
Finally, generation no longer shows the economies of scale that dominated the industry. The modularity of combined cycle plants and the efficiency improvements of combustion turbines (CTs) started the industry down the road toward the efficiency of “smallness.” Solar plants are similarly modular. The reason additional solar generation appears so low cost is that much of it comes from adding another set of panels to an existing plant while avoiding additional transmission interconnection costs (which are the lion’s share of the costs that create what economies of scale do exist).
The VCE analysis takes a holistic, long-term view. It relies on long-run marginal costs, not the short-run MCs that will never converge on the LRMC given how the electricity system is regulated. The study should be evaluated in that context.
Severin Borenstein at the Energy Institute at Haas has plunged into the politics of devising policies for rooftop solar systems. I respond to two of his blog posts in two parts here, with Part 1 today. I’ll start by posting a link to my earlier blog post that addresses many of the assertions here in detail. I then respond to several additional issues.
First, the claim of rooftop solar subsidies rests on two fallacious premises. The first is double counting the stranded cost charge from poor portfolio procurement and management that I referenced above and discussed at greater length in my blog post. Take out that cost and the “subsidy” falls substantially. The second is that solar hasn’t displaced load growth. In reality, utility loads and peak demand have been flat since 2006 and even declining over the last three years. Even the peak last August was 3,000 MW below the record in 2017, which in turn was only a few hundred MW above the 2006 peak. Rooftop solar has been a significant contributor to this decline. Displaced load means displaced distribution investment and gas-fired generation (even though the IOUs have justified several billion dollars in added investment with forecasted “growth” that never materialized). I have documented those phantom load growth forecasts in testimony at the CPUC since 2009. The cost of service studies supposedly showing these subsidies assume a static world in which nothing has changed with the introduction of rooftop solar. Nothing could be further from the truth.
Second, TURN and Cal Advocates have been pushing against decentralization of the grid for decades, going back to restructuring. Decentralization means that the forums at the CPUC become less important and their influence declines. They have fought against CCAs for the same reason, and they have been fighting rooftop solar almost since its inception as well. Yet they have failed to push for the incentives enacted in AB 57 for the IOUs to manage their portfolios, or to control the exorbitant contract terms and overabundance of early renewable contracts signed by the IOUs that are the primary reason for the exorbitant growth in rates.
Finally, there are many self-citations to studies, along with claims that the authors have no financial interest. E3 has significant financial interests in studies paid for by utilities, including the California IOUs. While E3 does many good studies, it has also produced studies with key shadings of assumptions that support the IOUs’ positions. As for studies from the CPUC, commissioners frequently direct the expected outcome. The results from the Customer Choice Green Book in 2018 are a case in point. The CPUC knows where its political interests are and acts to satisfy those interests. (I have witnessed this first hand while being in the room.) Unfortunately, many of the academic studies I see on these cost allocation issues don’t accurately reflect the various financial and regulatory arrangements and reach misleading or incorrect findings. This happens simply because academics aren’t involved in the “dirty” process of ratemaking and can’t know these things from a distance. (The best academic studies are those done by people who worked in the bowels of those agencies and then moved to academia.)
We are at a point where we can start seeing the additional benefits of decentralized energy resources. The most important may be the resilience to be gained by integrating DERs with EVs to ride out local distribution outages (which are 15 times more likely to occur than generation and transmission outages) once the utilities agree to enable this technology that already exists. Another may be the erosion of the political power wielded by large centralized corporate interests. (There was a recent paper showing how increasing market concentration has led to large wealth transfers to corporate shareholders since 1980.) And this debate has highlighted the elephant in the room–how utility shareholders have escaped cost responsibility for decades which has led to our expensive, wasteful system. We need to be asking this fundamental question–where is the shareholders’ skin in this game? “Obligation to serve” isn’t a blank check.
The cost of transmission for new generation has become a more salient issue. The CAISO found that distributed generation (DG) had displaced $2.6 billion in transmission investment by 2018. The value of displacing transmission requirements can be determined from the utilities’ filings with FERC and the accounting for new power plant capacity. Using similar methodologies for calculating this cost in California and Kentucky, the incremental cost in both independent system operators (ISOs) is $37 per megawatt-hour, or 3.7 cents per kilowatt-hour. This added cost about doubles the cost of utility-scale renewables compared to distributed generation.
When solar rooftop displaces utility generation, particularly during peak load periods, it also displaces the associated transmission that interconnects the plant and transmits that power to the local grid. And because power plants compete with each other for space on the transmission grid, the reduction in bulk power generation opens up that grid to send power from other plants to other customers.
The incremental cost of new transmission is driven by the installation of new generation capacity, since transmission delivers power to substations before it is distributed to customers. This incremental cost represents the long-term value of displaced transmission. It should be used to calculate the net benefits for net energy metered (NEM) customers – who avoid the need for additional transmission investment by providing local resources rather than remote bulk generation – when setting rates for rooftop solar in the NEM tariff.
In California, transmission investment additions were collected from the FERC Form 1 filings for 2017 to 2020 for PG&E, SCE and SDG&E. The Wholesale Base Total Revenue Requirements submitted to FERC were collected for the three utilities for the same period. The average fixed charge rate for the Wholesale Base Total Revenue Requirements was 12.1% over that period. That fixed charge rate is applied to the average transmission additions to determine the average incremental revenue requirement for new transmission. The plant capacity installed in California from 2017 to 2020 is calculated from the California Energy Commission’s “Annual Generation – Plant Unit” data. (This metric is conservative because (1) it includes the entire state, while CAISO serves only 80% of the state’s load and the three utilities serve a subset of that, and (2) the list of “new” plants includes a number of repowered natural gas plants at sites with existing transmission. A more refined analysis would find an even higher incremental transmission cost.)
Based on this analysis, the appropriate marginal transmission cost is $171.17 per kilowatt-year. Applying the average CAISO load factor of 52%, the marginal cost equals $37.54 per megawatt-hour.
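The mechanics of that calculation can be reproduced as follows. The additions and capacity values below are placeholders scaled to match the $171.17 per kilowatt-year result, not the actual filed figures; what matters is the structure of the calculation.

```python
# Mechanics of the California estimate: annualize average transmission
# additions with the FERC fixed charge rate, divide by new plant capacity,
# then convert $/kW-yr to $/MWh with the system load factor. The additions
# and capacity figures are placeholders, not the filed values.
fixed_charge_rate = 0.121        # avg FERC wholesale fixed charge rate
additions = 1_414.6e6            # placeholder avg annual additions ($)
new_capacity_kw = 1_000_000      # placeholder new plant capacity (kW)
load_factor = 0.52               # average CAISO load factor (from text)

marginal_kw_yr = fixed_charge_rate * additions / new_capacity_kw
marginal_mwh = marginal_kw_yr / (8760 * load_factor) * 1000

print(f"${marginal_kw_yr:.2f}/kW-yr -> ${marginal_mwh:.2f}/MWh")
```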
In Kentucky, Kentucky Power is owned by American Electric Power (AEP), which operates in the PJM ISO. PJM has a market in financial transmission rights (FTRs) that values relieving congestion on the grid in the short term. AEP files network service rates each year with PJM and FERC. The rate more than doubled from 2018 to 2021, an average annual increase of 26%.
Based on the addition of 22,907 megawatts of generation capacity in PJM over that period, the incremental cost of transmission was $196 per kilowatt-year or nearly four times the current AEP transmission rate. This equates to about $37 per megawatt-hour (or 3.7 cents per kilowatt-hour).
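Two quick arithmetic checks on those figures. The growth-rate check follows directly from the doubling; the per-MWh conversion requires a load factor, and the ~60% used here is my assumption rather than an AEP or PJM figure.

```python
# Checks: a doubling over three years implies ~26%/yr compound growth,
# and $196/kW-yr converts to roughly $37/MWh. The ~60% load factor
# assumed for the conversion is mine, not AEP's or PJM's.
cagr = 2.0 ** (1 / 3) - 1                 # 2018 -> 2021 doubling
per_mwh = 196 / (8760 * 0.60) * 1000      # $/kW-yr to $/MWh at 60% LF
print(f"implied growth: {cagr:.0%}/yr, incremental cost: ${per_mwh:.0f}/MWh")
```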
Yet policymakers and stakeholders focus almost solely on increasing reserve margins to improve reliability. If we instead looked at the most comprehensive means of improving reliability in the way that matters to customers, we’d probably find that distributed energy resources are a much better fit. To the extent that DERs can relieve distribution-level loads, we gain at both levels, not just at the system level with added bulk generation.
This approach first requires a change in how resource adequacy is defined and modeled, looking from the perspective of the customer meter. It will require a more extensive analysis of distribution circuits and of the ability of individual circuits to island and self-supply during stressful conditions. It also requires a better assessment of the conditions that lead to local outages. Increased resource diversity should also improve the probability of availability. Current modeling of the benefits of regions leaning on each other depends on largely deterministic assumptions about resource availability. Instead we should be using probability distributions for resources and loads to assess overlapping conditions. An important aspect of reliability is that 100 10-MW generators, each with a 10% probability of outage, provide much more reliability than a single 1,000-MW generator with the same outage rate, due to diversity. This fact is generally ignored in setting reserve margins for resource adequacy.
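The diversity point can be quantified with a binomial calculation, assuming the small units fail independently: the single large unit loses all 1,000 MW 10% of the time, while the fleet of small units almost never loses a comparable share of its capacity at once.

```python
from math import comb

# 100 x 10 MW units vs. one 1,000 MW unit, each with a 10% forced outage
# rate. Independence of the small units' outages is the key assumption.
n, p = 100, 0.10

def prob_at_least(k):
    """P(at least k of the 100 small units are out simultaneously)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"single 1,000 MW unit fully out: {p:.0%} of the time")
print(f">=20% of small fleet (200 MW) out: {prob_at_least(20):.4f}")
print(f">=50% of small fleet (500 MW) out: {prob_at_least(50):.1e}")
```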
We also should consider shifting resource investment from bulk generation (and storage) where it has a much smaller impact on individual customer reliability to lower voltage distribution. Microgrids are an example of an alternative that better focuses on solving the real problem. Let’s start a fundamental reconsideration of our electric grid investment plan.
One proposed solution to reducing wildfire risk is for PG&E to put its grid underground. There are a number of problems with undergrounding, including increased maintenance costs, seismic and flooding risks, and problems with excessive heat (including exploding underground vaults). But ignoring those issues, the costs could be exorbitant – greater than anyone has really considered. An alternative is shifting rural service to microgrids. A high-level estimate shows that using microgrids instead could cost less than 10% of undergrounding the lines in regions at risk. The CPUC is considering a policy shift to promote this type of solution and has a new rulemaking on promoting microgrids.
We can put this in context by estimating costs from PG&E’s data provided in its 2020 General Rate Case, and comparing that to its total revenue requirements. That will give us an estimate of the rate increase needed to fund this effort.
PG&E has about 107,000 miles of distribution voltage wires and 18,500 miles of transmission lines. PG&E listed 25,000 miles of distribution lines as being in wildfire risk zones. If the risk is proportionate for transmission, that adds another 4,300 miles. PG&E has estimated that it would cost $3 million per mile to underground distribution (ignoring the higher maintenance and replacement costs), and undergrounding transmission can cost as much as $80 million per mile. Using estimates provided to the CAISO and picking the midpoint of the four- to ten-fold cost adder for undergrounding, $25 million per mile for transmission is reasonable. Based on these estimates it would cost $75 billion to underground distribution and $108 billion for transmission, for a total cost of $183 billion. Using PG&E’s current cost of capital, that translates into an annual revenue requirement of $9.1 billion.
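Those figures can be reproduced with simple arithmetic. This is a back-of-envelope sketch: the ~5% annual carrying charge is inferred from the $9.1 billion/$183 billion ratio rather than stated in the text.

```python
# Line miles at risk
dist_miles_at_risk = 25_000
trans_miles_at_risk = 18_500 * (25_000 / 107_000)  # proportionate risk share, ~4,300 miles

# Undergrounding cost assumptions from the text
dist_cost_per_mile = 3e6    # $3M per mile, distribution
trans_cost_per_mile = 25e6  # $25M per mile, midpoint transmission estimate

dist_total = dist_miles_at_risk * dist_cost_per_mile     # ~$75B
trans_total = trans_miles_at_risk * trans_cost_per_mile  # ~$108B
capital_total = dist_total + trans_total                 # ~$183B

carrying_charge = 0.05  # assumed annual cost-of-capital factor implied by $9.1B / $183B
annual_rev_req = capital_total * carrying_charge

print(f"Capital cost: ${capital_total / 1e9:.0f}B")
print(f"Annual revenue requirement: ${annual_rev_req / 1e9:.1f}B")
```

The computed totals land on the $75B/$108B/$183B figures above, and a roughly 5% carrying charge recovers the $9.1 billion annual revenue requirement.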
PG&E’s overall annual revenue requirement is currently about $14 billion, and PG&E has asked for increases that could add another $3 billion. Adding $9.1 billion would raise PG&E’s overall rates, which include both distribution and generation, by about two-thirds (~67%). It would double distribution rates.
This raises two questions:
Is this worth doing to protect properties in the affected urban-wildlands interface (UWI)?
Is there a less expensive option that can achieve the same objective?
On the first question, if we look at the assessed property value in the 15 counties most likely to be at risk (which include substantial amounts of land outside the UWI), the total assessed value is $462 billion. In other words, we would be spending 16% of the value of the property being protected. The annual revenue requirement would increase effective property taxes by over 250%, from 0.77% to 2.0% of assessed value.
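A quick check of these ratios, assuming the 16% figure refers to the distribution-only undergrounding cost (which is the interpretation that matches the numbers):

```python
assessed_value = 462e9       # total assessed property value, 15 at-risk counties
dist_underground = 75e9      # distribution undergrounding cost from above
annual_rev_req = 9.1e9       # annual revenue requirement from above

# Share of protected property value consumed by the capital spend
spend_share = dist_underground / assessed_value  # ~16%

# Annual revenue requirement expressed as an equivalent property-tax levy
equiv_levy = annual_rev_req / assessed_value     # ~2.0% of assessed value per year

print(f"Capital spend as share of assessed value: {spend_share:.0%}")
print(f"Equivalent annual levy: {equiv_levy:.2%}")
```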
That brings us to the second question. If we assume that load share is proportionate to the share of lines at risk, PG&E serves about 18,500 GWh in those areas. The equivalent cost per unit for undergrounding would be about $480 per MWh.
The average cost for a microgrid in California, based on a 2018 CEC study, is $3.5 million per megawatt. That translates to $60 per MWh at a typical load factor. In other words, a microgrid could cost one-eighth as much as undergrounding. The total equivalent cost compared to the undergrounding scenario would be $13 billion, which translates to an 8% increase in PG&E rates.
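The per-MWh comparison can be sketched from the figures above. The exact ratio depends on rounding in the quoted $480/MWh figure, so this is an approximate check, not a precise reconstruction.

```python
at_risk_load_mwh = 18_500 * 1_000  # 18,500 GWh served in at-risk areas
underground_annual = 9.1e9         # annual revenue requirement from above

# Equivalent per-unit cost of undergrounding, spread over at-risk load
underground_per_mwh = underground_annual / at_risk_load_mwh  # ~$490/MWh, ~$480 as quoted

microgrid_per_mwh = 60.0  # CEC-derived estimate quoted above
ratio = underground_per_mwh / microgrid_per_mwh

print(f"Undergrounding: ~${underground_per_mwh:.0f}/MWh")
print(f"Cost ratio vs. microgrids: ~{ratio:.0f}x")
```

The roughly 8:1 cost ratio is what drives the one-eighth comparison and the far smaller rate increase for the microgrid alternative.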
To what extent should we pursue undergrounding lines versus shifting to microgrid alternatives in the UWI areas? Should we encourage energy independence for these customers if they are on microgrids? How should we share these costs: should locals pay, or should the costs be spread over the entire customer base? And who should own these microgrids: PG&E, CCAs, or local governments?
This post accepts too easily the conventional industry “wisdom” that the only valid price signals come from short-term responses and effects. In general, storage and demand response are likely to lead to increased renewables investment even if GHG emissions increase in the short run. This post hints at that possibility, but it doesn’t make the point explicitly. (The only exception might be the increased viability of baseload coal plants in the East, but even there I think that the lower cost of renewables is displacing retiring coal.)
We have two facts about the electric grid system that undermine the validity of short-term electricity market functionality and pricing. First, regulatory imperatives to guarantee system reliability cause new capacity to be built before any evidence of capacity or energy shortages appears in the ISO balancing markets. Second, fossil-fueled generation is no longer the incremental new resource in much of the U.S. electricity grid. While the ISO energy markets still rely on fossil-fueled generation as the “marginal” bidder, these markets are in fact just transmission balancing markets, not sources for meeting new incremental loads. Most of that incremental load is now being met by renewables with near-zero operational costs, and those resources do not directly set short-term prices. Combined with the first shortcoming, the total short-term price is substantially below the true marginal cost of new resources.
Storage policy and pricing should be set using long-term values and emission changes based on expected resource additions, not on tomorrow’s energy imbalance market price.