The Energy Institute’s blog has an important premise–that solar rooftop customers have imposed costs on other ratepayers with few benefits. This premise runs counter to the empirical evidence.
First, these customers have deferred an enormous amount of utility-scale generation. In 2005 the CEC forecasted that the 2020 CAISO peak load would be 58,662 MW. The highest peak after 2006 has been 50,116 MW (in 2017, about 3,000 MW higher than in August 2020). That's a savings of 8,546 MW. (Note that residential installations are two-thirds of the distributed solar installations.) The correlation of added distributed solar capacity with that peak reduction is 0.938. Even in 2020, the incremental solar DER was 72% of the peak reduction trend. We can calculate the avoided peak capacity investment from 2006 to today using the CEC's 2011 Cost of Generation model inputs. Combustion turbines cost $1,366/kW (based on a survey, which I managed, of the 20 installed plants) and the annual fixed charge rate was 15.3%, for a cost of $209/kW-year. The total annual savings is $1.8 billion. The total revenue requirement for the three IOUs, plus implied generation costs for DA and CCA LSEs, was $37 billion in 2021. So the annual savings that have accrued to ALL customers is 4.9%. Given that NEM customers are about 4% of the customer base, if those customers paid nothing, everyone else's bill would rise only about 4%, which is less than what rooftop solar has saved them so far.
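For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch using only the figures cited above (rounding aside); the variable names are mine:

```python
# Back-of-envelope check of the avoided peak capacity savings cited above
forecast_2020_peak_mw = 58_662      # CEC 2005 forecast of the 2020 CAISO peak
highest_actual_peak_mw = 50_116     # highest CAISO peak after 2006 (2017)
avoided_capacity_mw = forecast_2020_peak_mw - highest_actual_peak_mw   # 8,546 MW

ct_cost_per_kw = 1_366              # combustion turbine cost, $/kW (CEC 2011 model)
fixed_charge_rate = 0.153           # annual fixed charge rate
annual_cost_per_kw_yr = ct_cost_per_kw * fixed_charge_rate             # ~$209/kW-year

annual_savings = avoided_capacity_mw * 1_000 * annual_cost_per_kw_yr   # ~$1.8 billion
revenue_requirement = 37e9          # 2021 IOU revenue requirements plus DA/CCA generation, $

print(f"Avoided capacity: {avoided_capacity_mw:,} MW")
print(f"Annual savings: ${annual_savings / 1e9:.1f} billion, or "
      f"{annual_savings / revenue_requirement:.1%} of revenue requirements")
```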
In addition, the California Independent System Operator (CAISO) calculated in 2018 that at least $2.6 billion in transmission projects had been deferred by installed distributed solar. Using the 6,785 MW installed through 2017, the avoided cost is $383/kW, or $59/kW-year. This translates to an additional $400 million per year, or about 1.1% of utility revenues.
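The transmission figure follows the same pattern; the sketch below assumes the $383/kW is annualized with the same 15.3% fixed charge rate used for the generation case:

```python
# Back-of-envelope check of the deferred transmission savings
deferred_transmission_dollars = 2.6e9   # CAISO 2018 estimate of deferred projects
installed_solar_kw = 6_785_000          # distributed solar installed through 2017, kW
avoided_cost_per_kw = deferred_transmission_dollars / installed_solar_kw  # ~$383/kW

fixed_charge_rate = 0.153               # assumed: same annualization as the generation case
annual_savings = avoided_cost_per_kw * fixed_charge_rate * installed_solar_kw  # ~$400M/yr

print(f"Avoided transmission cost: ${avoided_cost_per_kw:.0f}/kW "
      f"(${avoided_cost_per_kw * fixed_charge_rate:.0f}/kW-year)")
print(f"Annual savings: ${annual_savings / 1e6:.0f} million, or "
      f"{annual_savings / 37e9:.1%} of revenue requirements")
```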
The total savings to customers is over $2.2 billion or about 6% of revenue requirements.
Second, rooftop solar isn’t the most expensive power source. My rooftop system installed in 2017 costs 12.6 cents/kWh (financed separately from our mortgage). In comparison, PG&E’s RPS portfolio cost over 12 cents/kWh in 2019 according to the CPUC’s 2020 Padilla Report, plus there’s an incremental transmission cost approaching 4 cents/kWh, so we’re looking at a total delivered cost of 16 cents/kWh for existing renewables. (Note that the system costs to integrate solar are largely the same whether it is utility-scale or distributed.)
Comparing the average IOU RPS portfolio cost to that of rooftop solar is appropriate from the perspective of a customer. Utility customers see average, not marginal, costs, and average cost pricing is widely prevalent in our economy. To achieve 100% renewable power, a reasonable customer will look at average utility costs for the same type of power. We use the same principle when we post expected bill savings on energy-efficient appliances based on utility rates, not on the utilities' marginal resource acquisition costs.
And customers who would choose instead to respond to the marginal cost of new utility power will never really see those economic savings, because the supposed savings created by that decision will be diffused across all customers. In other words, other customers will extract all of the positive rents created by that choice. We could allow for bypass pricing (which industrial customers get if they threaten to leave the service area), but currently we force other customers to bear the costs of this type of pricing, not shareholders, as would occur in other industries. Individual customers are currently the decision-making unit for most energy use purposes, and they base those decisions on average cost pricing, so why should we have a single carve-out for a special case that is quite similar to energy efficiency?
I wrote more about whether a fixed connection cost is appropriate for NEM customers and the complexity of calculating that charge earlier this week.
A recent post at the Energy Institute at Haas proposed that all residential ratepayers should pay the “solar tax” in the recently withdrawn proposed decision from the California Public Utilities Commission through a connection fee. I agree that charging residential customers a connection charge is a reasonable solution. (All commercial and agricultural customers in California already pay such a charge.) The more important question, though, is what that connection fee should be.
Much less of the distribution cost is “fixed” than many proponents understand; the ability to avoid large undergrounding costs by installing microgrids is one example. Southern California Edison has repeatedly asked for a largely fixed “grid charge” for the last dozen years, and the intervening ratepayer groups have shown that SCE’s estimate is much too high. A service connection costs about $10-$15/month, not more than $50 per month. So what might be the other elements of a fixed monthly charge, rather than collecting these revenues through a volumetric rate as is done today?
A strong economic argument can be made that if the utility is collecting a fixed charge for upstream T&D capacity, then a customer should be able to trade the capacity that they have paid for with other customers. In the face of transaction costs, that market would devolve to the per-kWh price managed by the utility acting as a dealer, which is just what we have today.
Other candidates abound. How to recover stranded costs really requires a conversation about how much of those costs shareholders should shoulder. Income-distributional public purpose costs should be collected from taxes, not rates. Energy efficiency is a resource that should be charged in the generation component, not distribution, and should be treated like other generation resources in cost recovery. The problem is that decoupling, which was used to encourage energy efficiency investment, has become a backdoor way to recover stranded costs without any conversation about whether that is appropriate: rates go up as demand decreases, with little reduction in revenue requirements. So what the connection charge should be becomes quite complex.
There is a general understanding among the most informed participants and observers that California’s net energy metering (NEM) tariff as originally conceived was not intended to be a permanent fixture. The objective of the NEM rate was to get a nascent renewable energy industry off the ground, and now California has more than 11,000 megawatts of distributed solar generation. The distributed energy resources industry now has much less need for subsidies, but its full value also must be recognized. To this end it is important to understand some key facts that are sometimes overlooked in the debate.
The true underlying reason for high rates–rising utility revenue requirements
In California, retail electricity rates are so high for two reasons: stranded generation costs and a set of “public goods charges” that constitute close to half of the distribution cost. PG&E’s rates have risen 57% since 2009. Many, if not most, NEM customers have installed solar panels as one way to avoid these rising rates. The thing is, when NEM 1.0 and 2.0 were adopted, the cost of the renewable power purchase agreement (PPA) portfolios was well over $100/MWh, even $120/MWh through 2019, and adding in the other T&D costs, this approached the average system rate as late as 2019 for SCE and PG&E before their downward trends reversed course. That retail rates skyrocketed while renewable PPA prices fell dramatically is a subsequent development that too many people have forgotten.
California uses Ramsey pricing principles to allocate these costs (the CPUC applies “equal percent of marginal cost,” or EPMC, as a derivative measure), but Ramsey pricing was conceived for one-way pricing. I don’t know what Harold Hotelling would think of using his late student’s work for two-way transactions. This is probably the fundamental problem in NEM rates: the stranded and public goods costs are incurred by one party on one side of the ledger (the utility), but the other party (the NEM customer) doesn’t have these same cost categories on the other side of the ledger; NEM customers may have their own set of costs, but those don’t fall into the same categories. So the issue is how to set two-way rates given the odd relationship of these costs between utilities and ratepayers.
This situation argues for setting aside the stranded costs and public goods charges to be paid for in some manner other than electric rates. The answer can’t be in the form of a shift of consumption charges to a large access charge (e.g., a customer charge), because customers will just leave entirely when half of their current bill is rolled into the new access charge.
The largest nonbypassable charge (NBC), now delineated for all customers, is the power cost indifference adjustment (PCIA). The PCIA is the stranded generation asset charge for the portfolio composed of utility-scale generation. Most of this is power purchase agreements (PPAs) signed within the last decade. For PG&E in 2021 according to its 2020 General Rate Case workpapers, this exceeded 4 cents per kilowatt-hour.
Basic facts about the grid
The grid is not a static entity in which nothing changes going forward, yet the cost of service analysis used in the CPUC’s recent NEM proposed decision assumes exactly that posture. Acknowledging that the system will change depending on our configuration decisions is a key principle that is continually overlooked in these discussions.
In California, a customer is about 15 times more likely to experience an outage due to distribution system problems than from generation/transmission issues. That means that a customer who decides to rely on self-provided resources can have a setup that is 15 times less reliable than the bulk grid and still have better reliability than conventional service. This is even more true for customers who reside in rural areas.
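A simple way to see the arithmetic behind that claim, treating outage frequencies as additive rates (a deliberate simplification; formal reliability indices like SAIFI and SAIDI are more involved):

```python
# Illustrative outage-rate comparison: conventional grid service vs. self-supply
bulk_rate = 1.0                      # generation/transmission outage rate (normalized)
distribution_rate = 15 * bulk_rate   # distribution outages are ~15x more frequent
grid_service_rate = bulk_rate + distribution_rate   # conventional customer sees both: 16

# A self-supplied (islanded) customer avoids distribution outages entirely,
# so even equipment 15x less reliable than the bulk grid still comes out ahead.
self_supply_rate = 15 * bulk_rate                   # 15 < 16
print(f"grid customer: {grid_service_rate}, self-supply: {self_supply_rate}")
```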
Upstream of the individual service connection (which costs about $10 per month for residential customers, based on testimony I have submitted in all three utilities’ rate cases), customers share distribution grid capacity with other customers. They are not given shares of the grid to buy and sell with other customers; we leave that task to the utilities, which act as dealers in that marketplace, owning the capacity and selling it to customers. If we are going to have fixed charges for customers, which essentially allocate a capacity share to each of them, those customers also should be entitled to buy and sell capacity as they need it. The end result will be a marketplace that prices distribution capacity on either a daily dollars-per-kilowatt or cents-per-kilowatt-hour basis. That system will look just like our current distribution pricing system, but with a bunch of unnecessary complexity.
This situation is even more true for transmission. There most certainly is not a fixed share of the transmission grid to be allocated to each customer. Those shares are highly fungible.
What is the objective of utility regulation: just and reasonable rates or revenue assurance?
At the core of this issue is the question of whether utility shareholders are entitled to largely guaranteed revenues to recover their investments. In a market with some level of competitiveness, producers face a degree of risk under normal operating conditions (more mundane than wildfire risk); that is not the case with electric utilities, at least in California. (We cataloged the disallowances for California IOUs in the 2020 cost of capital applications, and they amounted to less than one one-hundredth of a percent (0.01%) of revenues over the last decade.) When customers reduce or change their consumption patterns in a manner that reduces sales in a normal market, other customers are not required to pick up the slack; shareholders are. This risk is one of the core benefits of a competitive market, no matter what the degree of imperfection. Neither the utilities nor the generators who sell to them under contract face these risks.
Why should we bother with “efficient” pricing if we are pushing the entire burden of achieving that efficiency on customers who have little ability to alter utilities’ investment decisions? Bottom line: if economists argue for “efficient” pricing, they need to also include in that how utility shareholders will participate directly in the outcomes of that efficient pricing without simply shifting revenue requirements to other customers.
As to the intent of the utilities, in my 30 years of on-the-ground experience, management does not make decisions based on “doing good” that go against their profit objective. There are examples of each utility choosing to pursue profits that it was not entitled to. We entered into testimony in PG&E’s 1999 GRC a speech by a PG&E CEO talking about how PG&E would exploit the transition period during restructuring to maintain market share. That came back to haunt the state, as it set up the conditions for the ensuing market manipulation.
Each of these issues has been largely ignored in the debate over what to do about solar rooftop policy and investment going forward. It is time to push them to the fore.
I read these two statements in his blog post and come to very different conclusions:
“(I)ndividuals and businesses make investments in response to those policies, and many come to believe that they have a right to see those policies continue indefinitely.”
Why wasn’t there a similar cry against bailing out PG&E in not one but TWO bankruptcies? Both PG&E and SCE have clearly relied on the belief that they deserve subsidies to continue staying in business. (SCE has ridden along behind PG&E in both cases to gain the spoils.) The focus needs to be on ALL players here if these types of subsidies are to be called out.
“(T)he reactions have largely been about how much subsidy rooftop solar companies in California need in order to stay in business.”
We are monitoring two very different sets of media, then. I see much more about the ability of consumers to maintain a modicum of energy independence from large monopolies that compel those consumers to buy their service with no viable escape. I also see reactions about how this will directly undermine our ability to reduce GHG emissions. It directly conflicts with the CEC’s Title 24 building standards that use rooftop solar to achieve net zero energy and electrification in new homes.
Yes, there are problems with the current compensation model for NEM customers, but we also need to recognize our commitments to customers who made investments believing they were doing the right thing. We need to acknowledge the savings that they created for all of us and the push they gave to lower technology costs. We need to recognize the full set of values that these customers provide and how the current electric market structure is too broken to properly compensate what we want customers to do next: add more storage. Yet the real first step is to start at the source of the problem: out-of-control utility costs that ratepayers are forced to bear entirely.
Last month the California Public Utilities Commission (CPUC) issued a decision in Phase II of the PG&E 2020 General Rate Case that endorsed all but one of my proposals on behalf of the Agricultural Energy Consumers Association (AECA) to better align revenue allocation with a rational approach to using marginal costs. Most importantly, the CPUC agreed with my observation that the energy system is changing too rapidly to adopt a permanent set of rate-setting principles, as PG&E had advocated. For now, we will continue to explore options as relationships among customers, utilities and other providers evolve.
At the heart of the matter is the economic principle that prices are set most efficiently when they adhere to the marginal cost, that is, the cost of producing the last unit of a good or service. In a “standard” market, marginal costs are usually higher than average costs, so a producing firm earns a profit on each sale. For utilities, this is often not true: average costs are higher than marginal costs, so we need a means of allocating those additional costs to ensure that the utilities remain viable entities. California uses a “second-best” economic method called “Ramsey pricing” that uses the relative marginal costs of serving different customers to allocate revenue responsibility.
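For readers unfamiliar with the mechanics, here is a minimal sketch of how the CPUC's EPMC-style (equal percent of marginal cost) implementation of this idea works: compute each class's marginal cost revenue, then scale every class by the same percentage so the total recovers the full revenue requirement. The class names and dollar figures below are hypothetical, purely for illustration:

```python
# Equal percent of marginal cost (EPMC) allocation, illustrative numbers only
revenue_requirement = 1_000.0    # total $ to be recovered (hypothetical)

# Marginal cost revenue by class = sales x marginal cost (hypothetical values)
mc_revenue = {"residential": 400.0, "commercial": 250.0, "agricultural": 150.0}

# Every class is marked up (or down) by the same percentage relative to marginal cost
scaler = revenue_requirement / sum(mc_revenue.values())
allocation = {cls: mc * scaler for cls, mc in mc_revenue.items()}

for cls, dollars in allocation.items():
    print(f"{cls}: ${dollars:.2f}  (EPMC scaler = {scaler:.2f})")
```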
I made four key proposals on how to apply marginal cost principles for rate setting purposes:
Adopt an updated agricultural load forecasting method that uses only public data and currently known variables to predict next year’s load more accurately.
Use PCIA exit fee market price benchmarks (MPBs) to give consistent revenue allocation across rate classes and bundled vs departed customers.
Include renewable energy credits (REC) in the marginal energy costs (MEC) to reflect incremental RPS acquisition and consistency with the PCIA MPB.
Use the resource adequacy (RA) MPB for setting the marginal generation capacity cost (MGCC) due to uncertainty about resource type for capacity and for consistency with the PCIA MPB.
Calculate marginal customer access costs (MCAC) using the depreciated replacement cost for existing services (RCNLD), with new service costs added for customers added as growth.
PG&E settled with AECA on the first, agreeing to change its agricultural load forecasting methodology in upcoming proceedings. The CPUC agreed with AECA’s positions on two of the other three (RECs in the MEC, and MCAC). And on the third, related to the MGCC, the adopted position was not materially different.
The most surprising outcome was the choice to use RCNLD costs for existing customer connections. The debate over how to calculate the MCAC has raged for three decades. Industrial customers preferred valuing all connections, new and existing, at the cost of a new connection using the “real economic carrying cost” (RECC) method. This is most consistent with a simple reading of marginal cost pricing principles. On the other side, residential customer advocates claimed that existing connections were sunk costs and have a value of zero for determining marginal cost, inventing the “new customer only” (NCO) method. I explained in my testimony that the RECC method fails to account for the reduced value of aging connections, but that those connections have value in the marketplace through house prices, just as a swimming pool or a bathroom remodel adds value. The diminished value of those connections can be approximated using the depreciation schedules that PG&E applies to determine its capital-related revenue requirements. The CPUC has used the RCNLD method to set the value for the sale of PG&E assets to municipal utilities.
The CPUC agreed with this approach, which is essentially a compromise between the RECC and NCO methods. The RCNLD acknowledges the fundamental points of both: existing customer connections represent an opportunity value for customers, but those connections do not have the same value as new ones.
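A sketch of the RCNLD idea for a single connection, using hypothetical numbers and straight-line depreciation as a stand-in for PG&E's actual depreciation schedules:

```python
# Replacement cost new less depreciation (RCNLD) for an existing service connection
replacement_cost_new = 2_500.0   # hypothetical cost of a new residential connection, $
book_life_years = 40             # hypothetical service life from the depreciation schedule
age_years = 25                   # age of the existing connection

remaining_fraction = max(0.0, 1.0 - age_years / book_life_years)   # straight-line proxy
rcnld_value = replacement_cost_new * remaining_fraction

print(f"RECC value:  ${replacement_cost_new:.0f}  (all connections priced as new)")
print(f"NCO value:   $0     (existing connections treated as sunk)")
print(f"RCNLD value: ${rcnld_value:.0f}   (the adopted compromise)")
```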
The saying goes, “No good deed goes unpunished.” The California Public Utilities Commission seems to have taken that motto to heart recently, and stands ready to penalize yet another group of customers who answered the clarion call to help solve the state’s problems by radically altering the rules for solar rooftops. Here are three case studies of recent CPUC actions that undermine incentives for customers to act in the future in response to state initiatives: (1) farmers who invested in response to price incentives, (2) communities that pursued renewables more assertively, and (3) customers who installed solar panels.
Agriculture: Farmers have responded to past time-of-use (TOU) rate incentives more consistently and enthusiastically than any other customer class. Instead of being rewarded for their consistency, they have seen their peak price periods shift from the afternoon to the early evening. Growers face much more difficulty in avoiding pumping during that latter period.
Since TOU rates were introduced to agricultural customers in the late 1970s, growers have made significant operational changes in response to TOU differentials between peak and off-peak energy prices to minimize their on-peak consumption. These include significant investments in irrigation equipment, storage and conveyance infrastructure and labor deployment rescheduling. The results of these expenditures are illustrated in the figure below, which shows how agricultural loads compare with system-wide load on a peak summer weekday in 2015, contrasting hourly loads to the load at the coincident peak hour. Both the smaller and larger agricultural accounts perform better than a range of representative rate schedules. Most notably agriculture’s aggregate load shape on a summer weekday is inverted relative to system peak, i.e., the highest agricultural loads occur during the lowest system load periods, in contrast with other rate classes.
All other rate schedules shown in the graphic hit their annual peak on the same peak day within the then-applicable peak hours of noon to 6 p.m. In contrast, agriculture electricity demand is less than 80% of its annual peak during those high-load hours, with its daily peak falling outside the peak period. Agriculture’s avoidance of peak hours occurred during the summer agricultural growing season, which coincided with peak system demand—just as the Commission asked customers to do. The Commission could not ask for a better aggregate response to system needs; in contrast to the profiles for all of the other customer groups, agriculture has significantly contributed to shifting the peak to a lower cost evening period.
The significant changes in peak period timing and price differential that the CPUC adopted increase uncertainty over whether large investments in high water-use-efficiency microdrip systems, which typically cost $2,000 per acre, will be financially viable. Microdrip systems have been adopted widely by growers over the last several years; one recent study of tomato irrigation rates in Fresno County could not find any significant quantity of other types of irrigation systems. Such systems can be subject to blockages and leaks that are only detectable at start-up in daylight. Growers were able to start overnight irrigation at 6 p.m. under the legacy TOU periods and avoid peak energy use. In addition, workers are able to end their day shortly after 6 p.m. and avoid nighttime accidents. Shifting that load out of the peak period will be much more difficult with the peak period now ending after sunset.
Contrary to strong Commission direction to incent customers to avoid peak power usage, the shift in TOU periods has served to penalize, and reverse, the great strides the agricultural class has made benefiting the utility system over the last four decades.
Community choice aggregators: CCAs were created, among other reasons, to develop more renewable or “green” power. The state achieved its 2020 target of 33% in large part because of the efforts of CCAs fostered through offerings of 50% and 100% green power to retail customers. CCAs also have offered a range of innovative programs that go beyond the offerings of PG&E, SCE and SDG&E.
Nevertheless, the difficulty of reaching clean energy goals is created by the current structure of the PCIA. The PCIA varies inversely with market prices: as market prices rise, the PCIA charged to CCAs and direct access (DA) customers decreases. For these customers, their overall retail rate is largely hedged against variation and risk through this inverse relationship.
The portfolios of the incumbent utilities are dominated by long-term contracts with renewables and capital-intensive utility-owned generation. For example, PG&E is paying a risk premium of nearly 2 cents per kilowatt-hour for its investment in these resources. These portfolios are largely impervious to market price swings now, but at a significant cost. That hedge is passed along through the PCIA to CCAs and DA customers, which discourages those customers from making their own long-term investments. (I wrote earlier about how this mechanism discouraged investment in new capacity for reliability purposes to provide resource adequacy.)
The legacy utilities are not in a position to acquire new renewables; they are forecasting falling loads and declining customer counts as CCAs grow. So the state cannot look to those utilities to meet California’s ambitious goals; it must incentivize CCAs to take on that task. The CCAs are already game, with many of them offering much more aggressive “green power” options to their customers than PG&E, SCE or SDG&E.
But CCAs place themselves at greater financial risk under the current rules if they sign more long-term contracts. If market prices fall, they must bear the risk of overpaying for both the legacy utility’s portfolio and their own.
Solar net energy metered customers: Distributed solar generation installed under California’s net energy metering (NEM/NEMA) programs has mitigated and even eliminated load and demand growth in areas with established customers. This benefit supports protecting the investments that have been made by existing NEM/NEMA customers. Similarly, NEM/NEMA customers can displace investment in distribution assets. That distribution planners are not considering this impact appropriately is not an excuse for failing to value this benefit. For example, PG&E’s sales fell by 5% from 2010 to 2018, and other utilities had similar declines. Peak loads in the CAISO balancing authority reached their highest point in 2006, and the peak in August 2020 was 6% below that level.
Much of that decrease appears to have been driven by the installation of rooftop solar. The figure above illustrates the trends in CAISO peak loads in the set of top lines and the relationship to added NEM/NEMA installations in the lower corner. It also shows the CEC’s forecast from its 2005 Integrated Energy Policy Report as the top line. Prior to 2006, the CAISO peak was growing at an annual rate of 0.97%; after 2006, peak loads have declined on a 0.28% trend. Over the same period, solar NEM capacity grew by over 9,200 megawatts. The correlation between the decline in peak load after 2006 and the incremental NEM additions is 0.93, with 1.0 being perfect correlation. Based on these calculations, NEM capacity has deferred 6,500 megawatts of capacity additions over this period. Comparing the “extreme” 2020 peak to the average-conditions load forecast from 2005, the load reduction is over 11,500 megawatts. The obvious conclusion is that these investments by NEM customers have saved all ratepayers both reliability and energy costs while delivering zero-carbon energy.
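The correlation and trend figures come from comparing two annual series: cumulative NEM capacity added since 2006 and the reduction in CAISO peak load relative to the pre-2006 growth trend. The sketch below shows the form of that calculation with made-up placeholder numbers, not the actual CEC/CAISO data:

```python
import numpy as np

# Placeholder annual series (MW), purely illustrative -- substitute the actual data
nem_capacity_added = np.array([200, 800, 1800, 3200, 5000, 7000, 9200], dtype=float)
peak_reduction_vs_trend = np.array([300, 900, 2100, 3400, 5200, 7400, 9000], dtype=float)

r = np.corrcoef(nem_capacity_added, peak_reduction_vs_trend)[0, 1]
slope, intercept = np.polyfit(nem_capacity_added, peak_reduction_vs_trend, 1)

print(f"correlation = {r:.3f}")
print(f"each additional MW of NEM capacity is associated with "
      f"~{slope:.2f} MW of peak reduction")
```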
The CPUC now has before it a rulemaking in which the utilities and some ratepayer advocates are proposing to not only radically reduce the compensation to new NEM/NEMA customers but also to change the terms of the agreements for existing ones.
One of the key principles of providing financial stability is setting prices and rates for long-lived assets such as solar panels and generation plants at the economic value prevailing when the investment decision was made, reflecting the full value of the assets that would have been acquired otherwise. If that new resource had not been built, either a ratebased generation asset would have been constructed by the utility at a cost recovered over a standard 30-year period or, more likely, additional PPAs would have been signed. Additionally, the utilities’ investments and procurement costs are protected by the rule prohibiting retroactive ratemaking and by Public Utilities Code Section 728, insulating shareholders from any risk of future changes in state or Commission policies.
Utility customers who similarly invest in generation should be afforded at least the same assurances as the utilities with respect to protection from future Commission decisions that may diminish the value of those investments. Moreover, customers do not have the additional assurances of achieving a certain net income so they already face higher risks than utility shareholders for their investments.
Generators are almost universally afforded the ability to recover capital investments based on prices set for multiple years, and often for the economic life of their assets. Utilities are able to put investments in ratebase to be recovered at a fixed rate of return plus depreciation over several decades. Third-party generators are able to sign fixed-price contracts for 10, 20, and even 40 years. Some merchant generators may choose to sell only into the short-term “hourly” market, but those plants are not committed to selling whenever the CAISO demands it; generators are only required to do so when they sign a PPA with an assured payment toward investment recovery.
Ratepayers who make investments that benefit all ratepayers over the long term should be offered tariffs that provide a reasonable assurance of recovery of those investments, similar to the PPAs offered to generators. Ratepayers should be able to gain the same assurances as generators who sign long-term PPAs, or even utilities that ratebase their generation assets, that they will not be forced to bear all of the risk of investing in clean self-generation. These ratepayers should have some assurance over the 20-plus-year expected life of their generation investment.
The debate over whether to close Diablo Canyon has resurfaced. The California Public Utilities Commission, with support from the Legislature, decided in 2018 to close Diablo by 2025 rather than proceed to relicensing. PG&E applied in 2016 to retire the plant rather than relicense it due to the high costs that would make the energy uneconomic. (I advised the Joint CCAs in this proceeding.)
Now a new study from MIT and Stanford finds potential savings and emission reductions from continuing operation. (MIT in particular has been an advocate for greater use of nuclear power.) Others have written opinion articles on either side of the issue. I wrote the article below in the Davis Enterprise addressing this issue. (It was limited to 900 words, so I couldn’t cover everything.)
IT’S OK TO CLOSE DIABLO CANYON NUCLEAR PLANT

A previous column (by John Mott-Smith) asked whether shutting down the Diablo Canyon nuclear plant is risky business if we don’t know what will replace the electricity it produces. John’s friend Richard McCann offered to answer his question. This is a guest column, written by Richard, a universally respected expert on energy, water and environmental economics.
John Mott-Smith asked several questions about the future of nuclear power and the upcoming closure of PG&E’s Diablo Canyon Power Plant in 2025. His main question is how we are going to produce enough reliable power for our economy’s shift to electricity for cars and heating. The answers are apparent, but they have been hidden for a variety of reasons. I’ve worked on electricity and transportation issues for more than three decades. I began my career evaluating whether to close Sacramento Municipal Utility District’s Rancho Seco Nuclear Generating Station and recently assessed the cost to relicense and continue operations of Diablo after 2025.

Looking first at Diablo Canyon, the question turns almost entirely on economics and cost. When the San Onofre Nuclear Generating Station closed suddenly in 2012, greenhouse gas emissions rose statewide the next year, but then continued a steady downward trend. We will again have time to replace Diablo with renewables. Some groups focus on the risk of radiation contamination, but that was not a consideration for Diablo’s closure. Instead, it was the cost of compliance with water quality regulations. The power plant currently uses ocean water for cooling. State regulations required changing to a less impactful method that would have cost several billion dollars to install and would have increased operating costs. PG&E’s application to retire the plant showed the costs going forward would be at least 10 to 12 cents per kilowatt-hour. In contrast, solar and wind power can be purchased for 2 to 10 cents per kilowatt-hour depending on configuration and power transmission. Even if new power transmission costs 4 cents per kilowatt-hour and energy storage adds another 3 cents, solar and wind units cost about 3 cents, which totals at the low end of the cost for Diablo Canyon.

What’s even more exciting is the potential for “distributed” energy resources, where generation and power management occur locally, even right on the customers’ premises rather than centrally at a power plant. Rooftop solar panels are just one example; we may be able to store renewable power practically for free in our cars and trucks. Automobiles are parked 95% of the time, which means that an electric vehicle (EV) could store solar power at home or work during the day for use at night. When we get to a vehicle fleet that is 100% EVs, we will have more than 30 times the power capacity that we need today. This means that any individual car likely will have to use only 10% of its battery capacity to power a house, leaving plenty for driving the next day. With these opportunities, rooftop and community power projects cost 6 to 10 cents per kilowatt-hour compared with Diablo’s future costs of 10 to 12 cents.

Distributed resources add an important local protection as well. These resources can improve reliability and resilience in the face of increasing hazards created by climate change. Disruptions in the distribution wires are the cause of more than 95% of customer outages. With local generation, storage, and demand management, many of those outages can be avoided, and electricity generated in our own neighborhoods can power our houses during extreme events. The ad that ran during the Olympics for Ford’s F-150 Lightning pickup illustrates this potential. Opposition to this new paradigm comes mainly from those with strong economic interests in maintaining the status quo reliance on large, centrally located generation.
Those interests are the existing utilities, owners, and builders of those large plants, plus the utility labor unions. Unfortunately, their policy choices to date have led to extremely high rates and necessitate even higher rates in the future. PG&E is proposing to increase its rates by another third by 2024 and plans more down the line. PG&E’s past mistakes, including Diablo Canyon, are shown in the “PCIA” exit fee that [CCA] customers pay; it is currently 20% of the rate. Yolo County created VCEA to think and manage differently than PG&E.

There may be room for nuclear generation in the future, but the industry has a poor record. While the cost per kilowatt-hour has gone down for almost all technologies, even fossil-fueled combustion turbines, that is not true for nuclear energy. Several large engineering firms have gone bankrupt due to cost overruns. The global average cost has risen to over 10 cents per kilowatt-hour. Small modular reactors (SMRs) may solve this problem, but we have been promised they are just around the corner for two decades now. No SMR is in operation yet. Another problem is management of radioactive waste disposal and storage over the course of decades, or even millennia. Further, reactors fail on a periodic basis and the cleanup costs are enormous. The Fukushima accident cost Japan $300 to $750 billion. No other energy technology presents such a degree of catastrophic failure. This liability needs to be addressed head on, not ignored or dismissed, if the technology is to be pursued.
The California Water Commission staff asked a group of informed stakeholders and experts about “how to shape well-managed groundwater trading programs with appropriate safeguards for communities, ecosystems, and farms.” I submitted the following essay in response to a set of questions.
In general, setting up functioning and fair markets is a more complex process than many proponents envision. Due to the special characteristics of water that make location particularly important, water markets are likely to be even more complex, and this will require more thinking to address in a way that doesn’t stifle the power of markets.
Anticipation of Performance
1. Market power is a concern in many markets. What opportunities or problems could market power create for overall market performance or for safeguarding? How is it likely to manifest in groundwater trading programs in California?
I was an expert witness on behalf of the California Parties in the FERC Energy Crisis proceeding in 2003, after the collapse of California’s electricity market in 2000-2001. That initial market arrangement failed for several reasons, including both exploitation of flaws in the internal market design and limitations on outside transactions that enhanced market power. An important requirement that can mitigate market power is the ability to sign long-term agreements, which reduces the amount of resources open to market manipulation. Clear definition of the resource accounting used in transactions is a second important element. Lowering transaction costs and increasing liquidity is a third. Note that confidentiality has not prevented market gaming in electricity markets.
Groundwater provides a fairly frequent opportunity for exploitation of market power with the recurrence of dry and drought conditions. The analogous condition in electricity is peak load periods. Prices in the Texas ERCOT market went up 30,000-fold last February during such a shortage. Droughts in California happen more frequently than freezes in Texas.
The other dimension is that a GSA often has a concentration of a small number of property owners. This concentration eases the ability to manipulate prices even if buyers and sellers are anonymous. This situation is what led to the crisis in the CAISO market. (I was able beforehand to calculate the minimum generation capacity ownership required to profitably manipulate prices, and it was an amount held by many of the merchant generators in the market.) Those larger owners are also the ones most likely to have the resources to participate in market designs whose higher transaction costs act as barriers to others.
2. Given a configuration of market rules, how well can impacts to communities, the environment, and small farmers be predicted?
The impacts can be fairly well assessed with sufficient modeling that includes three important pieces of information. The first is a completely structured market design that can be tested and modeled. The second is a relatively accurate assessment of the costs for individual entities to participate in such a market. And the third is modeling the variation in groundwater depth to assess the likelihood of those swings exceeding current well depths for these groups.
Safeguards
3. What rules are needed to safeguard these water users? If not through market mechanisms directly, how could or should these users be protected?
These groups should not participate in shorter-term groundwater trading markets, such as for annual allocations, unless they proactively elect to do so. They are unlikely to have the resources to participate in a usefully informed way. Instead, the GSAs should carve allocations out of the sustainable yields that are then distributed in any number of ways, including bidding for long-run allocations as well as direct allowances.
For tenant farmers, restrictions on landlords’ participation in short-term markets should be implemented. These can be specified through quantity limits, long-term contracting requirements, or time windows for guaranteed supplies to tenants that match lease terms.
4. What other kinds of oversight, monitoring, and evaluation of markets are needed to safeguard? Who should perform these functions?
These markets will likely require oversight to prevent market manipulation. Instituting market monitors akin to those who now oversee the CAISO electricity market and the CARB GHG allowance auctions is a potential approach. The state would most likely be the appropriate institution to provide this service. The functions of those monitors are well delineated by those other agencies. The single most important requirement for this function is clear authority and a willingness to enforce meaningful consequences for violations.
5. Groundwater trading programs could impact markets for agricultural commodities, land, labor, or more. To what degree could the safeguards offered by groundwater trading programs be undermined through the programs’ interactions with other markets? How should other markets be considered?
These interactions among different markets are called pecuniary externalities, and economists consider them intended consequences of using market mechanisms to change behavior and investments across markets. For example, establishing prices for groundwater most likely will change both cropping decisions and irrigation practices, which in turn will affect equipment and service dealers and labor. Safeguards must be established in ways that do not directly negate these impacts; to do otherwise defeats the very purpose of setting up markets in the first place. People will be required to change from their current practices and choices as a result of instituting these markets.
Mitigation of adverse consequences should account for catastrophic social outcomes to individuals and businesses that are truly outside of their control. SGMA, and associated groundwater markets, are intended to create economic benefits for the larger community. A piece often missing from the social benefit-cost assessment that leads to the adoption of these programs is compensation to those who lose economically from the change. For example, conversion from a labor intensive crop to a less water intensive one could reduce farm labor demand. Those workers should be paid compensation from a public pool of beneficiaries.
6. Should safeguarding take common forms across all of the groundwater trading programs that may form in California? To the degree you think it would help, what level of detail should a common framework specify?
Localities generally do not have the resources, the expertise, or sufficient incentives to manage these types of safeguards. Further, the safeguards should be relatively uniform across the region to avoid inadvertently creating market manipulation opportunities among different groundwater markets. (That was one of the means of exploiting the CAISO market in 2000-01.) The level of detail will depend on other factors that can be identified after potential market structures are developed and a deeper understanding is gained.
7. Could transactions occurring outside of a basin or sub-basin’s groundwater trading program make it harder to safeguard? If so, what should be done to address this?
The most important consideration is the interconnection with surface water supplies and markets. Varying access to surface water will affect the relative ability to manipulate market supplies and prices. The emergence of the NASDAQ Veles water futures market presents another opportunity to game these markets.
Among the most notorious market manipulation techniques used by Enron during the Energy Crisis was one called “Ricochet” that involved sending a trade out of state and then returning down a different transmission line to create increased “congestion.” Natural gas market prices were also manipulated to impact electricity prices during the period. (Even the SCAQMD RECLAIM market may have been manipulated.) It is possible to imagine a similar series of trades among groundwater and surface water markets. It is not always possible to identify these types of opportunities and prepare mitigation until a full market design is prepared—they are particular to situations and general rules are not easily specified.
Performance Indicators and Adaptive Management
8. Some argue that market rules can be adjusted in response to evidence a market design did not safeguard. What should the rules for changing the rules be?
In general, changing the rules for short-term markets, e.g., trading annual allocations, should be relatively easy. Investors should not be allowed to profit from market design flaws no matter how much they have spent. Changes must be carefully considered, but they also should not be easily impeded by those who are exploiting those flaws, as was the case in the fall of 2000 in California’s electricity market.
Pacific Gas & Electric has proposed to underground 10,000 miles of distribution lines to reduce wildfire risk, at an estimated cost of $1.5 to $2 million per mile. Meanwhile PG&E has installed fast-trip circuit breakers in certain regions to mitigate fire risks from line shorts and breaks, but this has resulted in a vast increase in customer outages. CPUC President Batjer wrote in an October 25 letter to PG&E, “[s]ince PG&E initiated the Fast Trip setting practice on 11,500 miles of lines in High Fire Threat Districts in late July, it has caused over 500 unplanned power outages impacting over 560,000 customers.” She then ordered a series of compliance reports and steps. The question is whether undergrounding is the most cost-effective solution that can be implemented in a timely manner.
A viable alternative is microgrids, installed at either individual-customer or community scale. The microgrids could be operated to island customers or communities during high-risk periods or to provide backup when circuit breakers cut power. Customers could continue to be served outside of those periods of risk or weather-caused outages.
Because microgrids would be installed solely for the purpose of displacing undergrounding, the relative costs should be compared without considering any other services such as energy delivered outside of periods of fire risk or outages or increased green power.
I previously analyzed this question, but this updated assessment uses new data and presents a threshold at which either undergrounding or microgrids is preferred depending on the range of relative costs.
We start with the estimates of undergrounding costs. Along with PG&E’s stated estimate, PG&E’s 2020 General Rate Case includes a settlement agreement with a cost of $4.8 million per mile. That leads to an estimate of $15 to $48 billion. Adding in maintenance costs of about $400 million annually, this revenue requirement translates to a rate increase of 3.2 to 9.3 cents per kilowatt-hour.
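To show how capital per mile turns into cents per kilowatt-hour, here is a rough sketch. The annual carrying charge (about 14%) and PG&E retail sales (about 80 TWh per year) are assumptions I am adding for illustration; they are not stated above, but they roughly reproduce the quoted range:

```python
# Rough translation of undergrounding capital costs into a rate impact
miles = 10_000
cost_per_mile = [1.5e6, 4.8e6]     # $/mile: PG&E's estimate and the GRC settlement figure

carrying_charge = 0.14             # assumed annual fixed charge rate on capital
maintenance = 0.4e9                # ~$400 million per year
annual_sales_kwh = 80e9            # assumed PG&E retail sales, ~80 TWh/year

for c in cost_per_mile:
    capital = miles * c
    annual_revenue_requirement = capital * carrying_charge + maintenance
    rate_impact = 100 * annual_revenue_requirement / annual_sales_kwh   # cents/kWh
    print(f"${capital / 1e9:.0f}B capital -> {rate_impact:.1f} cents/kWh")
```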
For microgrid costs, the National Renewable Energy Laboratory published estimated ranges for both (1) commercial or community-scale projects of 1 megawatt with 2.4 megawatt-hours of storage and (2) residential-scale projects of 7 kilowatts with 20 kilowatt-hours of storage. For larger projects, NREL shows a range of $2.07 to $2.13 million; we include an upper-end estimate of double NREL’s top figure. For residential, the range is $36,000 to $38,000.
Using this information, we can make comparisons based on the density of customers or energy use per mile of targeted distribution lines. In other words, we can determine whether it is more cost-effective to underground distribution lines or install microgrids based on how many customers or how much load is being served on a line.
As a benchmark, PG&E’s average system density per mile of distribution line is 50.6 customers and 166 kW (or 0.166 MW).
The table below shows the relative cost effectiveness for undergrounding compared to community/commercial microgrids. If the load density falls below the value shown, microgrids are more cost effective. Note that the average density across the PG&E service area is 0.166 MW which is below any of the thresholds. That indicates that such microgrids should be cost-effective in most rural areas.
The next table shows the relative cost effectiveness for individual residential microgrids; again, if the customer density falls below the threshold shown, then microgrids save more costs. The average density for the service area is 51 customers per line-mile, which reflects the concentration of population in the Bay Area. At the highest undergrounding costs, microgrids are almost universally favored. In rural areas where density falls below 30 customers per line-mile, microgrids are less costly even at the lower undergrounding costs.
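The break-even logic behind both tables is simple: undergrounding cost scales with line-miles, while microgrid cost scales with the load or number of customers served, so the threshold density is roughly the ratio of the two unit costs (with both alternatives annualized at the same rate, which then cancels out). This simplified sketch ignores maintenance and other adjustments, so it will not exactly match the tables above:

```python
# Break-even density: underground a line-mile vs. serve its load with microgrids
underground_per_mile = [1.5e6, 2.0e6, 4.8e6]       # $/mile, range cited earlier
community_per_mw = [2.07e6, 2.13e6, 2 * 2.13e6]    # $/MW: NREL range plus doubled upper bound
residential_each = [36_000, 38_000]                # $/home: NREL residential range

print("Community-scale thresholds (MW of load per line-mile):")
for ug in underground_per_mile:
    print(f"  ${ug / 1e6:.1f}M/mile ->", [round(ug / mg, 2) for mg in community_per_mw])

print("Residential thresholds (customers per line-mile):")
for ug in underground_per_mile:
    print(f"  ${ug / 1e6:.1f}M/mile ->", [round(ug / mg, 1) for mg in residential_each])
```

Whether a given line segment qualifies then comes down to comparing its actual load or customer density against these ratios.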
PG&E has installed two community-scale microgrids in remote locations so far, and is reportedly considering 20 such projects. However, PG&E fell behind on those projects, prompting the CPUC to reopen its procurement process in its Emergency Reliability rulemaking. In addition, PG&E has relied heavily on natural gas generation for these.
PG&E simply may not have the capacity to construct microgrids or install underground lines in a timely manner solely through its own organization. PG&E already is struggling to meet its targets for converting privately-owned mobilehome park utility systems to utility ownership. A likely better choice is to rely on local governments, working in partnership with PG&E to identify the most vulnerable lines, to construct and manage these microgrids. The residential microgrids would be operated remotely. The community microgrids could be run under several different models, including either PG&E or municipal ownership.
Governor Gavin Newsom called for a voluntary reduction in water use of 15% in July in response to the second year of a severe drought. The latest data from the State Water Resources Control Board showed little response on the part of the citizenry and the media lamented the lack of effort. However, those reports overlooked a major reason for a lack of further conservation.
The SWRCB’s conservation report data show that urban Californians are still saving 15% below the 2013 benchmark used in the last drought. So a call for another 15% on top of that translates to roughly a 27% reduction from the same 2013 baseline. Californians have not heard that this drought is worse than 2015, yet the state is calling for a more drastic overall reduction. Of course we aren’t seeing an even further reduction without a much stronger message.
In 2015, to get to a 25% reduction, the SWRCB adopted a set of regulations with concomitant penalties, which largely achieved the intended target. But that effort required a combination of higher rates and increased expenditures by water agencies. It will take a similar effort to move the needle again.