Category Archives: Energy innovation

Emerging technologies and institutional change to meet new challenges while satisfying consumer tastes

Retail electricity rate reform will not solve California’s problems

Meredith Fowlie wrote a post at the Energy Institute at Haas blog on the proposal to drastically increase California utilities’ residential fixed charges. I posted this comment (with some additions and edits) in response.

First, infrastructure costs are responsive to changes in both demand and added generation. It’s just that those costs won’t change for a customer tomorrow; it will take a decade. Given how fast retail transmission rates have risen, and that they include none of the added fixed costs listed here, the marginal cost must be substantially above the current average retail transmission rates of 4 to 8 cents/kWh.

Further, if a customer is being charged a fixed cost for capacity that is being shared with other customers, e.g., distribution and transmission wires, they should be able to sell that capacity to other customers on a periodic basis. While many economists love auctions, the mechanism with the lowest ancillary transaction costs is a dealer market akin to a grocery store, which buys stocks of goods and then resells them. (The NASDAQ is a type of dealer market.) The most likely unit of sale would be cents per kWh, the same as today. In this case, the utility would be the dealer, just as today. So we are already in essentially the same situation.

Airlines are another equally capital-intensive industry. Yet no one pays a significant fixed charge (there are some membership clubs) and then just a small incremental charge for fuel and cocktails. Fares are based on a representative long-run marginal cost of acquiring and maintaining the fleet. Airlines maintain a network just as utilities do, and economies of scale matter in building an airline. The only difference is that utilities are able to monopolistically capture their customers and then appeal to state-sponsored regulators to impose prices.

Why are California’s utility rates 30 to 50% or more above the current direct costs of serving customers? The IOUs, and PG&E in particular, over-procured renewables in the 2010-2012 period at exorbitant prices (averaging $120/MWh), in part in an attempt to block entry of CCAs. That squandered the opportunity to gain the economic benefits from learning by doing that led to the rapid decline in solar and wind prices over the next decade. In addition, PG&E refused to sell a part of its renewable PPAs to the new CCAs as they started up in the 2014-2017 period. On top of that, PG&E ratepayers paid an additional 50% on an already expensive Diablo Canyon due to the terms of the 1996 Settlement Agreement. (I made the calculations during that case for a client.) And on the T&D side, I pointed out beginning in 2010 that the utilities were overforecasting load growth even as their recorded data showed stagnant loads. The peak load from 2006 was the record until 2022, and energy loads have remained largely constant, even declining over the period. The utilities finally started listening in the last couple of years, but all of that unneeded capital is baked into rates. All of these factors point not to the state or even the CPUC (except as an inept monitor) as being at fault, but rather to the utilities’ mismanagement.

Using Southern California Edison’s (SCE) own numbers, we can illustrate the point. SCE’s total bundled marginal costs in its rate filing are 10.50 cents per kWh for the system and 13.64 cents per kWh for residential customers. In comparison, SCE’s average system rate is 17.62 cents per kWh, or 68% higher than the bundled marginal cost, and the average residential rate of 22.44 cents per kWh is 65% higher. From SCE’s workpapers, these cost increases come primarily from four sources (a quick arithmetic check follows the list):

  1. First, about 10% goes towards various public purpose programs that fund a variety of state-initiated policies such as energy efficiency and research. Much of this, such as the income redistribution delivered through the CARE rate, should instead be funded out of the state’s General Fund. And remember that low-income customers are already receiving a 35% discount on rates.
  2. Next, another roughly 10% comes from costs created two decades ago in the wake of the restructuring debacle. The state has now decreed that this revenue stream will instead be used to pay for the damages that utilities have caused with wildfires. Importantly, note that wildfire costs of any kind have not actually reached rates yet. In addition, there are several solutions much less costly than the undergrounding proposed by PG&E and SDG&E, including remote rural microgrids.
  3. Approximately 15% is from higher distribution costs, some of which have been created by over-forecasting load growth over the last 15 years; loads have remained stagnant since 2006.
  4. And finally, around 33% comes from excessive generation costs caused by paying too much for power purchase agreements signed a decade ago.
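A minimal arithmetic check of those figures, using only the numbers quoted above (the dictionary labels are my shorthand for the four sources):

```python
# Quick check of the markup arithmetic, using the SCE figures quoted above.
mc_system, mc_residential = 10.50, 13.64       # cents/kWh, bundled marginal costs
avg_system, avg_residential = 17.62, 22.44     # cents/kWh, average rates

print(f"system markup: {avg_system / mc_system - 1:.0%}")                 # ~68%
print(f"residential markup: {avg_residential / mc_residential - 1:.0%}")  # ~65%

# The four sources listed above roughly account for the system markup:
sources = {"public purpose": 10, "restructuring/wildfire": 10,
           "distribution": 15, "generation PPAs": 33}
print(f"sum of the four sources: {sum(sources.values())}%")                # 68%
```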

An issue raised as rooftop solar spreads farther is the claim that rooftop solar customers are not paying their fair share and instead are imposing costs on other customers, who on average have lower incomes than those with rooftop solar. Yet the math behind the true rate burden for other customers is quite straightforward: if 10% of the customers are paying essentially zero (which they actually are not), the costs for the remaining 90% of the customers cannot go up more than 11% [100%/(100% - 10%) = 111%, an 11% increase]. If low-income customers pay only 70% of that 11%, then their bills might go up about 8% (70% x 11% = 7.7%), hardly a “substantial burden.”
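A minimal sketch of that bound; the 10% share and 70% discount are the illustrative figures from the paragraph above:

```python
# Upper bound on the rate increase borne by customers without rooftop solar.
solar_share = 0.10      # fraction of customers assumed to pay essentially zero
care_discount = 0.70    # low-income (CARE) customers pay ~70% of the standard rate

# Spreading the same revenue over the remaining 90% of customers raises
# rates by at most 1/(1 - 0.10) - 1, i.e., about 11%.
max_increase = 1 / (1 - solar_share) - 1
print(f"max increase for remaining customers: {max_increase:.1%}")   # ~11.1%

# Low-income customers see only ~70% of that on their discounted bills.
low_income_increase = care_discount * max_increase
print(f"increase for low-income customers: {low_income_increase:.1%}")  # ~7.8%
```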

As for aligning incentives for electrification, we proposed a more direct alternative on behalf of the Local Government Sustainable Energy Coalition where those who replace a gas appliance or furnace with an electric one receive an allowance (much like the all-electric baseline) priced at marginal cost, while the remainder is priced at the higher fully-loaded rate. That would reduce the incentive to exit the grid when electrifying while still rewarding those who made past energy efficiency and load reduction investments.

The solution to high rates cannot come from simple rate design; as Old Surfer Dude points out, wealthy customers are just going to exit the grid and self-provide. Rate design is just rearranging the deck chairs. The CPUC tried the same thing in the late 1990s with telecom on the assumption that customers would stay put. Instead, customers migrated to cell phones and dropped their landlines. The real solution is going to require some good old-fashioned capitalism, with shareholders and associated stakeholders absorbing the costs of their mistakes and greed.

Obstacles to nuclear power, but how much do we really need it?

Jonathan Rauch writes in the Atlantic Monthly about the innovations in nuclear power technology that might overcome its troubled history. He correctly identifies the core of the problem for nuclear power, although it extends even further than he acknowledges. Recent revelations about the fragility of France’s once-vaunted nuclear fleet illustrate deeper management problems with the technology. Unfortunately he is too dismissive of the safety issues and even the hazardous duties that recovery crews experienced at both Chernobyl and Fukushima. Both of those accidents cost those nations hundreds of billions of dollars. As a result of these issues, nuclear power around the world now costs over 10 cents per kilowatt-hour. Grid-scale solar and wind power, in contrast, cost less than 4 cents, and even adding storage no more than doubles that cost. And this ignores the competition from small-scale distributed energy resources (DER) that could break the utility monopoly required to pay for nuclear power.

Yet Rauch’s biggest error is in asserting without sufficient evidence that nuclear power is required to achieve greenhouse gas emission reductions. Numerous studies (including for California) show that we can get to a 90% emission-free power grid and beyond with current technologies and no nuclear. We have two decades to figure out how to get to the last 10% or less, or to determine whether we even need to.

The problem with new nuclear technologies such as small modular reactors (SMRs) is that they must be built on a wide scale, as a high proportion of the power supply, to achieve technological cost reductions of the type that we have seen for solar and batteries. And to get a low enough cost per kilowatt-hour, those units must run constantly in baseload mode, which only exacerbates the variable output issue for renewables instead of solving it. Running in a load-following mode will increase the cost per kilowatt-hour by 50%.

We should continue research in this technology because there may be a breakthrough that solves these dilemmas. But we should not plan on needing it to save our future. We have been disappointed too many times already by empty promises from this industry.

Paradigm change: building out the grid with renewables requires a different perspective

Several observers have asserted that we will require baseload generation, probably nuclear, to decarbonize the power grid. Their claim is that renewable generation isn’t reliable enough and is too distant from load centers to power an electrified economy.

The problem is that this perspective relies on a conventional approach to understanding and planning for future power needs. That conventional approach generally planned to meet the highest peak loads of the year with a small margin and then used the excess capacity to produce the energy needed in the remainder of the hours. This premise was based on using consumable fuel to store energy for use in the hours when electricity was needed.

Renewables such as solar and wind present a different paradigm. They capture and convert energy to electricity as it becomes available. The next step is to store that energy using technologies such as batteries. That means that the system needs to be built to meet energy requirements, not peak loads.

Hydropower-dominated systems have already been built in this manner. For half a century, the Pacific Northwest’s complex on the Columbia River and its tributaries had so much excess peak capacity that it could meet much of California’s summer demand. Meeting energy loads during drought years was the challenge. The Columbia River system could store up to 40% of the annual runoff in its reservoirs to assure sufficient supply.

For solar and wind, we will build capacity that is a multiple of the annual peak load so that we can generate enough energy to meet those loads that occur when the sun isn’t shining and the wind isn’t blowing. For example, in a system relying on solar power, the typical demand load factor is 60%, i.e., the average load is 60% of the peak or maximum load. A typical solar photovoltaic capacity factor is 20%, i.e., it generates an average output that is 20% of its peak output. In this example system, the required solar capacity would be three times the peak demand on the system to produce sufficient stored electricity. The amount of storage capacity would equal the peak demand (plus a small reserve margin) less the amount of expected renewable generation during the peak hour.
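A minimal sketch of that energy-balance arithmetic (the 1,000 MW peak is a hypothetical figure for illustration):

```python
# Sizing solar capacity to meet energy requirements rather than peak load.
peak_load_mw = 1_000      # hypothetical system peak demand
load_factor = 0.60        # average load is 60% of peak
capacity_factor = 0.20    # solar PV averages 20% of its rated output

# Energy balance: solar_capacity * capacity_factor = peak_load * load_factor
required_solar_mw = peak_load_mw * load_factor / capacity_factor
print(required_solar_mw)                   # 3000.0
print(required_solar_mw / peak_load_mw)    # 3.0 -> three times the peak demand
```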

As a result, comparing the total amount of generation capacity installed to the peak demand becomes irrelevant. Instead we first plan for total energy need and then size the storage output to meet the peak demand. (And that storage may be virtually free as it is embodied in our EVs.) This turns the conventional planning paradigm on its head.

In the LA Times – looking for alternative solutions to storm outages

I was interviewed by a Los Angeles Times reporter about the recent power outages in Northern California as a result of the wave of storms. Our power went out for 48 hours starting New Year’s Eve and again for 12 hours the next weekend:

After three days without power during this latest storm series, Davis resident Richard McCann said he’s seriously considering implementing his own microgrid so he doesn’t have to rely on PG&E.

“I’ve been thinking about it,” he said. McCann, whose work focuses on power sector analysis, said his home lost power for about 48 hours beginning New Year’s Eve, then lost it again after Saturday for about 12 hours.

While the storms were severe across the state, McCann said Davis did not see unprecedented winds or flooding, adding to his concerns about the grid’s reliability.

He said he would like to see California’s utilities “distributing the system, so people can be more independent.”

“I think that’s probably a better solution rather than trying to build up stronger and stronger walls around a centralized grid,” McCann said.

Several others were quoted in the article offering microgrids as a solution to the ongoing challenge.

Widespread outages occurred in Woodland and Stockton despite winds not being exceptionally strong beyond recent experience. Given the widespread outages two years ago and the three “blue sky” multi-hour outages we had in 2022 (none of them during the September heat storm, when 5,000 Davis customers did lose power), I’m doubtful that PG&E is ready for what’s coming with climate change.

PG&E instead is proposing to invest up to $40 billion over the next eight years to protect service reliability for 4% of its customers by undergrounding wires in the foothills, which will raise our rates by up to 70% by 2030! A cost-effective alternative that would cost 80% to 95% less is sitting before the Public Utilities Commission but is unlikely to be approved. There’s another opportunity to head off PG&E and send some of that money towards fixing our local grid coming up this summer under a new state law.

While winds have been strong, they have not been in the 99%+ range of experience that should lead to multiple catastrophic outcomes in short order. And having two major events within a week, plus the outage in December 2020, shows that these are not statistically unusual. We have experienced similarly fierce winds without such extended outages. Prior to 2020, Davis experienced only two extended outages in the previous two decades, in 1998 and 2007. Clearly the lack of maintenance on an aging system has caught up with PG&E. PG&E should reimagine its rural undergrounding program for mitigating wildfire risk to use microgrids instead. That would free up most of the billions it plans to spend on less than 4% of its customer base to instead harden its urban grid.

The fundamental truth of marginal and average costs

Opponents of increased distributed energy resources who advocate for centralized power distribution insist that marginal costs are substantially below retail rates, as little as 6 cents per kilowatt-hour. Yet average costs generally continue to rise. For example, a claim has been repeatedly asserted that the marginal cost of transmission in California is less than a penny per kilowatt-hour. Yet PG&E’s retail transmission rate component went from 1.469 cents per kWh in 2013 to 4.787 cents in 2022. (SDG&E’s transmission rate is now 7.248 cents!) For the average rate to have risen that much, the marginal cost must be higher than 4.8 cents (and likely much higher).

Average cost equals the sum of marginal costs divided by the total quantity. Or inversely, marginal cost equals the incremental change in total cost when adding a unit of demand or supply. The two concepts are interlinked, so that one must speak of one when speaking of the other.

The chart at the top of this post shows the relationship of marginal and average costs. Most importantly, it is not mathematically possible to have rising average costs when marginal costs are below average costs. So any assertion that transmission marginal costs are less than the average costs of transmission, given that average costs are rising, must be mathematically false.
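This is standard cost-curve algebra, not specific to any filing. With total cost C(q) and average cost AC(q) = C(q)/q, differentiating shows why:

```latex
\frac{d\,AC}{dq} = \frac{d}{dq}\left[\frac{C(q)}{q}\right]
                 = \frac{q\,C'(q) - C(q)}{q^{2}}
                 = \frac{MC(q) - AC(q)}{q}
```

The derivative is positive, i.e., average cost is rising, exactly when MC(q) > AC(q).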

Don’t get too excited about the fusion breakthrough yet

The U.S. Department of Energy announced on December 13 that a net positive fusion reaction had been achieved at the Lawrence Livermore National Laboratory. While impressive, one aside near the end of the announcement raises another substantial barrier:

“(T)he fusion reaction creates neutrons that significantly stress equipment, and could potentially destroy that equipment.”

While the momentary burst produced about 1.5 times the energy delivered by the lasers (about 50% more), the lasers required about 150 times more energy than they delivered to the target.
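A rough accounting of those two ratios, using the approximate figures from DOE’s announcement (the ~300 MJ grid draw is a widely reported estimate and an assumption here):

```python
# Energy accounting for the December 2022 National Ignition Facility shot.
laser_energy_mj = 2.05    # energy the lasers delivered to the target (DOE figure)
fusion_yield_mj = 3.15    # fusion energy released (DOE figure)
grid_draw_mj = 300        # approximate energy drawn to fire the lasers (assumed)

target_gain = fusion_yield_mj / laser_energy_mj   # ~1.5x: the "net positive" result
facility_gain = fusion_yield_mj / grid_draw_mj    # ~1%: the facility-level picture
print(f"target gain: {target_gain:.2f}, facility gain: {facility_gain:.1%}")
```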

The technology won’t be ready for use until at least 2060, which is a decade after the goal of achieving net zero carbon emissions. That means that we need to plan and progress without relying on this energy source.

Do small modular reactors (SMR) hold real promise?

The economic analyses of the projected costs for small modular reactors (SMRs) appear to rely on two important assumptions: 1) that the plants will run at the capacity factors of current nuclear plants (i.e., 70%-90%+), and 2) that enough will be built quickly enough to gain from “learning by doing” at scale, as has occurred with solar, wind and battery technologies. The problem with these assumptions is that they require that SMRs crowd out other renewables with little impact on gas-fired generation.

Achieving low costs with nuclear power requires high capacity factors, that is, total electricity output relative to potential output. The Breakthrough Institute study, for example, assumes a capacity factor greater than 80% for SMRs. The problem is that the typical system load factor, that is, the average load divided by the peak load, ranges from 50% to 60%. A generation capacity factor of 80% means that the plant is producing 20% more electricity than the system needs. It also means that other generation sources such as solar and wind will be pushed aside by this amount on the grid. Because SMRs cannot ramp up and down to the same degree as load swings, not only daily but also seasonally, the system will still need load-following fossil-fuel plants or storage. It is just the flip side of filling in for the intermittency of renewables.

To truly operate within the generation system in a manner that directly displaces fossil fuels, an SMR will have to operate at a 60% capacity factor or less. Accommodating renewables will lower that capacity factor further. Decreasing the capacity factor from 80% to 60% will increase the cost of an SMR by a third, which would raise the projected cost in the Breakthrough Institute report for 2050 from $41 per megawatt-hour to $55 per megawatt-hour. Renewables with storage are already beating this cost in 2022, and we don’t need to wait 30 years.
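The adjustment is just inverse scaling of largely fixed costs with capacity factor; a minimal sketch:

```python
# Per-MWh cost scales inversely with capacity factor when costs are mostly fixed.
base_cost = 41.0                    # $/MWh, Breakthrough Institute 2050 projection
base_cf, adjusted_cf = 0.80, 0.60   # capacity factors: assumed vs. load-following

adjusted_cost = base_cost * base_cf / adjusted_cf
print(f"${adjusted_cost:.0f}/MWh")  # ~$55/MWh, a one-third increase
```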

And the Breakthrough Institute study relies on questionable assumptions about learning by doing in the industry. First, it assumes that conventional nuclear will experience a 5% learning benefit (i.e., costs will drop 5% for each doubling of capacity). In fact, the industry shows a negative learning rate: costs per kilowatt have been rising as more capacity is built. It is not clear how the SMR industry will reverse this trend. Second, the learning-by-doing effect in this industry is likely to be on a per-plant rather than per-megawatt or per-turbine basis, as has been the case with solar panels and wind turbines. The very small unit sizes of solar panels and wind turbines allow for off-site factory production with highly repetitive assembly, whereas SMRs will require substantial on-site fabrication that will be site specific. SMR learning rates are more likely to follow those for building construction than those for other new energy technologies.

Finally, the report does not discuss the risk of catastrophic accidents. The probability of a significant accident is about 1 per 3,700 reactor operating years. Widespread deployment of SMRs will vastly increase the annual risk because that probability is independent of plant size. Building 1,000 SMRs could increase the risk to such a level that these accidents could be happening once every four years.

The Fukushima nuclear plant catastrophe is estimated to have cost $300 billion to $700 billion. The next one could cost in excess of $1 trillion. This risk adds a cost of $11 to $27 per megawatt-hour.

Adding these risk costs on top of the adjusted capacity factor, the cost range rises to $65 to $82 per megawatt-hour.
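A sketch of that risk arithmetic. The accident probability and cost range are from the paragraphs above; the output per reactor-year (roughly 1 GW of capacity at an 80% capacity factor) is my assumption, chosen because it approximately reproduces the quoted ranges:

```python
# Expected accident-risk adder per MWh, plus fleet-wide accident frequency.
p_accident = 1 / 3700                # significant accidents per reactor-year
accident_cost = (300e9, 700e9)       # $ per accident, Fukushima-range estimates
annual_mwh = 1_000 * 8_760 * 0.80    # MWh per reactor-year (assumed ~1 GW, 80% CF)

adders = [p_accident * cost / annual_mwh for cost in accident_cost]
print([f"${a:.0f}/MWh" for a in adders])       # roughly the $11-$27/MWh range

# A 1,000-reactor fleet would expect one significant accident every ~3.7 years:
print(f"one accident every {3700 / 1000:.1f} years")

# Stacked on the capacity-factor-adjusted ~$55/MWh:
print([f"${55 + a:.0f}/MWh" for a in adders])  # roughly the $65-$82/MWh range
```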

The real lessons from California’s 2000-01 electricity crisis and what they mean for today’s markets

The recent reliability crises for the electricity markets in California and Texas should make us reconsider the supposed lessons from the most significant extended market crisis to date, the 2000-01 California electricity crisis. I wrote a paper two decades ago, The Perfect Mess, that described the circumstances leading up to the event. There have been two other common threads about supposed lessons, but I do not accept either as a true solution; both are really about risk sharing once this type of crisis ensues rather than about preventing similar market malfunctions. Instead, the real lesson is that load serving entities (LSEs) must be able to sign long-term agreements that are unaffected and unfettered, directly or indirectly, by variations in daily and hourly markets so as to eliminate incentives to manipulate those markets.

The first and most popular explanation among many economists is that consumers did not see the swings in the wholesale generation prices in the California Power Exchange (PX) and California Independent System Operator (CAISO) markets. In this rationale, if consumers had seen the large increases in costs, as much as 10-fold over the pre-crisis average, they would have reduced their usage enough to limit the gains from manipulating prices. In this view, consumers should have shouldered the risks in the markets, and their cumulative creditworthiness could have ridden out the extended event.

This view is not valid for several reasons. The first and most important is that the compensation to utilities for stranded asset investments was predicated on calculating the difference between a fixed retail rate and the utilities’ cost of service for transmission and distribution plus the wholesale cost of power in the PX and CAISO markets. Until May 2000, that difference was always positive, and the utilities were well on the way to collecting their Competition Transition Charge (CTC) in full before the end of the transition period on March 31, 2002. The deal was that if the utilities were going to collect their stranded investments, then consumers’ rates would be protected for that period. The risk of stranded asset recovery was entirely the utilities’, and both the California Public Utilities Commission in its string of decisions and the State Legislature in Assembly Bill 1890 were very clear about this assignment.

The utilities had chosen to support this approach, linking asset value to ongoing short-term market valuation, over an upfront separation payment proposed by Commissioner Jesse Knight. The upfront payment would have enabled linking power cost variations to retail rates at the outset, but the utilities would have had to accept the risk of uncertain forecasts about true market values. Instead, the utilities wanted to transfer the valuation risk to ratepayers, and in return ratepayers capped their risk at the retail rates current as of 1996. Retail customers were to be protected from undue wholesale market risk, and the utilities took on that responsibility. The utilities walked into this deal willingly and as fully informed as any party.

As the transition period progressed, the utilities transferred their collected CTC revenues to their respective holding companies to be disbursed to shareholders instead of prudently retaining them as reserves until the end of the transition period. When the crisis erupted, the utilities quickly drained what cash they had left and had to go to the credit markets. In fact, if they had retained the CTC cash, they would not have had to go to the credit markets until January 2001, based on the accounts that I was tracking at the time, and PG&E would not have had a basis for declaring bankruptcy.

The CTC left the market wide open to manipulation, and it is unlikely that any simple changes in the PX or CAISO markets could have prevented this. I conducted an analysis for the CPUC in May 2000, as part of its review of Pacific Gas & Electric’s proposed divestiture of its hydro system, based on a method developed by Catherine Wolfram in 1997. The finding was that a firm owning as little as 1,500 MW (which included most merchant generators at the time) could profitably gain from price manipulation for at least 2,700 hours in a year. The only market-based solution was for LSEs, including the utilities, to sign longer-term power purchase agreements (PPAs) for a significant portion (but not 100%) of the generators’ portfolios. (Jim Sweeney briefly alludes to this solution before launching into his preferred linkage of retail rates and generation costs.)

Unfortunately, State Senator Steve Peace introduced a budget trailer bill in June 2000 (enacted as Public Utilities Code Section 355.1, since repealed) that forced the utilities to sign PPAs only through the PX, which the utilities viewed as too limited, and no PPAs were consummated. The utilities remained fully exposed until the California Department of Water Resources took over procurement in January 2001.

The second problem was a combination of unavailable technology and billing systems. Customers did not yet have smart meters, and paper bills could lag as much as two months after initial usage. There was no real way for customers to respond in near real time to high generation market prices (even assuming that they would have been paying attention to such an obscure market). And as we saw in Texas during Winter Storm Uri in 2021, the only available consumer response for too many was to freeze to death.

This proposed solution is really about shifting risk from utility shareholders to ratepayers, not a realistic market solution. But as discussed above, at the core of the restructuring deal was a sharing of risk between customers and shareholders, a deal that shareholders failed to keep when they transferred all of the cash out of their utility subsidiaries. If ratepayers are going to take on the entire risk (as keeps coming up), then either the authorized return should be set at the corporate bond debt rate or the utilities should just be publicly owned.

The second explanation of why the market imploded was that decentralization created a lack of coordination in providing enough resources. In this view, the CDWR rescue in 2001 righted the ship, but the exodus of the community choice aggregators (CCAs) now threatens system integrity again. The preferred solution for the CPUC is now to reconcentrate power procurement and management with the IOUs, thus killing the remnants of restructuring and markets.

The problem is that the current construct of the PCIA exit fee similarly leaves the market open to potential manipulation. And we’ve seen how virtually unfettered procurement between 2001 and the emergence of the CCAs resulted in substantial excess costs.

The real lessons from the California energy crisis are twofold:

  • Any stranded asset recovery must be done as a single or fixed payment based on the market value of the assets at the moment of market formation. Any other method leaves market participants open to price manipulation. This lesson should be applied in the case of the exit fees paid by CCAs and customers using distributed energy resources. It is the only way to fairly allocate risks between customers and shareholders.
  • LSEs must be unencumbered in signing longer-term PPAs, but their ability to recover stranded costs should also be limited ahead of time so that they have significant incentives to procure resources prudently. California’s utilities still lack this incentive.

Close Diablo Canyon? More distributed solar instead

More calls for keeping Diablo Canyon open have come out in the last month, along with a proposal to match the plant with a desalination project that would deliver water somewhere. (And there has been pushback from opponents.) There are better solutions, as I have written about previously. Unfortunately, those who are now raising this issue missed the details and nuances of the debate in 2016 when the decision was made, and they are not well informed about Diablo’s situation.

One important fact is that it is not clear whether continued operation of Diablo is safe. Unit No. 1 has one of the most embrittled reactor pressure vessels in the U.S., which puts it at risk during a sudden shutdown event.

Another is that the decision would require overriding a State Water Resources Control Board decision that required ending the use of once-through cooling with ocean water. That cost, 10 cents per kilowatt-hour at current operational levels and in excess of 12 cents under more likely operations, was what led to the closure decision.

So what could the state do fairly quickly for 12 cents per kWh instead? Install distributed energy resources focused on commercial and community-scale solar. These projects cost between 6 and 9 cents per kWh and avoid transmission costs of about 4 cents per kWh. They also can be paired with electric vehicles to store electricity and fuel the replacement of gasoline cars. Microgrids can mitigate wildfire risk more cost-effectively than undergrounding, so we can save another $40 billion there too. Most importantly, they can be built in a matter of months, much more quickly than grid-scale projects.

As for the proposal to build a desalination plant, pairing one with Diablo would be both overkill and a logistical puzzle. The Carlsbad plant produces 56,000 acre-feet annually for the San Diego County Water Authority. The Central Coast where Diablo is located has a State Water Project allocation of 45,000 acre-feet, which is not even used fully now. The Carlsbad plant uses 35 MW, or 1.6% of Diablo’s output. A plant built to use all of Diablo’s output could produce 3.5 million acre-feet, but the State Water Project would need to be significantly modified to move the water either back to the Central Valley or beyond Santa Barbara to Ventura. All of that adds up to a large cost on top of what is already a costly source of water at $2,500 to $2,800 per acre-foot.
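The scaling arithmetic, using the Carlsbad figures above (Diablo’s roughly 2,240 MW rating is my assumption; it is consistent with the 35 MW = 1.6% statement):

```python
# Scaling Carlsbad-style desalination up to Diablo Canyon's full output.
carlsbad_af_per_year = 56_000   # acre-feet per year (from the text)
carlsbad_mw = 35                # plant load (from the text)
diablo_mw = 2_240               # approximate Diablo Canyon rating (assumed)

print(f"{carlsbad_mw / diablo_mw:.1%}")        # ~1.6% of Diablo's output

scaled_af = carlsbad_af_per_year * diablo_mw / carlsbad_mw
print(f"{scaled_af:,.0f} acre-feet per year")  # ~3.6 million, vs. 3.5 million quoted
```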

What rooftop solar owners understand isn’t mythological

Severin Borenstein wrote another blog post attacking rooftop solar (a pet peeve of his for at least a decade because these systems weren’t being installed in “optimal” locations in the state) entitled “Myths that Solar Owners Tell Themselves.” Unfortunately he set up a number of strawman arguments that have little to do with the actual issues being debated right now at the CPUC. Here are responses to each of his “myths”:

Myth #1 – Customers are paid only 4 cents per kWh for exports: He’s right in part, but he ignores the fact that almost all of the power sent out from rooftop panels is used by neighbors and never gets to the main part of the grid. The utility is simply redirecting the power down the block.

Myth #2 – The utility sells the power purchased at retail back to other customers at retail, so it’s a wash: Borenstein’s claim ignores the fact that when the NEM program began, the utilities were buying power that cost more than the retail rate at the time. During NEM 1.0 the IOUs were paying in excess of 10 cents/kWh for renewable (RPS) power purchase agreements (PPAs). Add the 4 cents/kWh for transmission and that’s more than the average rate of 13 cents/kWh that prevailed during that time. NEM 2.0 added a correction for TOU pricing (which PG&E muffled by including only the marginal generation cost difference by TOU rather than scaling), and that adjusted the price some. But those NEM customers signed up not knowing what the future retail price would be. That’s the downside of failing to provide a fixed-price contract tariff option for solar customers back then. So now the IOUs are bearing the consequences of yet another bad management decision because they were in denial about what was coming.

Myth #3 – Rooftop solar is about disrupting the industry: Here Borenstein appears to be unaware of the Market Street Railway case, which established that utilities are not protected from technological change. Protecting companies from the consequences of market forces is corporate socialism. If we’re going to protect shareholders from risk (and it’s nearly 100% protection), then the grid should be publicly owned instead. Sam Insull set up the regulatory scam a century ago arguing that income assurance was needed for grid investment, and when the whole scheme collapsed in the Depression, the Public Utility Holding Company Act of 1935 (PUHCA) was passed. Shareholders need to pick their poison: either be exposed to risk or transfer their assets to public ownership. Wealthy shareholders should not be protected.

Myth #3A – Utilities made bad investments and should bear the risks: Borenstein is arguing that since the utilities have run the con for the last decade and gotten approval from the CPUC, they should be protected. Yet I submitted testimony repeatedly, starting in 2010, in both PG&E’s and SCE’s GRCs warning that they had overforecasted load growth. I was correct: statewide retail sales are about the same today as they were in 2006. Grid investment would have been much different if those companies had listened and corrected their forecasts. Further, the IOUs know how to manipulate their regulatory filings to ensure that they still get their internally targeted income. Decoupling, which ensures that the utility receives its guaranteed income regardless of sales, further shields them. From 1994 to 2017, PG&E hit its average allowed rate of return within 0.1%. (More on this later.) A UC Berkeley economics graduate student found that the return on equity is up to 4% too high (consistent with analysis I’ve done).

Myth #3B – Time to take away the utility’s monopoly: No, we no longer need to have monopoly electric service. The same was said about telecommunications three decades ago; now we have multiple entities vying for our dollars. The CPUC conducted a study in 1999, included in PG&E’s GRC proposed decision (thanks to the late Richard Bilas), that showed that economies of scale disappeared after several hundred thousand customers (and that threshold is likely lower now). And microgrids are becoming cost effective, especially as PG&E’s rates look like they will surpass 30 cents per kWh by 2026.

Myth #4 – There aren’t barriers to the poor putting panels on their roofs: First, the barriers are largely regulatory, not financial. The CPUC has erected barriers that prevent aggregation of low-income customers who could otherwise buy into larger projects that serve these communities.

Second, there are many market mechanisms today in which those with lower incomes are offered products or services at a higher long-term price in return for low or no upfront costs. Are we also going to heavily tax car purchases because car leasing is effectively more expensive? What about house ownership vs. rentals? There are equity issues to address, but zeroing in on one small example while ignoring the much wider prevalence sets up another strawman argument.

Further, there are better ways to address the inequity in rooftop solar distribution. That inequity isn’t occurring due to affordability but rather because of split incentives between landlords and tenants.

A much easier and more direct fix would be to modify Public Utilities Code Section 218 to allow local sales among customers, or by landlords or homeowner associations to tenants, and Section 739.5 to allow more flexibility in pricing those sales. But allowing those changes will require that the utilities give up iron-fisted control of electricity production.

Myth #5 – Rooftop solar is the only thing that makes it cost-effective to electrify: Borenstein focuses on the wrong source of high rates. Rooftop solar might be raising rates, but it probably delivered as much in offsetting savings. At most those customers increased rates by 10%, but utility rates are 70-100% above the direct marginal costs of service. The sources of that difference are manifest. PG&E has filed in its 2023 GRC a projected increase in the average standard residential rate to 38 cents per kWh by 2026, and perhaps over 40 cents once undergrounding to mitigate wildfire is included. The NREL studies on microgrids show that individual home microgrids cost about 34 cents per kWh now, and battery storage prices are still dropping. Exiting the grid starts to look a lot more attractive.

Maybe if we look at the status quo as unchanging and accept all of the utilities’ claims about their “necessary” management decisions and the return required to attract investors, then these arguments might hold water. But none of those premises are true based on the empirical work presented in many forums, including at the CPUC, over the last decade. These beliefs are not so mythological.

Finally, Borenstein finishes with “(a)nd we all need to be open to changing our minds as a result of changing technology and new data.” Yet he has been particularly unyielding on this issue for years, and has not reexamined his own work on electricity markets from two decades ago. The meeting of open minds requires a two-way street.