“Fixed costs” do not mean “fixed charges”

The California Public Utilities Commission has issued a proposed decision that calls for a monthly fixed charge of $24 for most customers. There is no basis in economic principles for collecting “fixed costs” (which are too often misidentified) through a fixed charge. This so-called principle is confused with the second-best solution for pricing a regulated monopoly whose declining marginal costs lie below average costs: a two-part tariff consisting of a lump-sum payment and volumetric prices set at marginal cost. (Ramsey pricing, a derivative of which California uses in its equal percent of marginal cost (EPMC) allocation, is also a second-best efficient pricing method, and it relies solely on volumetric prices.) The evidence for such a natural monopoly is average costs that fall over time as sales expand.
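
As a stylized sketch of that second-best result (my notation, not drawn from the proposed decision): with total cost C(q), marginal cost MC(q) = C′(q) and average cost AC(q) = C(q)/q, the two-part tariff charges each of N customers

$$
p = MC(q^*), \qquad F = \frac{\left[AC(q^*) - MC(q^*)\right] q^*}{N},
$$

where the lump sum F exists only to cover the deficit created because MC < AC. If instead MC > AC, the volumetric price at marginal cost more than recovers total costs and no lump-sum charge is warranted.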

However, as shown by the chart above for PG&E’s distribution and transmission (SCE’s looks similar), average costs as represented in retail rates are rising. That means marginal costs must be above average costs. (If that isn’t true, then a fully detailed explanation is required; none has been presented so far.) The conditions for regulated monopoly pricing with a lump-sum or fixed-charge component do not exist in California.

If the logic that fixed costs should be collected through fixed charges held, the marketplace would be rife with entry, access and connection fees at grocery stores, nail salons, restaurants, car dealers and other retail outlets to cover the costs of ownership and leases, operational overhead and other output-invariant costs. Simply put, that’s not the case. All of those producers and providers price on a per-unit basis because that’s how a competitive market works. In those markets, customers can choose and move among sellers, so each seller is forced to recover its costs through a single unit price. You might respond that cell providers have monthly fixed charges. But those are monthly connection fees that represent the marginal cost of interconnecting to a network, and customers have the option of switching (and many do) to a provider with a lower monthly fee. The unit of consumption is the interconnection itself, which covers a longer period than the single momentary instant that economists love because they can use calculus to derive it.

Utility regulation is supposed to mimic the outcome of competitive markets, including pricing patterns. That means fixed cost recovery through a fixed charge must be limited to a customer-dedicated facility that cannot be used by another customer. That would be the service connection, which has an investment recovery cost of about $10 to $15 per month. Everything else must be priced on a volumetric basis, as it would be in a competitive market. (And the rise of DERs is now introducing true competition into this marketplace.)

The problem is that we’re missing the other key aspect of competitive markets: that investors risk losing their investments due to poor management decisions. Virtually all of the excess stranded costs for California IOUs are due to poor management, not “state mandates.” You can look at the differences between in-state IOU and muni rates to see the evidence. (And that an IOU has been convicted of killing nearly 100 people through malfeasance further supports that conclusion.)

There are alternative solutions to California’s current dilemma, but utility shareholders must accept their portion of the financial burden. Right now they are shielded completely, as evidenced by record profits and rising share prices.

Opinion: What’s wrong with basing electricity fees on household incomes

I coauthored this article in the Los Angeles Daily News with Ahmad Faruqui and Andy Van Horn. We critique the proposed income-graduated fixed charge (IGFC) being considered at the California Public Utilities Commission.

Decommissioning Klamath River dams comes to fruition

In 2006, M.Cubed prepared a report for the California Energy Commission showing that PacifiCorp, owner of the four dams on the Klamath River, would be financially indifferent between decommissioning the projects and relicensing them with the Federal Energy Regulatory Commission. That conclusion has since been reinforced by a 75% decline in the cost of replacement renewable power. The study opened the door for all parties to negotiate an agreement in 2010 to move forward with decommissioning.

In 2015, I wrote here about how that agreement was in peril. I tracked the progress of the situation in the comments in that post.

Fortunately, those hurdles were overcome and decommissioning began in 2023. Copco 2 has now been completely removed and the project is moving on to the next dam.

Once all four dams are taken out, we will be able to see how successful this approach might be in restoring rivers on the West Coast.

This is the initial report on the economics of decommissioning versus relicensing conducted for the California Energy Commission.

Retail electricity rate reform will not solve California’s problems

Meredith Fowlie wrote this post at the Energy Institute at Haas blog about the proposal to drastically increase California utilities’ residential fixed charges. I posted this comment (with some additions and edits) in response.

First, infrastructure costs are responsive to changes in both demand and added generation. It’s just that those costs won’t change for a customer tomorrow; the response takes a decade. Given how fast retail transmission rates have risen, and that transmission carries none of the added fixed costs listed here, the marginal cost must be substantially above the current average retail rates of 4 to 8 cents/kWh.

Further, if a customer is being charged a fixed cost for capacity that is shared with other customers, e.g., distribution and transmission wires, then that customer should be able to sell that capacity to other customers on a periodic basis. While many economists love auctions, the mechanism with the lowest ancillary transaction costs is a dealer market akin to a grocery store, which buys stocks of goods and then resells them. (The New York Stock Exchange is a type of dealer market.) The most likely unit of sale would be cents per kWh, the same as today, and the utility would be the dealer, just as today. So we are already in essentially that situation.

Airlines are an equally capital-intensive industry. Yet no one pays a significant fixed charge (there are some membership clubs) and then just a small incremental charge for fuel and cocktails. Fares are based on a representative long-run marginal cost of acquiring and maintaining the fleet. Airlines maintain a network just as utilities do, and economies of scale matter in building an airline. The only difference is that utilities are able to monopolistically capture their customers and then appeal to state-sponsored regulators to impose prices.

Why are California’s utility rates 30 to 50% or more above the current direct costs of serving customers? The IOUs, and PG&E in particular, over-procured renewables in the 2010-2012 period at exorbitant prices (averaging $120/MWh), in part in an attempt to block entry of CCAs. That squandered the opportunity to gain the economic benefits of learning by doing that led to the rapid decline in solar and wind prices over the next decade. In addition, PG&E refused to sell a portion of its renewable PPAs to the new CCAs as they started up in the 2014-2017 period. On top of that, PG&E ratepayers paid an additional 50% on an already expensive Diablo Canyon due to the terms of the 1996 Settlement Agreement. (I made the calculations during that case for a client.) And on the T&D side, I pointed out beginning in 2010 that the utilities were overforecasting load growth even as their recorded data showed stagnant loads. The peak load from 2006 stood as the record until 2022, and energy loads have remained largely constant, even declining over the period. The utilities finally started listening in the last couple of years, but all of that unneeded capital is baked into rates. All of these factors point not to the state or even the CPUC (except as an inept monitor) as being at fault, but rather to the utilities’ mismanagement.

Using Southern California Edison’s (SCE) own numbers, we can illustrate the point. SCE’s total bundled marginal costs in its rate filing are 10.50 cents per kWh for the system and 13.64 cents per kWh for residential customers. In comparison, SCE’s average system rate of 17.62 cents per kWh is 68% higher than the bundled marginal cost, and the average residential rate of 22.44 cents per kWh is 65% higher. From SCE’s workpapers, these cost increases come primarily from four sources (a rough arithmetic check follows the list below).

  1. First, about 10% goes towards various public purpose programs that fund a variety of state-initiated policies such as energy efficiency and research. Much of this, such as the income redistribution through the CARE rate, should instead be funded out of the state’s General Fund. And remember that low-income customers already receive a 35% discount on rates.
  2. Next, roughly another 10% comes from costs created two decades ago in the wake of the restructuring debacle. The state has now decreed that this revenue stream will instead be used to pay for the damages that utilities have caused with wildfires. Importantly, wildfire costs of any kind have not actually reached rates yet. In addition, there are several solutions much less costly than the undergrounding proposed by PG&E and SDG&E, including remote rural microgrids.
  3. Approximately 15% is from higher distribution costs, some of which have been created by over-forecasting load growth over the last 15 years; loads have remained stagnant since 2006.
  4. And finally, around 33% comes from excessive generation costs caused by paying too much for power purchase agreements signed a decade ago.
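
As a rough arithmetic check of the figures above, here is a minimal sketch; all numbers are the ones quoted from SCE’s filing and workpapers, rounded as in the list:

```python
# Check SCE's average-rate markups over bundled marginal cost,
# using the figures quoted above (cents per kWh).
system_marginal = 10.50
system_average = 17.62
residential_marginal = 13.64
residential_average = 22.44

system_markup = system_average / system_marginal - 1              # ~0.68
residential_markup = residential_average / residential_marginal - 1  # ~0.65
print(f"System markup over marginal cost: {system_markup:.0%}")       # 68%
print(f"Residential markup over marginal cost: {residential_markup:.0%}")  # 65%

# The four rounded component shares listed above roughly account
# for the system markup: 10% + 10% + 15% + 33% = 68%.
components = {"public purpose": 0.10, "restructuring/wildfire": 0.10,
              "distribution over-forecast": 0.15, "excess generation": 0.33}
print(f"Sum of the four components: {sum(components.values()):.0%}")   # 68%
```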

An issue raised as rooftop solar spreads further is the claim that rooftop solar customers are not paying their fair share and instead are imposing costs on other customers, who on average have lower incomes than those with rooftop solar. Yet the math behind the true rate burden for other customers is quite straightforward: if 10% of the customers are paying essentially zero (which they actually are not), the costs for the remaining 90% of the customers cannot go up more than about 11% [100% / (100% − 10%) = 111%, an 11% increase]. If low-income customers pay only 70% of that 11%, then their bills might go up about 8% (70% × 11% = 7.7%), hardly a “substantial burden.”
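
A minimal sketch generalizing that bound to any share of exiting customers (the function and its name are my own illustration, not from the original comment):

```python
# Upper bound on the rate increase borne by remaining customers if a
# share s of customers pays essentially zero (a worst case; rooftop
# solar customers actually still pay something).
def remaining_rate_increase(s: float) -> float:
    """Fractional rate increase for the remaining 1 - s of customers."""
    return 1.0 / (1.0 - s) - 1.0

s = 0.10                   # 10% of customers assumed to pay nothing
increase = remaining_rate_increase(s)            # ~0.111, about 11%
low_income_share = 0.70    # low-income (CARE) customers pay ~70% of rates
print(f"Bound on the rate increase: {increase:.1%}")                  # 11.1%
print(f"Low-income bill impact: {low_income_share * increase:.1%}")   # 7.8%
```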

As for aligning incentives for electrification, we proposed a more direct alternative on behalf of the Local Government Sustainable Energy Coalition: those who replace a gas appliance or furnace with an electric one receive an allowance (much like the all-electric baseline) priced at marginal cost, while the remainder is priced at the higher fully loaded rate. That would reduce the incentive to exit the grid when electrifying while still rewarding those who made past energy efficiency and load reduction investments.

The solution to high rates cannot come from simple rate design; as Old Surfer Dude points out, wealthy customers are just going to exit the grid and self-provide. Rate design is just rearranging the deck chairs. The CPUC tried the same thing in the late 1990s with telecom on the assumption that customers would stay put. Instead, customers migrated to cell phones and dropped their landlines. The real solution is going to require some good old-fashioned capitalism, with shareholders and associated stakeholders absorbing the costs of their mistakes and greed.

Obstacles to nuclear power, but how much do we really need it?

Jonathan Rauch writes in the Atlantic Monthly about the innovations in nuclear power technology that might overcome its troubled history. He correctly identifies the core of the problem for nuclear power, although it extends even further than he acknowledges. Recent revelations about the fragility of France’s once-vaunted nuclear fleet illustrate deeper management problems with the technology. Unfortunately, he is too dismissive of the safety issues and even the hazardous duties that recovery crews experienced at both Chernobyl and Fukushima. Both of those accidents cost their nations hundreds of billions of dollars. As a result of these issues, nuclear power around the world now costs over 10 cents per kilowatt-hour. Grid-scale solar and wind power, in contrast, cost less than four cents, and even adding storage no more than doubles that cost. And this ignores the competition from small-scale distributed energy resources (DER) that could break the utility monopoly required to pay for nuclear power.

Yet Rauch’s biggest error is asserting without sufficient evidence that nuclear power is required to achieve greenhouse gas emission reductions. Numerous studies (including for California) show that we can get to a power grid that is 90% emission-free and beyond with current technologies and no nuclear. We have two decades to figure out how to get to the last 10% or less, or to determine whether we even need to.

The problem with new nuclear technologies such as small modular reactors (SMRs) is that they must be built at a wide scale, as a high proportion of the power supply, to achieve technological cost reductions of the type we have seen for solar and batteries. And to get a low enough cost per kilowatt-hour, those units must run constantly in baseload mode, which only exacerbates the variable output issue for renewables instead of solving it. Running in a load-following mode will increase the cost per kilowatt-hour by 50%, as the sketch below illustrates.
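
A minimal sketch of that arithmetic, under the simplifying assumption that fixed capital and O&M dominate an SMR’s costs (the $/kW-year figure is purely illustrative, not a vendor quote):

```python
# Levelized fixed-cost recovery per kWh scales inversely with the
# capacity factor when capital costs dominate, as they do for nuclear.
HOURS_PER_YEAR = 8760

def fixed_cost_per_kwh(annual_fixed_cost_per_kw: float,
                       capacity_factor: float) -> float:
    """$/kWh needed to recover fixed costs at a given capacity factor."""
    return annual_fixed_cost_per_kw / (HOURS_PER_YEAR * capacity_factor)

annual_fixed = 630.0  # illustrative $/kW-year of capital plus fixed O&M
baseload = fixed_cost_per_kwh(annual_fixed, 0.90)        # ~$0.080/kWh
load_following = fixed_cost_per_kwh(annual_fixed, 0.60)  # ~$0.120/kWh
print(f"Baseload (90% CF): ${baseload:.3f}/kWh")
print(f"Load following (60% CF): ${load_following:.3f}/kWh")
print(f"Cost increase: {load_following / baseload - 1:.0%}")  # 50%
```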

We should continue research in this technology because there may be a breakthrough that solves these dilemmas. But we should not plan on needing it to save our future. We have been disappointed too many times already by empty promises from this industry.

Paradigm change: building out the grid with renewables requires a different perspective

Several observers have asserted that we will require baseload generation, probably nuclear, to decarbonize the power grid. Their claim is that renewable generation isn’t reliable enough and is too distant from load centers to power an electrified economy.

The problem is that this perspective relies on a conventional approach to understanding and planning for future power needs. That conventional approach generally planned to meet the highest peak loads of the year with a small margin and then used the excess capacity to produce the energy needed in the remainder of the hours. This premise was based on using consumable fuel to store energy for use in the hours when electricity was needed.

Renewables such as solar and wind present a different paradigm. Renewables capture and convert energy to electricity as it becomes available. The next step is to store that energy using technologies such as batteries. That means the system needs to be built to meet energy requirements, not peak loads.

Hydropower-dominated systems have already been built in this manner. For half a century, the Pacific Northwest’s complex on the Columbia River and its tributaries had so much excess peak capacity that it could meet much of California’s summer demand. Meeting energy loads during drought years was the challenge. The Columbia River system could store up to 40% of the annual runoff in its reservoirs to assure sufficient supply.

For solar and wind, we will build capacity that is a multiple of the annual peak load so that we can generate enough energy to meet the loads that occur when the sun isn’t shining and the wind isn’t blowing. For example, in a system relying on solar power, a typical demand load factor is 60%, i.e., the average load is 60% of the peak or maximum load. A typical solar photovoltaic capacity factor is 20%, i.e., it generates an average output that is 20% of its peak output. In this example system, the required solar capacity would be three times the peak demand (60% / 20% = 3) to produce sufficient energy, including stored electricity. The amount of storage capacity would equal the peak demand (plus a small reserve margin) less the amount of expected renewable generation during the peak hour.
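
A minimal sketch of that sizing logic, using the example’s round numbers (the reserve margin and the solar output during the peak hour are illustrative assumptions of mine):

```python
# Energy-first sizing for a solar-plus-storage system, per the example:
# build capacity to meet annual energy, then size storage for the peak.
peak_demand_mw = 1000.0       # normalize the system peak to 1,000 MW
load_factor = 0.60            # average load = 60% of peak
solar_capacity_factor = 0.20  # average solar output = 20% of nameplate
reserve_margin = 0.05         # small planning reserve (illustrative)
solar_at_peak_hour = 0.25     # solar output during the peak hour, as a
                              # fraction of nameplate (illustrative)

average_load_mw = load_factor * peak_demand_mw
# Nameplate solar needed so average output covers average load:
solar_capacity_mw = average_load_mw / solar_capacity_factor   # 3x peak
# Storage output: peak plus reserve, less expected solar at the peak hour.
storage_mw = (peak_demand_mw * (1 + reserve_margin)
              - solar_at_peak_hour * solar_capacity_mw)

print(f"Solar capacity: {solar_capacity_mw:.0f} MW "
      f"({solar_capacity_mw / peak_demand_mw:.1f}x peak)")    # 3.0x
print(f"Storage output: {storage_mw:.0f} MW")                 # 300 MW
```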

As a result, comparing the total amount of generation capacity installed to the peak demand becomes irrelevant. Instead we first plan for total energy need and then size the storage output to meet the peak demand. (And that storage may be virtually free as it is embodied in our EVs.) This turns the conventional planning paradigm on its head.

In the LA Times – looking for alternative solutions to storm outages

I was interviewed by a Los Angeles Times reporter about the recent power outages in Northern California as a result of the wave of storms. Our power went out for 48 hours on New Year’s Eve and again for 12 hours the next weekend:

After three days without power during this latest storm series, Davis resident Richard McCann said he’s seriously considering implementing his own microgrid so he doesn’t have to rely on PG&E.

“I’ve been thinking about it,” he said. McCann, whose work focuses on power sector analysis, said his home lost power for about 48 hours beginning New Year’s Eve, then lost it again after Saturday for about 12 hours.

While the storms were severe across the state, McCann said Davis did not see unprecedented winds or flooding, adding to his concerns about the grid’s reliability.

He said he would like to see California’s utilities “distributing the system, so people can be more independent.”

“I think that’s probably a better solution rather than trying to build up stronger and stronger walls around a centralized grid,” McCann said.

Several others were quoted in the article offering microgrids as a solution to the ongoing challenge.

Widespread outages occurred in Woodland and Stockton despite winds that were not exceptionally strong compared to recent experience. Given the widespread outages two years ago and the three “blue sky” multi-hour outages we had in 2022 (and none during the September heat storm, when 5,000 Davis customers lost power), I’m doubtful that PG&E is ready for what’s coming with climate change.

PG&E instead is proposing to invest up to $40 billion over the next eight years to protect service reliability for 4% of its customers by undergrounding wires in the foothills, which will raise our rates by up to 70% by 2030! An alternative, cost-effective solution that would cost 80% to 95% less is sitting before the Public Utilities Commission but is unlikely to be approved. There’s another opportunity to head off PG&E and send some of that money towards fixing our local grid coming up this summer under a new state law.

While winds have been strong, they have not been in the 99%+ range of experience that should lead to multiple catastrophic outcomes in short order. Having two major events within a week, plus the outage in December 2020, shows that these are not statistically unusual. We experienced similar fierce winds in the past without such extended outages; prior to 2020, Davis experienced only two extended outages in the previous two decades, in 1998 and 2007. Clearly the lack of maintenance on an aging system has caught up with PG&E. PG&E should reimagine its rural undergrounding program for mitigating wildfire risk to use microgrids instead. That would free up most of the billions it plans to spend on less than 4% of its customer base to instead harden its urban grid.

Per Capita: Climate needs more than just good will

I wrote this guest column in the Davis Enterprise about the City’s Climate Action and Adaptation Plan. (Thank you John Mott-Smith for extending the privilege.)

Dear Readers, the guest column below was written by Richard McCann, a Davis resident and expert on energy and climate action plans.

————

The city of Davis is considering its first update of its Climate Action and Adaptation Plan since 2010 with a 2020-2040 Plan. The city plans to update the CAAP every couple of years to reflect changing conditions, technologies, financing options, laws and regulations.

The plan does not and cannot achieve a total reduction in greenhouse gas emissions simply because we do not control all of the emission sources; almost three-quarters of our emissions are from vehicles that are largely regulated by state and federal laws. But it does lay out a means of putting a serious dent in the overall amount.

The CAAP offers a promising future and accepts that we have to protect ourselves as the climate worsens. Among the many benefits we can look forward to are avoiding volatile gas prices while driving cleaner, quieter cars; faster and more controllable cooking while eliminating toxic indoor air; and air conditioning and heating without having to make two investments while paying less.

To better adapt, we’ll have a greener landscape, filtered air for rental homes, and community shelter hubs powered by microgrids to ride out more frequent extreme weather.

We have already seen that adding solar panels raises the value of a house by as much as $4,000 per installed kilowatt (so a 5 kilowatt system adds $20,000). We can expect similar increases in home values with these new technologies due to the future savings, safety and convenience. 

Several state and federal laws and rules foretell what is coming. By 2045 California aims to be at zero net GHG emissions. That will require retiring all of the residential and commercial gas distribution lines. PG&E has already started a program to phase out its lines. A change in state rules will remove from the market several large natural gas appliances such as furnaces by 2030.

In addition, PG&E will no longer offer subsidies to developers to install gas lines to new homes starting next year. The U.S. Environmental Protection Agency appears poised to push further the use of electric appliances in areas with poor air quality such as the Sacramento Valley. (Renewable gas and hydrogen will be too expensive and there won’t be enough to go around.)

Without sales to new customers or for replaced furnaces, the cost of maintaining the gas system will rise substantially, so switching to electricity for cooking and water heating will save even more money. The CAAP anticipates this transition by having residents begin switching earlier.

In addition, the recently enacted federal Inflation Reduction Act puts between $400 billion and $800 billion into funding these types of changes. The California Energy Commission’s budget for this year went from $1 billion to $10 billion to finance these transitions. The CAAP lays out a process for acquiring these financial resources for Davis and its residents.

That said, some have objected to the CAAP as being too draconian and infringing on personal choices. The fact is that we are now in the midst of a climate emergency — the City Council endorsed this concern with a declaration in 2019. We’re already behind schedule to head off the worst of the threatening impacts. 

We won’t be able to rely solely on voluntary actions to achieve the reductions we need. That the CAAP has to include these actions proves that people have not been acting on their own despite a decade of cajoling since the last CAAP. While we’ve been successful at encouraging voluntary compliance with easy tasks like recycling, we’ve used mandatory permitting requirements to gain compliance with various building standards including energy efficiency measures. (These are usually enforced at point-of-sale of a house.)

We have a choice of mandatory ordinances, incentives through taxes or fees, and subsidies from grants and funds — voluntary just won’t deliver what’s needed. We might be able to financially help those least able to afford changing stoves, heaters or cars, but those funds will be limited. The ability to raise taxes or fees is restricted due to various provisions in the state’s constitution. So we are left with mandatory measures, applied at the most opportune moments. 

Switching to electricity for cooking and water heating may involve some costs, some or most of which will be offset by lower energy costs (especially as gas rates go up). If you have an air conditioner, you’re likely already set up for a heat pump to replace your furnace; it’s a simple swap. Even so, you can avoid some costs by using a 120-volt induction cooktop instead of a 240-volt one, and by installing a circuit-sharing plug or breaker for large loads to avoid panel upgrades.

The CAAP will be fleshed out and evolve for at least the next decade. Change is coming and will be inevitable given the dire situation. But this change gives us opportunities to clean our environment and make our city more livable.  

The fundamental truth of marginal and average costs

Opponents of increased distributed energy resources who advocate for centralized power distribution insist that marginal costs are substantially below retail rates, as little as 6 cents per kilowatt-hour. Yet average costs generally continue to rise. For example, a claim has been repeatedly asserted that the marginal cost of transmission in California is less than a penny a kilowatt-hour. Yet PG&E’s retail transmission rate component went from 1.469 cents per kWh in 2013 to 4.787 cents in 2022. (SDG&E’s transmission rate is now 7.248 cents!) For the average rate to increase that much, the marginal cost must be higher than 4.8 cents, and likely much higher.

Total costs equal the sum of marginal costs, and average cost is total cost divided by the number of units. Inversely, marginal cost equals the incremental change in total cost when adding a unit of demand or supply. The two concepts are interlinked, so one cannot speak of one without speaking of the other.

The chart at the top of this post shows the relationship of marginal and average costs. Most importantly, it is not mathematically possible to have rising average costs when marginal costs are below average costs. So any assertion that transmission marginal costs are less than the average costs of transmission, given that average costs are rising, must be mathematically false.
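
The underlying arithmetic is a one-line derivative from standard cost theory (my notation): with total cost C(q) and average cost AC(q) = C(q)/q,

$$
\frac{d\,AC(q)}{dq} = \frac{d}{dq}\left[\frac{C(q)}{q}\right] = \frac{MC(q) - AC(q)}{q},
$$

so average cost rises if and only if marginal cost exceeds average cost. Observed rising transmission rates therefore imply MC > AC.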

Don’t get too excited about the fusion breakthrough yet

The U.S. Department of Energy announced on December 13 that a net positive fusion reaction had been achieved at the Lawrence Livermore National Laboratory. While impressive, one aside raises another substantial barrier:

“(T)he fusion reaction creates neutrons that significantly stress equipment, and could potentially destroy that equipment.”

While the momentary burst produced about 1.5 times the energy delivered by the lasers, the lasers required roughly 150 times more energy from the grid than they delivered to the target.
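
For perspective, a minimal sketch of the two gain ratios using the widely reported figures for the December 2022 shot (approximate values; see DOE’s announcement for the exact numbers):

```python
# Widely reported figures for the December 2022 NIF shot (approximate).
fusion_yield_mj = 3.15     # fusion energy released by the target
laser_delivered_mj = 2.05  # laser energy delivered to the target
wall_plug_mj = 300.0       # rough grid energy drawn to fire the lasers

target_gain = fusion_yield_mj / laser_delivered_mj   # ~1.5
wall_plug_gain = fusion_yield_mj / wall_plug_mj      # ~0.01
print(f"Target gain: {target_gain:.2f}x the laser energy delivered")
print(f"Wall-plug gain: {wall_plug_gain:.3f} "
      f"(the facility drew ~{wall_plug_mj / laser_delivered_mj:.0f}x "
      f"more energy than the lasers delivered)")
```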

The technology won’t be ready for use until at least 2060, which is a decade after the goal of achieving net zero carbon emissions. That means that we need to plan and progress without relying on this energy source.