Tag Archives: market design

The real lessons from California’s 2000-01 electricity crisis and what they mean for today’s markets

The recent reliability crises for the electricity markets in California and Texas ask us to reconsider the supposed lessons from the most significant extended market crisis to date: the 2000-01 California electricity crisis. I wrote a paper two decades ago, The Perfect Mess, that described the circumstances leading up to the event. Two other common threads about supposed lessons have circulated since, but I do not accept either as a true solution; both are really about sharing risk once this type of crisis ensues rather than about preventing similar market malfunctions. Instead, the real lesson is that load serving entities (LSEs) must be able to sign long-term agreements that are unaffected, directly or indirectly, by variations in daily and hourly markets, so as to eliminate incentives to manipulate those markets.

The first and most popular explanation among many economists is that consumers did not see the swings in the wholesale generation prices in the California Power Exchange (PX) and California Independent System Operator (CAISO) markets. In this rationale, if consumers had seen the large increases in costs, as much as 10-fold over the pre-crisis average, they would have reduced their usage enough to limit the gains from manipulating prices. In this view, consumers should have shouldered the risks in the markets, and their cumulative creditworthiness could have ridden out the extended event.

This view is not valid for several reasons. The first and most important is that the compensation to the utilities for stranded asset investments was predicated on calculating the difference between a fixed retail rate and the utilities’ cost of service for transmission and distribution plus the wholesale cost of power in the PX and CAISO markets. Until May 2000, that difference was always positive, and the utilities were well on the way to collecting their Competition Transition Charge (CTC) in full before the end of the transition period on March 31, 2002. The deal was that if the utilities were going to collect their stranded investments, then consumers’ rates would be protected for that period. The risk of stranded asset recovery was entirely the utilities’, and both the California Public Utilities Commission in its string of decisions and the State Legislature in Assembly Bill 1890 were very clear about this assignment.

The utilities had chosen to support this approach of linking asset value to ongoing short-term market valuation over an upfront separation payment proposed by Commissioner Jesse Knight. The upfront payment would have enabled linking power cost variations to retail rates at the outset, but the utilities would have had to accept the risk of uncertain forecasts about true market values. Instead, the utilities wanted to transfer the valuation risk to ratepayers, and in return ratepayers capped their risk at the retail rates current as of 1996. Retail customers were to be protected from undue wholesale market risk, and the utilities took on that responsibility. The utilities walked into this deal willingly and as fully informed as any party.

As the transition period progressed, the utilities transferred their collected CTC revenues to their respective holding companies to be disbursed to shareholders instead of prudently retaining them as reserves until the end of the transition period. When the crisis erupted, the utilities quickly drained what cash they had left and had to go to the credit markets. In fact, if they had retained the CTC cash, they would not have had to go to the credit markets until January 2001 based on the accounts that I was tracking at the time, and PG&E would not have had a basis for declaring bankruptcy.

The CTC left the market wide open to manipulation, and it is unlikely that any simple changes in the PX or CAISO markets could have prevented this. I conducted an analysis for the CPUC in May 2000, as part of its review of Pacific Gas & Electric’s proposed divestiture of its hydro system, based on a method developed by Catherine Wolfram in 1997. The finding was that a firm owning as little as 1,500 MW (which included most merchant generators at the time) could profitably gain from price manipulation for at least 2,700 hours in a year. The only market-based solution was for LSEs, including the utilities, to sign longer-term power purchase agreements (PPAs) for a significant portion (but not 100%) of the generators’ portfolios. (Jim Sweeney briefly alludes to this solution before launching into his preferred linkage of retail rates and generation costs.)
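To illustrate the mechanics of that screen, here is a minimal sketch of a Wolfram-style withholding test. The supply curve, loads, marginal cost, and withheld quantity are all illustrative assumptions of mine, not the inputs from the CPUC analysis; the point is only that a modest portfolio can profit from withholding in a surprising number of hours.

```python
# Sketch of a Wolfram-style withholding-profitability screen.
# All numbers are illustrative assumptions, not the CPUC analysis inputs.
import numpy as np

# Merit-order supply curve: cumulative MW available at or below each price ($/MWh).
supply_mw = np.array([20000, 30000, 38000, 42000, 45000, 47000])
supply_price = np.array([15.0, 25.0, 35.0, 60.0, 120.0, 250.0])

def clearing_price(load_mw):
    """Price of the marginal supply step needed to serve the load."""
    idx = np.searchsorted(supply_mw, load_mw)
    return supply_price[min(idx, len(supply_price) - 1)]

def withholding_profitable(load_mw, firm_mw, withheld_mw, firm_mc=30.0):
    p0 = clearing_price(load_mw)
    p1 = clearing_price(load_mw + withheld_mw)   # withdrawal shifts the curve left
    gain = (p1 - p0) * (firm_mw - withheld_mw)   # higher price on remaining sales
    loss = max(p0 - firm_mc, 0.0) * withheld_mw  # margin foregone on withheld MW
    return gain > loss

# Synthetic hourly loads for a year around a 35 GW average.
rng = np.random.default_rng(0)
loads = rng.normal(35000, 5000, 8760).clip(22000, 46000)

hours = sum(withholding_profitable(l, firm_mw=1500, withheld_mw=500) for l in loads)
print(f"Hours in which withholding pays for a 1,500 MW firm: {hours}")
```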

Unfortunately, State Senator Steve Peace introduced a budget trailer bill in June 2000 (as Public Utilities Code Section 355.1, since repealed) that forced the utilities to sign PPAs only through the PX which the utilities viewed as too limited and no PPAs were consummated. The utilities remained fully exposed until the California Department of Water Resources took over procurement in January 2001.

The second problem was a combination of unavailable technology and billing systems. Customers did not yet have smart meters, and paper bills could lag as much as two months after initial usage. There was no real way for customers to respond in near real time to high generation market prices (even assuming that they would have been paying attention to such an obscure market). And as we saw in Texas during Winter Storm Uri in 2021, the only available consumer response for too many was to freeze to death.

This proposed solution is really about shifting risk from utility shareholders to ratepayers, not a realistic market solution. But as discussed above, at the core of the restructuring deal was a sharing of risk between customers and shareholders–a deal that shareholders failed to keep when they transferred all of the cash out of their utility subsidiaries. If ratepayers are going to take on the entire risk (as keeps being proposed), then either the authorized return should be set at the corporate bond debt rate or the utilities should simply be publicly owned.

The second explanation of why the market imploded was that decentralization created a lack of coordination in providing enough resources. In this view, the CDWR rescue in 2001 righted the ship, but the exodus of load to community choice aggregators (CCAs) now threatens system integrity again. The preferred solution for the CPUC is now to reconcentrate power procurement and management with the IOUs, thus killing the remnants of restructuring and markets.

The problem is that the current construct of the Power Charge Indifference Adjustment (PCIA) exit fee similarly leaves the market open to potential manipulation. And we’ve seen how virtually unfettered procurement between 2001 and the emergence of the CCAs resulted in substantial excess costs.

The real lessons from the California energy crisis are twofold:

  • Any stranded asset recovery must be done as a one-time or fixed payment based on the market value of the assets at the moment of market formation. Any other method leaves market participants open to price manipulation. This lesson should be applied in the case of the exit fees paid by CCAs and customers using distributed energy resources. It is the only way to fairly allocate risks between customers and shareholders.
  • LSEs must be unencumbered in signing longer-term PPAs, but their ability to recover stranded costs also should be limited ahead of time so that they have significant incentives to procure resources prudently. California’s utilities still lack this incentive.

Are PG&E’s customers about to walk?

In the 1990s, California’s industrial customers threatened to build their own self-generation plants and leave the utilities entirely. Escalating generation costs due to nuclear plant cost overruns and too-generous qualifying facilities (QF) contracts had driven up rates, and the technology that made QFs possible also allowed large customers to consider self-generating. In response, California “restructured” its utility sector to introduce competition in the generation segment and to get the utilities out of that part of the business. Unfortunately the initiative failed, in a big way, and we were left with a hybrid system that some blame for rising rates today.

Those rising rates may be introducing another threat to the utilities’ business model, but it may be more existential this time. A previous blog post described how Pacific Gas & Electric’s 2022 Wildfire Mitigation Plan Update combined with the 2023 General Rate Application could lead to a 50% rate increase from 2020 to 2026. For standard-rate residential customers, the average rate could reach 41.9 cents per kilowatt-hour.

For an average customer, that translates to $2,200 per year per kilowatt of peak demand. Using PG&E’s cost of capital, that implies that an independent, self-sufficient microgrid costing $15,250 per kilowatt could be funded out of avoided PG&E bills.

The National Renewable Energy Laboratory (NREL) study referenced in this blog estimates that a stand-alone residential microgrid with 7 kilowatts of solar paired with a 5 kilowatt / 20 kilowatt-hour battery would cost between $35,000 and $40,000. The savings from avoiding PG&E rates could justify spending $75,000 to $105,000 on such a system, so a residential customer could save up to $70,000 by defecting from the grid. Even if NREL has underpriced and undersized this example system, that is a substantial margin.
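The arithmetic chain behind these figures can be reconstructed roughly as follows. The 10-year horizon and 7.3% discount rate are my assumptions, chosen because they reproduce the $15,250-per-kilowatt figure; the 5 kW and 7 kW capacities come from the NREL system above.

```python
# Reconstructing the grid-defection arithmetic. The 10-year horizon and
# 7.3% discount rate are my assumptions, chosen to match the $15,250/kW figure.
rate = 0.419                        # $/kWh projected PG&E residential rate
annual_avoided = 2200.0             # $ avoided per kW of peak demand per year
print(f"Implied usage: {annual_avoided / rate:,.0f} kWh per kW of peak demand")

r, years = 0.073, 10
annuity = (1 - (1 + r) ** -years) / r       # present-value annuity factor (~6.93)
pv_per_kw = annual_avoided * annuity        # ~$15,250/kW fundable from avoided bills
print(f"Fundable capital: ${pv_per_kw:,.0f} per kW")

for kw in (5, 7):                           # battery and solar ratings of the NREL system
    print(f"{kw} kW: justified spend ~ ${kw * pv_per_kw:,.0f}")   # ~$75k to ~$105k

print(f"Margin over a $35,000 system: ~${7 * pv_per_kw - 35000:,.0f}")  # ~$70,000
```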

This time it’s not just a few large customers with choice thermal demands and electricity needs—this would be a large swath of PG&E’s residential customer class. It would be the customers who are most affluent and most able to pay PG&E’s extraordinary costs. If many of these customers view this opportunity to exit favorably, the utility could truly face a death spiral that encourages even more customers to leave. Those who are left behind will demand more relief in some fashion, but those customers who already defected will not be willing to bail out the company.

In this scenario, what is PG&E’s (or Southern California Edison’s and San Diego Gas & Electric’s) exit strategy? Trying to squeeze current NEM customers likely will only accelerate exit, not stifle it. The recent two-day workshop on affordability at the CPUC avoided discussing how utility investors should share in solving this problem, treating their cost streams as inviolable. The more likely solution requires substantial restructuring of PG&E to lower its revenue requirements, including by reducing income to shareholders.

Why utility prices cannot be set using short-run marginal costs

One commentator on the Energy Institute at Haas’ blog post entitled “Everyone Should Pay a ‘Solar Tax’” points out that one version of economic theory holds that short-run marginal cost is the appropriate metric for composing efficient prices. And he points out that short-run marginal cost (SRMC) and long-run marginal cost (LRMC) should converge in equilibrium. So he implicitly concedes that long-run marginal cost is the appropriate metric if a stable long-run measure is based, as he states, on forecasts.

Even so, he misses an important aspect: using SRMC for pricing relies on important conditions such as (1) relatively free entry and exit, (2) producers bearing the full risk of their investments, and (3) no requirements for minimum supply (i.e., no reserve margins). He points out that utilities overbuild their transmission and distribution (and I’ll point out their generation) systems. I would assert that is because of market failures related to the absence of the conditions listed above: entry is restricted or prohibited, customers bear almost all of the risk, and reserve margins largely eliminate any potential for scarcity rents. In fact, California explicitly chose its reserve margin and resource adequacy procurement standards to eliminate the potential for pricing in the scarcity rents necessary for SRMC and LRMC to converge.

He correctly points out that apparent short-run marginal costs are quite low (though not quite as close to zero as he asserts), a statement that implies he expects that SRMC in a correctly functioning market would be much higher. In fact, as he states, the SRMC should converge to the LRMC. The fact is that SRMC has not risen to the LRMC on an annual average basis in decades in California: briefly in 2006, and in 2000 and 2001 when generators exerted market power, but otherwise not since the early 1980s. So why continue to insist that we should be using the current, incorrect SRMC as the benchmark when we know that it is wrong and we specifically know why it’s wrong? That we have these market failures to maintain system reliability and address the problems of network and monopolistic externalities is why we have regulation.
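To make the convergence claim concrete: under free entry (condition 1 above), a marginal unit with annualized fixed cost F enters until the scarcity rents it expects to collect just cover that cost. A minimal statement of this zero-profit condition, in my own notation:

```latex
\int_0^{8760} \max\bigl(p(t) - c,\ 0\bigr)\,dt \;=\; F
```

where p(t) is the hourly energy price, c is the unit’s marginal running cost, and F is its annualized fixed cost. Price caps, administered reserve margins, and restricted entry hold the left-hand side below F, which is exactly why observed SRMC-based prices sit persistently below LRMC.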

The solution is not to try to throw out our current regulatory scheme and then let the market price run free in the current institutional structure with a single dominant player. Avoiding market dominance is the raison d’être for economic regulation. If that is the goal, the necessary first step is introducing and sustaining enough new entrants to be able to discipline the behavior of the dominant firm. Pricing reform must follow that change, not precede it. Competitive firms will not just spontaneously appear due to pricing reform.

It’s not clear that utilities “must” recover their “fixed” investment costs. Another of the needed fixes to the current regulatory scheme to improve efficiency is having utilities bear the risks of making incorrect investment decisions. I have warned the IOUs (correctly) about overforecasting demand growth for more than a dozen years now; they will not listen to such analyses unless they have a financial incentive to do so.

Contrary to claims by this and other commentators, it is not efficient to charge customers a fixed charge beyond the service connection cost (which is about $10/month for residential customers of the California IOUs). If the utility charges a fixed cost for some portion of the rest of the grid, the efficient solution must then allow customers to sell their share of that grid to other customers to achieve Pareto-optimal allocations among the customers. We could set up a cumbersome, high-transaction-cost auction or bulletin board to facilitate these trades, but there is at least one other market mechanism that is nearly as efficient with much lower transaction costs: the dealer. (The NYSE uses a dealer market structure with market makers acting as dealers.) In the case of the utility grid, the utility that operates the grid also can act as the dealer. The most likely transaction unit would be in kilowatt-hours. So we’re left back where we started with volumetric rates. The problem with this model is not that it isn’t providing sufficient revenue certainty; that’s not an efficiency criterion. The problem is that the producer isn’t bearing enough of the risk of insufficient revenue recovery.

An alternative solution may be to set the distribution volumetric rate at the LRMC with no assurance of recovering the revenue requirement on that portion, and then recover the difference between average cost and LRMC in a fixed charge. This is the classic “lump sum” solution to monopoly pricing. The issue has been how to allocate those lump-sum payments. However, the true distribution LRMC appears to be higher than average cost now, based on how average rates have been rising.
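In symbols (my notation, not from the original post), this alternative is a two-part tariff:

```latex
\text{Bill}_i \;=\; F_i + \mathrm{LRMC}\cdot q_i,
\qquad \sum_i F_i \;=\; (\mathrm{AC} - \mathrm{LRMC})\,\bar{Q}
```

where q_i is customer i’s volumetric use, AC is the average embedded cost, and Q̄ is total sales. The unresolved question is how to apportion the F_i; and note that if LRMC exceeds AC, as the last sentence suggests, the fixed charges become credits rather than charges.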

Considerations for designing groundwater markets

The California Water Commission staff asked a group of informed stakeholders and experts about “how to shape well-managed groundwater trading programs with appropriate safeguards for communities, ecosystems, and farms.” I submitted the following essay in response to a set of questions.

In general, setting up functioning and fair markets is a more complex process than many proponents envision. Due to the special characteristics of water that make location particularly important, water markets are likely to be even more complex, and this will require more thinking to address in a way that doesn’t stifle the power of markets.

Anticipation of Performance

1. Market power is a concern in many markets. What opportunities or problems could market power create for overall market performance or for safeguarding? How is it likely to manifest in groundwater trading programs in California?

I was an expert witness on behalf of the California Parties in the FERC Energy Crisis proceeding in 2003 after the collapse of California’s electricity market in 2000-2001. That initial market arrangement failed for several reasons, including both exploitation of internal market design features and limitations on outside transactions that enhanced market power. An important requirement that can mitigate market power is the ability to sign long-term agreements, which reduces the amount of resources open to market manipulation. Clear definition of the resource accounting used in transactions is a second important element. And lowering transaction costs and increasing liquidity is a third. Note that confidentiality has not prevented market gaming in electricity markets.

Groundwater provides a fairly frequent opportunity for the exploitation of market power with the recurrence of dry and drought conditions. The electricity analogy is peak load conditions. Prices in the Texas ERCOT market went up 30,000-fold last February during such a shortage. Droughts in California happen more frequently than freezes in Texas.

The other dimension is that a groundwater sustainability agency (GSA) often has a concentration of a small number of property owners. This concentration eases the ability to manipulate prices even if buyers and sellers are anonymous. This situation is what led to the crisis in the CAISO market. (I was able beforehand to calculate the minimum generation capacity ownership required to profitably manipulate prices, and it was an amount held by many of the merchant generators in the market.) Those larger owners are also the ones most likely to have the resources to participate in certain types of market designs for which higher transaction costs act as barriers.

2. Given a configuration of market rules, how well can impacts to communities, the environment, and small farmers be predicted?

The impacts can be fairly well assessed with sufficient modeling that incorporates three important pieces of information. The first is a completely structured market design that can be tested and modeled. The second is a relatively accurate assessment of the costs for individual entities to participate in such a market. And the third is modeling the variation in groundwater depth to assess the likelihood of those swings exceeding current well depths for these groups.

Safeguards

3. What rules are needed to safeguard these water users? If not through market mechanisms directly, how could or should these users be protected?

These groups should not participate in shorter-term groundwater trading markets, such as those for annual allocations, unless they proactively elect to do so. They are unlikely to have the resources to participate in a usefully informed way. Instead, the GSAs should carve allocations out of the sustainable yields that are then distributed by any number of methods, including bidding for long-run allocations as well as direct allowances.

For tenant farmers, restrictions on landlords’ participation in short-term markets should be implemented. These can be specified through quantity limits, long-term contracting requirements, or time windows for guaranteed supplies to tenants that match lease terms.

4. What other kinds of oversight, monitoring, and evaluation of markets are needed to safeguard? Who should perform these functions?

These markets will likely require oversight to prevent market manipulation. Instituting market monitors akin to those who now oversee the CAISO electricity market and the CARB GHG allowance auctions is a potential approach. The state would most likely be the appropriate institution to provide this service. The functions for those monitors are well delineated by those other agencies. The single most important requirement for this function is clear authority and a willingness to enforce meaningful actions as a consequence of violations.

5. Groundwater trading programs could impact markets for agricultural commodities, land, labor, or more. To what degree could the safeguards offered by groundwater trading programs be undermined through the programs’ interactions with other markets? How should other markets be considered?

These interactions among different markets are called pecuniary externalities, and economists consider them intended consequences of using market mechanisms to change behavior and investments across markets. For example, establishing prices for groundwater most likely will change both cropping decisions and irrigation practices, which in turn will impact both equipment and service dealers and labor. Safeguards must be established in ways that do not directly suppress these impacts; to do otherwise defeats the very purpose of setting up markets in the first place. People will be required to change their current practices and choices as a result of instituting these markets.

Mitigation of adverse consequences should account for catastrophic social outcomes to individuals and businesses that are truly outside of their control. SGMA, and the associated groundwater markets, are intended to create economic benefits for the larger community. A piece often missing from the social benefit-cost assessment that leads to the adoption of these programs is compensation to those who lose economically from the change. For example, conversion from a labor-intensive crop to a less water-intensive one could reduce farm labor demand. Those workers should be compensated from a pool funded by the programs’ beneficiaries.

6. Should safeguarding take common forms across all of the groundwater trading programs that may form in California? To the degree you think it would help, what level of detail should a common framework specify?

Localities generally do not have the resources, expertise, or sufficient incentives to manage these types of safeguards. Further, the safeguards should be relatively uniform across the region to avoid inadvertently creating market manipulation opportunities among different groundwater markets. (That was one of the means of exploiting the CAISO market in 2000-01.) The appropriate level of detail will depend on other factors that can be identified after potential market structures are developed and a deeper understanding is gained.

7. Could transactions occurring outside of a basin or sub-basin’s groundwater trading program make it harder to safeguard? If so, what should be done to address this?

The most important consideration is the interconnection with surface water supplies and markets. Varying access to surface water will affect the relative ability to manipulate market supplies and prices. The emergence of the NASDAQ Veles water futures market presents another opportunity to game these markets.

Among the most notorious market manipulation techniques used by Enron during the Energy Crisis was one called “Ricochet,” which involved sending a trade out of state and then returning it down a different transmission line to create increased “congestion.” Natural gas market prices were also manipulated to impact electricity prices during the period. (Even the SCAQMD RECLAIM market may have been manipulated.) It is possible to imagine a similar series of trades among groundwater and surface water markets. It is not always possible to identify these types of opportunities and prepare mitigation until a full market design is specified; they are particular to situations, and general rules are not easily written.

Performance Indicators and Adaptive Management

8. Some argue that market rules can be adjusted in response to evidence a market design did not safeguard. What should the rules for changing the rules be?

In general, changing the rules for short-term markets, e.g., trading annual allocations, should be relatively easy. Investors should not be allowed to profit from market design flaws no matter how much they have spent. Changes must be carefully considered, but they also should not be easily impeded by those who are exploiting those flaws, as was the case in the fall of 2000 for California’s electricity market.

California’s water futures market slow to rise as it may not be meeting the real need

I wrote about potential problems with the NASDAQ Veles California Water Index futures market. The market is facing more headwinds, as farmers are wary of participating in a cash-settled market that does not deliver physical water.

Their reluctance illustrates a deeper problem with the belief in and advocacy for relying on short-run markets to finance capital-intensive industries. The same issue is arising in electricity, where a quarter-century experiment has been running on whether hourly energy-only markets can deliver the price signals needed to maintain reliability and generate clean energy. The problem is that making investment decisions and financing those investments rely on a relatively stable stream of costs and revenues. Some of that can be fixed through third-party contracts and other financial instruments, but the structures of the short-term markets are such that entering or exiting can influence the price and erode profits.

In the case of the California Water Index futures market, the pricing fails to recognize an important difference between physical and financial settlement of water contracts: water applied this year also keeps crops, particularly permanent ones such as orchards and vineyards, viable for next year and into the future. In other words, physical water delivers multi-year benefits while a financial transaction only addresses this year’s cashflow problem. The farmer still faces the problem of how to get the orchard to the next year.

Whether a financial cash-settlement only futures market will work is still an open question, but farmers are likely looking for a more direct solution to keeping their farming operations viable in the face of greater volatility in water supplies.

Why are real-time electricity retail rates no longer important in California?

The California Public Utilities Commission (CPUC) has been looking at whether and how to apply real-time electricity prices in several utility rate applications. “Real time pricing” involves directly linking the bulk wholesale market price from an exchange such as the California Independent System Operator (CAISO) to the hourly retail price paid by customers. Other charges such as for distribution and public purpose programs are added to this cost to reach the full retail rate. In Texas, many retail customers have their rates tied directly or indirectly to the ERCOT system market that operates in a manner similar to CAISO’s. A number of economists have been pushing for this change as a key solution to managing California’s reliability issues. Unfortunately, the moment may have passed where this can have a meaningful impact.

In California, the bulk power market costs are less than 20% of the total residential rate. Even if we throw in the average capacity prices, it only reaches 25%. In addition, California has a few needle peaks a year, compared to the much flatter, longer, more frequent near-peak loads in the East due to the differences in humidity. The CAISO market can go years without real price deviations that are consequential for bills. For example, PG&E’s system average rate is almost 24 cents per kilowatt-hour (and residential is even higher). Yet the average price in the CAISO market has remained at 3 to 4 cents per kilowatt-hour since 2001, and the cost of capacity has actually fallen to about 2 cents. Even a sustained period of high prices such as occurred last August will increase the average price by less than a penny; that’s less than 5% of the total rate. The story was different in 2005, when this concept was first offered, with an average rate of 13 cents per kilowatt-hour (and that was after the 4-cent adder from the energy crisis). In other words, the “variable” component just isn’t important enough to make a real difference.
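A back-of-the-envelope version of this point, where the 80-hour, $1,000/MWh spike is my illustrative assumption standing in for an August 2020-style event:

```python
# Why wholesale price swings barely move a California residential bill.
# The 80-hour, $1,000/MWh event is an illustrative assumption.
retail_rate = 0.24    # $/kWh, PG&E system average
energy = 0.035        # $/kWh, typical CAISO average energy price
capacity = 0.02       # $/kWh-equivalent capacity cost
print(f"Bulk power share of the rate: {(energy + capacity) / retail_rate:.0%}")

event_hours, event_price = 80, 1.0    # hours and $/kWh of a sustained price spike
added = event_hours * (event_price - energy) / 8760   # spread over the year's hours
print(f"Added average cost: {added * 100:.2f} cents/kWh "
      f"({added / retail_rate:.1%} of the retail rate)")   # <1 cent, <5%
```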

Ahmad Faruqui, who has been a long-time advocate for dynamic retail pricing, wrote in a LinkedIn comment:

“Airlines, hotels, car rentals, movie theaters, sporting events — all use time-varying rates. Even the simple parking meter has a TOU rate embedded in it.”

It’s true that these prices vary with time, and electricity prices are headed that way if not there already. Yet these industries don’t have prices that change instantly with changes in demand and resource availability; the prices are often set months ahead based on expectations of supply and demand, much as traditional electricity TOU rates are set already. Additionally, in all of these industries, the price variations are substantially less than 100%. But for electricity, when the dynamic price changes are important, they can be up to 1,000%. I doubt any of these industries would use pricing variations that large, for practical reasons.

Rather than pointing out that this tool is available and that some types of these prices are used elsewhere, we should be asking why the tool isn’t being used here. What’s so different about electricity, and are we making the right comparisons?

Instead, we might look at a different package to incorporate customer resources and load dynamism based on what has worked so far.

  • First is to have TOU pricing with predictable patterns. California largely already has this in place, and many customer groups have shown how they respond to this signal. In the Statewide Pilot on critical peak pricing (CPP), the bulk of the load shifting occurred due to the implementation of a base TOU rate, and the incremental CPP effect was relatively small.
  • Second, to enable more distributed energy resources (DER), is to have fixed-price contracts akin to generation PPAs. Everyone then understands the terms of the contracts, instead of the implicit arrangement of net energy metering (NEM) that is now unsatisfactory for everyone. It also means that we have to get away from the mistaken belief that short-run prices or marginal costs represent “market value” for electricity assets.
  • Third for managing load we should have robust demand management/response programs that target the truly manageable loads, and we should compensate customers based on the full avoided costs created.

Is the NASDAQ water futures market transparent enough?

Futures markets are settled either physically, with actual delivery of the contracted product, or via cash, based on the difference between the futures contract price and the actual purchase price. The NASDAQ Veles California Water Index futures market is a cash-settled market. In this case, the “actual” price is constructed by a consulting firm based on a survey of water transactions. Unfortunately, this method may not be fully reflective of the true market prices and, as we found in the natural gas markets 20 years ago, such indices can be easily manipulated.

Most commodity futures markets, such as those for crude oil or pork bellies, have a specific delivery point, such as Brent North Sea Crude or West Texas Intermediate at Cushing, Oklahoma, or Chicago for some livestock products. There is also an agreed-upon set of standards for the commodities, such as quality and delivery conditions. The problem with the California Water Index is that these various attributes are opaque or even unknown.

Two decades ago I compiled the most extensive water transfer database to date in the state. I understand the difficulty of collecting this information and properly classifying it. The bottom line is that there is not a simple way to clearly identify what is the “water transfer price” at any given time.

Water supplied for agricultural and urban water uses in California has many different attributes. First is where the water is delivered and how it is conveyed. While water pumped from the Delta gets the most attention, surface water comes from many other sources in the Sacramento and San Joaquin Valleys, as well as from the Colorado River. The cost to move this water varies greatly by location, ranging from gravity-fed deliveries to a 4,000-foot lift over the Tehachapis.

Second is the reliability and timing of availability. California has the most complex set of water rights in the U.S., and most watersheds are oversubscribed. Water with a senior right delivered during the summer is more valuable than water with a junior right delivered in the winter.

Third is the quality of the water. Urban districts will compete for higher quality sources, and certain agricultural users can use higher salinity sources than others.

A fourth dimension is that water transfers are signed for different periods and delivery conditions as well as other terms that directly impact prices.

All of these factors lead to a spread in prices that is not well represented by a single price “index.” This becomes even more problematic when a single entity such as the Metropolitan Water District enters the market and purchases one type of water, which skews the “average.” Bart Thompson at Stanford has asked whether this index will reflect local variations sufficiently.

Finally, many of these transactions are private deals between public agencies that do not reveal key attributes of these transfers, particularly price, because there is no open market reporting requirement. A subsequent study of the market by the Public Policy Institute of California required explicit cooperation from these agencies and months of research. Whether a “real time” index is feasible in this setting is a key question.

The index managers have not been transparent about how the index is constructed. The delivery points are not identified, nor are the sources. Whether transfers are segmented by water right and term is not listed. Whether certain short-term transfers such as the State Water Project Turnback Pool are included is not listed. Without this information, it is difficult to assess the veracity of the reported index, and equally difficult to forecast its direction.

The housing market has many of these same attributes, which is one reason why you can’t buy a house through a central auction house or from a dealer. There are just too many different dimensions to be considered. There is a housing futures market, but housing has one key difference from the water transfer market: the price and terms are publicly reported to a government agency (usually a county assessor). Companies such as CoreLogic collect and publish this data (which is distributed by Zillow and Redfin).

In 2000, natural gas prices into California were summarized in a price index reported by Natural Gas Intelligence. The index was based on a phone survey that did not require verification of actual terms. As part of the electricity crisis that broke out that summer, gas traders found that they could push gas prices for sales to electricity generators higher simply by misreporting those prices or by making multiple sequential deals that ratcheted up the price. The Federal Energy Regulatory Commission and the Commodity Futures Trading Commission were forced to step in and establish standards for price reporting.

The NASDAQ Veles index has many of the same attributes as the gas market had then, but perhaps with even fewer regulatory protections. It is not clear how a federal agency could compel public agencies, including the U.S. Bureau of Reclamation, to report and document prices. Oversight of transactions by water districts is widely dispersed and usually assigned to the local governing board.

Trying to introduce a useful mechanism to this market sounds like an attractive option, but the barriers that have impeded other market innovations may be too much.

ERCOT has set the peak period scarcity price too high

The freeze and resulting rolling outages in Texas in February highlighted the unique structure of the power market there. Customers and businesses were left with huge bills that have little to do with actual generation expenses. This is a consequence of Texas’s attempt to fit an arcane interpretation of an economic principle under which generators should be able to recover their investments from sales in just a few hours of the year. The problem is that basic accounting for those cashflows does not match the true value of the power in those hours.

The Electric Reliability Council of Texas (ERCOT) runs an unusual wholesale electricity market that supposedly relies solely on hourly energy prices to provide the incentives for new generation investment. However, ERCOT in fact uses the same type of administratively set subsidies to create enough potential revenue to cover investment costs. Further, a closer examination reveals that this price adder is set too high relative to actual consumer value for peak load power. All of this leads to the conclusion that relying solely on short-run hourly prices as a proxy for the market value that accrues to new entrants is a misplaced metric.

The ERCOT market first relies on side payments to cover commitment costs (which creates barriers to entry, but that’s a separate issue) and second transfers consumer value through the Operating Reserve Demand Curve (ORDC), which uses a fixed value of lost load (VOLL) in an arbitrary manner to create “opportunity costs” (more on that definition at a later time) so the market can have sufficient scarcity rents. This second price adder is at the core of ERCOT’s incentive system: energy prices alone are insufficient to support new generation investment. Yet ERCOT has ignored basic economics and set this value too high based on both the alternatives available to consumers and basic regional budget constraints.

I started with an estimate of the number of hours during which prices need the ORDC to be at the full VOLL of $9,000/MWh to recover the annual revenue requirements of a combustion turbine (CT) investment, based on the parameters we collected for the California Energy Commission. It turns out to be about 20 to 30 hours per year. Even if the cost in Texas is 30% less, this is still more than 15 hours annually, every single year or at least on average. (That has not been happening in Texas to date.) Note that for other independent system operators (ISOs) such as the California ISO (CAISO), the price cap is $1,000 to $2,000/MWh.
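A rough reconstruction of that calculation, using illustrative CT cost parameters rather than the actual CEC figures:

```python
# Hours at the $9,000/MWh cap needed to recover a combustion turbine's
# annual fixed costs. All CT parameters are illustrative assumptions.
capex = 1300.0        # $/kW overnight cost (assumed)
crf = 0.13            # capital recovery factor, ~20 yr at ~11% WACC (assumed)
fixed_om = 20.0       # $/kW-year fixed O&M (assumed)
annual_fixed = capex * crf + fixed_om     # ~$190/kW-year

voll = 9.0            # $/kWh ERCOT price cap
running_cost = 0.05   # $/kWh fuel + variable O&M (assumed)
hours = annual_fixed / (voll - running_cost)
print(f"Hours at the cap needed per year: {hours:.0f}")   # ~20

# At a $1,000-2,000/MWh cap (CAISO-style), the same unit would need
# roughly 100-200 such hours per year instead.
```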

I then calculated the cost to a customer of instead using a home generator to meet load during those hours, assuming a generator life of 10 to 20 years. That cost should set a cap on the VOLL for residential customers as their opportunity cost. The average unit costs about $200/kW, and an expensive one about $500/kW. The resulting cost ranges from $3 to $5 per kWh, or $3,000 to $5,000/MWh. (If storage becomes more prevalent, this cost will drop significantly.) And that’s for customers who care about periodic outages; most just ride out a distribution system outage of a few hours with no backup. (Of course, if I experienced 20 hours a year of outages, I would get a generator too.) This calculation ignores the added value of using the generator during other distribution system outages created by events like the hurricanes that hit Texas every few years. That added value drives down this cost even further, making the $9,000/MWh ORDC adder appear even more distorted.
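The amortization behind that $3 to $5 per kWh range can be sketched as follows; the discount rate and hours of use per year are my assumptions, and fuel is ignored, which understates the cost somewhat:

```python
# Implied consumer opportunity cost of backup generation: amortize the
# generator's capital cost over the hours it substitutes for grid power.
# Discount rate and hours served are assumptions; fuel and O&M are ignored.
def backup_cost_per_kwh(capex_per_kw, life_years, hours_per_year, r=0.07):
    crf = r / (1 - (1 + r) ** -life_years)   # capital recovery factor
    return capex_per_kw * crf / hours_per_year

for capex, life in ((200, 10), (500, 20)):
    for hours in (10, 25):
        print(f"${capex}/kW, {life}-yr life, {hours} h/yr: "
              f"~${backup_cost_per_kwh(capex, life, hours):.2f}/kWh")
```

At roughly 10 hours of use per year the results land in the $3 to $5 per kWh range; at the full 20 to 30 ORDC hours they are lower still, which only strengthens the point that $9,000/MWh overstates residential VOLL.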

The second calculation I did was to look at the cost of an extended outage. I used the outages during Hurricane Harvey in 2017 as a useful benchmark event. Based on ERCOT and U.S. Energy Information Administration reports, it looks like 1.67 million customers were without power for 4.5 days. Using the Texas gross state product (GSP) of $1.9 trillion as reported by the St. Louis Federal Reserve Bank, I calculated the economic value lost over 4.5 days, assuming a 100% loss, at $1.5 billion. If we assume that the electricity outage is 100% responsible for that loss, the lost economic value per MWh is just under $5,000/MWh. This represents the budget constraint on the willingness to pay to avoid an outage. In other words, the Texas economy can’t afford to pay $9,000/MWh.
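One way to approximately reproduce those figures; valuing each affected customer at per-capita GSP and the 1.5 kW average demand per customer are my assumptions, not numbers from the original reports:

```python
# Budget-constraint check from the Hurricane Harvey outage. Valuing each
# affected customer at per-capita GSP and assuming 1.5 kW average demand
# per customer are assumptions made to approximate the text's figures.
gsp = 1.9e12          # Texas gross state product, $/year
population = 29e6     # approximate Texas population
customers = 1.67e6    # customers without power
days = 4.5

lost_value = customers * (gsp / population / 365) * days
print(f"Lost economic value: ${lost_value / 1e9:.2f} billion")   # ~$1.4B

avg_kw = 1.5          # assumed average demand per customer
lost_mwh = customers * days * 24 * avg_kw / 1000
print(f"Implied value of lost load: ${lost_value / lost_mwh:,.0f}/MWh")  # ~$5,000
```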

The recent set of rolling blackouts in Texas provides another opportunity to update this budget constraint calculation in a different circumstance. This can be done by determining the reduction in electricity sales and the decrease in state gross product in the period.

Using two independent methods, I come up with an upper bound of $5,000/MWh, and likely much less. One commentator pointed out that ERCOT would not be able to achieve a sufficient planning reserve level at this price, but that statement is based on the premises that short-run hourly prices reflect full market values and will deliver the “optimal” resource mix. Neither is true.

This type of hourly pricing overemphasizes peak load reliability value and undervalues other attributes such as sustainability and resilience. These prices do not reflect the full incremental cost of adding new resources that deliver additional benefits during non-peak periods such as green energy, nor the true opportunity cost that is exercised when a generator is interconnected rather than during later operations. Texas has overbuilt its fossil-fueled generation thanks to this paradigm. It needs an external market based on long-run incremental costs to achieve the necessary environmental goals.

Moving beyond the easy stuff: Mandates or pricing carbon?


Meredith Fowlie at the Energy Institute at Haas posted a thought-provoking (for economists) blog post on whether economists should continue promoting the pricing of carbon emissions.

I believe, however, that this question should be answered in the context of an evolving regulatory and technological process.

Originally, I argued for a broader role for cap & trade in the 2008 CARB AB32 Scoping Plan on behalf of EDF. Since then, I’ve come to believe that, for administrative reasons, a carbon tax is probably preferable to cap & trade when we turn to economy-wide strategies. (California’s cap-and-trade program is burdensome and loophole-ridden.) That said, one of my prime objections at the time to the Scoping Plan was the high expense of the mandated measures, and that the plan left the most expensive tasks to be solved by “the market” without giving the market the opportunity to capture the more efficient reductions first.

Fast forward to today, and we face an interesting situation because the costs of renewables and supporting technologies have plummeted. It is possible that within the next five years solar, wind, and storage will be less expensive than new fossil generation. (The rest of the nation is benefiting from California’s initial, if mismanaged, investment.) That makes the effective carbon price in the electricity sector negative. In this situation, I view RPS mandates as correcting a market failure in which short-term and long-term prices do not and cannot converge due to a combination of capital investment requirements and regulatory interventions. The mandates will accelerate the retirement of fossil generation that is not being retired currently due to mispricing in the market. As it is, many areas of the country are on their way to nearly 100% renewable (or GHG-free) power by 2040 or earlier.

But this and other mandates to date have not been consumer-facing. Renewables are filtered through the electric utility. Building and vehicle efficiency standards are imposed only on new products and the price changes get lost in all of the other features. Other measures are focused on industry-specific technologies and practices. The direct costs are all well hidden and consumers generally haven’t yet been asked to change their behavior or substantially change what they buy.

But that all would seem to change if we are to take the next step of gaining the much deeper GHG reductions that are required to achieve the more ambitious goals. Consumers will be asked to get out of their gas-fueled cars and choose either EVs or other transportation alternatives. And even more importantly, the heating, cooling, water heating and cooking in the existing building stock will have to be changed out and electrified. (Even the most optimistic forecasts for biogas supplies are only 40% of current fossil gas use.) Consumers will be presented more directly with the costs for those measures. Will they prefer to be told to take specific actions, to receive subsidies in return for higher taxes, or to be given more choice in return for higher direct energy use prices?

Reverse auctions for storage gaining favor


Two recent reports highlight the benefits of using “reverse auctions.” In a reverse auction, the buyer specifies a quantity to be purchased, and sellers bid to provide a portion of that quantity. An article in Utility Dive summarizes some of the experiences with renewable market auctions. A separate report in the Review of Environmental Economics and Policy goes further to lay out five guidelines (a minimal clearing sketch follows the list):

  1. Encourage a Large Number of Auction Participants
  2. Limit the Amount of Auctioned Capacity
  3. Leverage Policy Frameworks and Market Structures
  4. Earmark a Portion of Auctioned Capacity for Less-mature Technologies
  5. Balance Penalizing Delivery Failures and Fostering Competition
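To make the mechanism concrete, here is a minimal clearing sketch; the bids and the 500 MW target are hypothetical:

```python
# Minimal reverse-auction clearing sketch: the buyer fixes a quantity,
# sellers bid price/quantity pairs, and the cheapest bids clear first.
# The bids and the 500 MW target below are hypothetical.
from typing import NamedTuple

class Bid(NamedTuple):
    seller: str
    mw: float
    price: float   # $/MW-month asked

def clear(bids: list[Bid], target_mw: float) -> list[tuple[str, float]]:
    awards, remaining = [], target_mw
    for bid in sorted(bids, key=lambda b: b.price):   # merit order, cheapest first
        if remaining <= 0:
            break
        take = min(bid.mw, remaining)
        awards.append((bid.seller, take))
        remaining -= take
    return awards

bids = [Bid("A", 200, 11.0), Bid("B", 300, 9.5), Bid("C", 250, 10.2)]
print(clear(bids, target_mw=500))   # B clears fully, C partially, A is priced out
```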

This policy prescription requires well-informed policymakers balancing different factors, not a task that is well suited to a state legislature. Developing such a coherent policy can be done in two ways. The first is to let a state commission work through a proceeding to set an overall target and structure. But perhaps a more fruitful approach would be to let local utilities, such as California’s community choice aggregators (CCAs), set up individual auctions, perhaps even setting their own storage targets and then experimenting with different approaches.

California has repeatedly made errors by overly relying on centralized market structures that overcommit or mismatch resource acquisition. This arises because a mistake by a single central buyer is multiplied across all load, while a mistake by one buyer within a decentralized market is largely isolated to the load of that one buyer. Given imperfect foresight and a distinct lack of mechanisms to appropriately share risk between buyers and sellers, we should be designing an electricity market that mitigates risks to consumers rather than trying to achieve a mythological “optimal” result.