Two recent reports highlight the benefits of using “reverse auctions”. In a reverse auction, the buyer specifies a quantity to be purchased, and sellers bid to provide a portion of that quantity. An article in Utility Dive summarizes some of the experiences with renewable market auctions. A separate report in the Review of Environmental Economics and Policy goes further to lay out five guidelines:
Encourage a Large Number of Auction Participants
Limit the Amount of Auctioned Capacity
Leverage Policy Frameworks and Market Structures
Earmark a Portion of Auctioned Capacity for Less-mature Technologies
Balance Penalizing Delivery Failures and Fostering Competition
This policy prescription requires well-informed policy makers to balance several factors, a task not well suited to a state legislature. A coherent policy could be developed in two ways. The first is to let a state commission work through a proceeding to set an overall target and structure. But perhaps a more fruitful approach would be to let local utilities, such as California’s community choice aggregators (CCAs), set up individual auctions, perhaps even setting their own storage targets and then experimenting with different approaches.
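As a concrete illustration of the basic mechanism described above, here is a minimal sketch of reverse-auction clearing, assuming sealed price-quantity offers accepted in ascending price order until the buyer’s target quantity is filled (all names and numbers are hypothetical, not drawn from any actual auction):

```python
# Minimal reverse-auction sketch: the buyer specifies a target quantity,
# sellers offer (seller, price, quantity) bids, and the cheapest offers
# are accepted until the target is filled. Hypothetical illustration only.

def clear_reverse_auction(target_mw, offers):
    """offers: list of (seller, price, quantity) tuples."""
    accepted = []
    remaining = target_mw
    # Accept offers from cheapest to most expensive.
    for seller, price, qty in sorted(offers, key=lambda o: o[1]):
        if remaining <= 0:
            break
        take = min(qty, remaining)  # partial acceptance at the margin
        accepted.append((seller, price, take))
        remaining -= take
    return accepted

offers = [("A", 45.0, 60), ("B", 38.0, 40), ("C", 52.0, 80)]
print(clear_reverse_auction(100, offers))
# Accepts B's full 40 MW at $38, then 60 MW of A's offer at $45.
```

Real auction designs layer the five guidelines above onto this skeleton, for example by capping the auctioned quantity or carving out a tranche for less-mature technologies.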
California has repeatedly made errors by relying too heavily on centralized market structures that overcommit or mismatch resource acquisition. This arises because a mistake by a single central buyer is multiplied across all load, while a mistake by one buyer in a decentralized market is largely isolated to that buyer’s load. Given imperfect foresight and a distinct lack of mechanisms to appropriately share risk between buyers and sellers, we should be designing an electricity market that mitigates risks to consumers rather than trying to achieve a mythical “optimal” result.
As I listen to the opening of the joint California Customer Choice En Banc held by the CPUC and CEC, I hear Commissioners and speakers claiming that community choice aggregators (CCAs) are taking advantage of the current market and shirking their responsibilities for developing a responsible, resilient resource portfolio.
The CPUC’s view has two problems. The first is an unreasonable expectation that CCAs can start immediately as full-grown organizations with complete procurement operations and, more importantly, rock-solid credit histories. The second is that the CPUC has ignored the fact that the CCAs have in most cases already surpassed the state’s RPS targets and hold significant shares of long-term power purchase agreements (PPAs).
State law in fact penalizes excess procurement of RPS-eligible power by requiring that 65% of that specific portfolio be locked into long-term PPAs, regardless of the prudency of that policy. PG&E has already demonstrated that it has been unable to manage its long-term portfolio prudently, incurring a 3.3 cents-per-kilowatt-hour risk hedge premium on its RPS portfolio. (Admittedly, that provision could be interpreted as 65% of the RPS target, e.g., 21.5% of a portfolio that has met the 33% RPS target, but that is not clear from the statute.)
In its annual report on resource adequacy (RA) transactions, the CPUC derives the wrong market price for valuing capacity from the RA market data. The Commission’s decision in the PCIA rulemaking on establishing the CCAs’ “exit fee” uses this value in error. In the CAISO energy and ancillary services markets, the market clearing price used to value the energy portfolio is set by the highest accepted bid in each hour and then averaged across all hours. In contrast, the 2017 Resource Adequacy Report incorrectly reports the average of all transactions as the RA price. This would be equivalent to the CAISO reporting the average of all accepted bids, including those at zero or even negative prices, as the market clearing price.
The appropriate RA price metric is the highest RA transaction price for each month. This price represents the market equilibrium: the highest price a buyer is willing to pay given the lowest price at which a supplier is willing to provide that quantity of the resource. (The other transactions are called “inframarginal,” and such transactions are common in many markets.) In a full auction market, all transactions would clear at this single price, which is why the CAISO reports a single market clearing price for all transactions in a given hour. That should also be the case for the RA market price, except that the time unit is a month.
Because there is no auction at the moment, the highest apparent price can be manipulated through a single bilateral transaction. Instead, the Commission could choose a price near the highest point but with sufficient market depth to mitigate potential manipulation. Based on a quick survey of market price reports, the 90th percentile transaction price is one commonly used metric.
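The difference between these metrics can be illustrated with a toy set of monthly RA transaction prices (the numbers are hypothetical, not actual RA data):

```python
# Toy illustration of the RA price metrics discussed above, using
# hypothetical monthly transaction prices in $/kW-month. The average of
# all transactions understates the marginal (clearing) price; the 90th
# percentile approximates the highest price with some market depth.

def percentile(values, pct):
    """Nearest-rank percentile on a sorted copy of the data."""
    ordered = sorted(values)
    rank = max(1, int(round(pct / 100 * len(ordered))))
    return ordered[rank - 1]

prices = [0.5, 1.2, 2.0, 2.4, 3.1, 3.3, 3.8, 4.0, 4.6, 5.0]

avg = sum(prices) / len(prices)   # what the report publishes: 2.99
clearing = max(prices)            # marginal transaction price: 5.0
p90 = percentile(prices, 90)      # manipulation-resistant alternative: 4.6

print(avg, clearing, p90)
```

On this toy data, the reported average (2.99) sits well below both the marginal price (5.0) and the 90th percentile (4.6), which is the direction of the error described above.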
I like this taxonomy, posted at Environmental Economics, of which type of regulatory/liability framework to use in which situation. (It reminds me of a market-type structure I created for my 1996 paper on environmental commodity markets.) However, I think the two choices on the right side could be changed:
Lower right corner to “incentive-based regulation”: The damages are clear and can be valued, but engaging in market transactions is costly. For example, energy efficiency has a clear value with significant spillover benefits, but gathering information about net gains is costly for individuals. So setting an incentive standard for manufacturers, or in energy rates, is more cost effective.
Upper right corner to “command and control regulation”: The damages are known and significant, but quantifying them economically, or even physically, is difficult. There are no opportunities for market transactions, but society wants to act. In this case, the regulators would set bounds on behavior or performance.
The California Public Utilities Commission (CPUC) held a two-day workshop on rate design principles for commercial and industrial customers. To the extent possible, rates in California are designed to reflect the temporal changes in underlying costs, the “marginal costs” of power production and delivery.
Professor Severin Borenstein’s opening presentation didn’t discuss a very important aspect of marginal costs that we have ignored for too long in ratemaking: “putty/clay” differences. This is an issue of temporal consistency in marginal cost calculation. The “putty” costs are the short-term costs of operating the existing infrastructure; the “clay” costs are the longer-term costs of adding infrastructure. Sometimes operational costs can substitute for infrastructure. However, we are now adding infrastructure (clay) in renewables that has negligible operating (putty) costs. The issue we now face is how to transition the focus from putty costs to clay costs as the appropriate marginal cost signal.
Another issue, raised by Doug Ledbetter of Opterra, is that customers require certainty as well as expected returns to invest in energy-saving projects. We can give customers that certainty if the utilities vintage or grandfather rates and/or rate structures at the time customers make the investment. Rates and structures for other customers can then vary to reflect the benefits created by the customers who invested.
Jamie Fine of EDF emphasized that rate design needs to focus on what is actionable by customers more than on the best reflection of underlying costs. As intervenor group representatives, we are constantly having this discussion with utilities. Often when we make a suggestion about easing customer acceptance, they say “we didn’t think of that,” but then just move along with their original plan. The rise of DERs and CCAs is in part a response to that tone-deaf approach by the incumbent utilities.
We’re now in the midst of the “third wave” of electricity industry reform in California. The first was in the early 1980s with the rise of independently-owned cogeneration and renewable resources. Mixed with increased energy efficiency, that led to a surplus of power in the late 1990s, which in turn created the push for restructuring and deregulation. Unfortunately, poorly designed markets and other factors precipitated the 2000-01 energy crisis. The rise of renewables and distributed resources is pushing a third wave that may change the industry even more fundamentally.
I wrote a paper in 2002 on how I viewed the history of California’s electricity industry through 2001 and presented it at a conference. (It hasn’t yet been published.) In it, I identify several factors behind the eruption of the energy crisis, and the lessons we might learn for this next wave.
EDF posted a blog about the resuscitation of U.S. fisheries and how two-thirds of those fisheries are now sustainable thanks to changes in management practices. At the core of those programs are market-based incentives with individual transferable quotas (ITQ). Fishermen are allocated a certain amount of catch within a season and they can trade those quotas among themselves. The overall cap maintains the sustainability of the fishery while individual fishermen can catch an amount that best meets their own objectives and constraints.
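The key property of the ITQ mechanism described above is that trades reallocate catch among fishermen without changing the season cap. A minimal ledger sketch (all names and quantities hypothetical):

```python
# Minimal ITQ ledger sketch: individual quotas can be transferred between
# fishermen, but the season cap (the sum of all quotas) is invariant
# under any trade. Hypothetical illustration only.

def transfer(quotas, seller, buyer, tons):
    """Move `tons` of quota from seller to buyer; the total is unchanged."""
    if quotas[seller] < tons:
        raise ValueError("seller lacks sufficient quota")
    quotas[seller] -= tons
    quotas[buyer] += tons

quotas = {"boat_a": 100, "boat_b": 60, "boat_c": 40}  # season cap = 200 tons
transfer(quotas, "boat_a", "boat_c", 25)
print(quotas, sum(quotas.values()))
# The cap stays at 200 tons regardless of how the boats trade.
```

The conservation constraint lives in the cap, not in any individual allocation, which is what lets each fisherman trade toward his own best catch plan.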