As I listen to the opening of the joint California Customer Choice En Banc held by the CPUC and CEC, I hear Commissioners and speakers claiming that community choice aggregators (CCAs) are taking advantage of the current market and shirking their responsibilities for developing a responsible, resilient resource portfolio.
The CPUC’s view has two problems. The first is an unreasonable expectation that CCAs can start immediately as full-grown organizations with complete procurement operations and, more importantly, rock-solid credit histories. The second is that the CPUC has ignored the fact that the CCAs have in most cases already surpassed the state’s RPS targets and hold significant shares of long-term power purchase agreements (PPAs).
State law in fact penalizes excess procurement of RPS-eligible power by requiring that 65% of that specific portfolio be locked into long-term PPAs, regardless of the prudency of that policy. PG&E has already demonstrated that it has been unable to prudently manage its long-term portfolio, incurring a 3.3 cents per kilowatt-hour risk-hedge premium on its RPS portfolio. (Admittedly, that provision could be interpreted as 65% of the RPS target, e.g., 21.45% of a portfolio that has met the 33% RPS target, but that is not clear from the statute.)
In its annual report on resource adequacy (RA) transactions, the CPUC derives the wrong market price for valuing capacity from the RA market data. The Commission’s decision in the PCIA rulemaking establishing the CCAs’ “exit fee” uses this value in error. In the CAISO energy and ancillary services markets, the market clearing price used to value the energy portfolio is set by the highest accepted bid in each hour, then averaged across all hours. In contrast, The 2017 Resource Adequacy Report incorrectly reports the average of all transactions as the RA market price. That would be equivalent to the CAISO reporting as the market clearing price the average of all accepted bids, including those at zero or even negative prices.
The appropriate RA price metric is the highest RA transaction price for each month. That price represents the market equilibrium: the point at which the highest price a consumer is willing to pay meets the lowest price at which a supplier is willing to provide that quantity of the resource. (The other transactions are called “inframarginal,” and such transactions are common in many markets.) In a full auction market, all transactions would clear at this single price, which is why the CAISO reports a single market clearing price for all transactions in a given hour. The same should hold for the RA market price, except that the time unit is a month.
Because the RA market for the moment lacks an auction, it is possible to manipulate the highest apparent price through a single bilateral transaction. Instead, the Commission could choose a price near the top of the range but with sufficient market depth to mitigate potential manipulation. Based on a quick survey of market price reports, the 90th-percentile transaction price is one commonly used metric.
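The difference among these three candidate metrics can be made concrete with a small sketch. The transaction prices below are hypothetical, purely for illustration; the point is that the simple average the Report uses sits well below either the highest-price or 90th-percentile metric.

```python
# Three candidate RA price metrics for one month, using hypothetical
# bilateral transaction prices in $/kW-month (illustrative only).
prices = sorted([0.0, 1.25, 2.10, 2.75, 3.00, 3.40, 3.55, 3.60, 3.95, 4.20])

# Metric used in the 2017 RA Report: simple average of all transactions,
# which counts inframarginal (even zero-priced) deals against the market price.
average_price = sum(prices) / len(prices)

# Metric argued for here: the highest transaction price, analogous to the
# single market clearing price in a full auction.
clearing_price = max(prices)

# A manipulation-resistant compromise: the 90th-percentile transaction
# (nearest-rank method), which discards only the thin top of the range.
rank = max(int(0.9 * len(prices)) - 1, 0)
percentile_90 = prices[rank]

print(f"average:  {average_price:.2f}")   # 2.78
print(f"highest:  {clearing_price:.2f}")  # 4.20
print(f"90th pct: {percentile_90:.2f}")   # 3.95
```

With these illustrative numbers the averaging approach understates the capacity value by about a third relative to the clearing-price view, which is the direction of the error described above.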
I like this taxonomy, posted at Environmental Economics, of which type of regulatory/liability framework to use in which situation. (It reminds me of a market-type structure I created for my 1996 paper on environmental commodity markets.) However, I think the two choices on the right side could be changed:
Lower right corner to “incentive-based regulation”: The damages are clear and can be valued, but engaging in market transactions is costly. For example, energy efficiency has a clear value with significant spillover benefits, but gaining information about net gains is costly for individuals. So setting an incentive standard for manufacturers, or in energy rates, is more cost-effective.
Upper right corner to “command and control regulation”: The damages are known and significant, but quantifying them economically, or even physically, is difficult. There are no opportunities for market transactions, but society wants to act. In this case, the regulators would set bounds on behavior or performance.
The California Public Utilities Commission (CPUC) held a two-day workshop on rate design principles for commercial and industrial customers. To the extent possible, rates in California are designed to reflect the temporal changes in underlying costs–the “marginal costs” of power production and delivery.
Professor Severin Borenstein’s opening presentation doesn’t discuss a very important aspect of marginal costs that we have too long ignored in rate making: the issue of “putty/clay” differences, a matter of temporal consistency in marginal cost calculation. The “putty” costs are the short-term costs of operating existing infrastructure; the “clay” costs are the longer-term costs of adding infrastructure. Sometimes operational costs can substitute for infrastructure. However, we are now adding infrastructure (clay) in renewables that has negligible operating (putty) costs. The issue we now face is how to transition from putty costs to clay costs as the appropriate marginal cost signals.
Another issue, raised by Doug Ledbetter of Opterra, is that customers require certainty as well as expected returns to invest in energy-saving projects. We can give customers that certainty if the utilities vintage (grandfather) rates and/or rate structures at the time they make the investment. Rates and structures for other customers can then vary to reflect the benefits created by the customers who made those investments.
Jamie Fine of EDF emphasized that rate design needs to focus on what is actionable by customers more than on the best reflection of underlying costs. As an intervenor group representative, we are constantly having this discussion with utilities. Often when we make a suggestion about easing customer acceptance, they say “we didn’t think of that,” but then just move along with their original plan. The rise of DERs and CCAs is in part a response to that tone-deaf approach by the incumbent utilities.
We’re now in the midst of the “third wave” of electricity industry reform in California. The first was in the early 1980s with the rise of independently-owned cogeneration and renewable resources. Mixed with increased energy efficiency, that led to a surplus of power in the late 1990s, which in turn created the push for restructuring and deregulation. Unfortunately, poorly designed markets and other factors precipitated the 2000-01 energy crisis. The rise of renewables and distributed resources is pushing a third wave that may change the industry even more fundamentally.
I wrote a paper in 2002 on how I viewed the history of California’s electricity industry through 2001 and presented it at a conference. (It hasn’t yet been published.) I identify several different factors behind why the energy crisis erupted, and what lessons we might learn for this next wave.
EDF posted a blog about the resuscitation of U.S. fisheries and how two-thirds of those fisheries are now sustainable thanks to changes in management practices. At the core of those programs are market-based incentives with individual transferable quotas (ITQ). Fishermen are allocated a certain amount of catch within a season and they can trade those quotas among themselves. The overall cap maintains the sustainability of the fishery while individual fishermen can catch an amount that best meets their own objectives and constraints.
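The core ITQ mechanics described above can be sketched briefly: a fixed seasonal cap, individual allocations, and bilateral trades that redistribute quota without ever expanding the total allowed catch. All vessel names and quantities below are hypothetical.

```python
# Minimal sketch of individual transferable quotas (ITQs), assuming
# hypothetical vessels and quantities: the regulator sets a fixed seasonal
# cap, allocates shares, and trades move quota between holders while the
# total allowable catch stays fixed.
season_cap = 1000  # total allowable catch (tons)

# Initial allocations, summing to the cap.
quotas = {"vessel_a": 400, "vessel_b": 350, "vessel_c": 250}
assert sum(quotas.values()) == season_cap

def trade(quotas, seller, buyer, tons):
    """Transfer quota between two holders; the cap is unaffected."""
    if quotas[seller] < tons:
        raise ValueError("seller lacks sufficient quota")
    quotas[seller] -= tons
    quotas[buyer] += tons

# vessel_c buys 100 tons of quota from vessel_a.
trade(quotas, seller="vessel_a", buyer="vessel_c", tons=100)

# Individual holdings changed, but the sustainability cap is intact.
assert sum(quotas.values()) == season_cap
print(quotas)  # {'vessel_a': 300, 'vessel_b': 350, 'vessel_c': 350}
```

The design point is that sustainability is enforced by the cap alone; trading only reallocates catch toward whoever values it most, which is exactly the efficiency property the fisheries programs rely on.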
Bob Sussman at Brookings writes favorably about the resurrection of cap and trade for GHG regulation as a viable policy option, with the Chinese planning to implement a program and the US EPA Clean Power Plan encouraging market trading mechanisms in two forms of compliance. Yet as I read this (and also think about proposals to increase water trading to solve California’s ongoing drought), I see an important element missing from these discussions–how can these markets be designed to succeed?
In 1996, I wrote “Environmental Commodities Markets: ‘Messy’ Versus ‘Ideal’ Worlds,” which explored the issues of market design and political realities. As I’ve written recently, we are not always good at fully compensating the losers in environmental policy making, and as a result these groups tend to oppose policies that are beneficial for society. And market-incentive proponents seem always to propose some variation on one of two market designs: 1) everyone for themselves in searching for and settling transactions, or 2) a giant periodic auction.
In reality, carefully designing market institutions that work for participants is key to the success of those markets. Daniel Bromley wrote about how just “declaring markets” in Russia and Eastern Europe did not instantly transform those economies, much to our chagrin. The RECLAIM emissions market has woefully underperformed because SCAQMD didn’t think through how transactions could be facilitated (and that failure prompted my article). Frank Wolak and Jonathan Kolstad confirmed my own FERC testimony that the dysfunction of the RECLAIM market led to higher electricity prices in the California crisis of 2000-01.
For a presentation a few years ago, I prepared this typology of market structure, which looks at the search-and-match mechanisms and the price revelation and settlement mechanisms. That presentation focused on water transfer markets in California, but it is also applicable to emissions markets. Markets range from brokered/negotiated real estate to dealer/posted-price groceries. Even the New York Stock Exchange, a dealer/auction market, probably works differently than most people think. There are differences in efficiency and ease of use, which often trade off against each other. As we move forward, we need more discussion of these nuts-and-bolts issues if we want truly successful outcomes.