The California Legislature is considering a bill (AB 893) that would require the state's regulated utilities (community choice aggregators (CCAs) as well as the investor-owned utilities) to buy at least 4,250 megawatts of renewables before the federal tax credits expire in 2022.
Unfortunately, this will not create the cost savings that seem so obvious. The renewable energy plant owners made this same argument in the Diablo Canyon Power Plant retirement case (A.16-08-006), and the CPUC rejected it in its decision. While the tax credits lower current costs, those savings are more than offset by the gains from waiting for technology costs to fall even further, as the solar power forecast above shows. Combined with the time value of money (discounting), the value of waiting far outweighs any benefit from buying renewables prematurely.
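The arithmetic behind this point is straightforward to sketch. The numbers below are purely hypothetical assumptions chosen to illustrate the mechanics, not figures from the Diablo Canyon decision: a tax credit lowers today's cost, but discounting a lower future cost back to the present can still make waiting the cheaper path.

```python
# Illustrative only: compares the present value of contracting for solar now
# (net of a federal tax credit) against waiting for capital costs to fall.
# Every number here is a hypothetical assumption for demonstration.

def present_value(cost, years_deferred, discount_rate):
    """Discount a future cost back to today's dollars."""
    return cost / (1 + discount_rate) ** years_deferred

cost_now_with_credit = 45.0   # $/MWh if bought today, net of tax credit (assumed)
cost_later_no_credit = 35.0   # $/MWh projected after further cost declines (assumed)
discount_rate = 0.07          # utility cost of capital (assumed)
defer_years = 5               # how long procurement is deferred (assumed)

pv_now = cost_now_with_credit                                    # incurred immediately
pv_wait = present_value(cost_later_no_credit, defer_years, discount_rate)

print(f"PV of buying now: ${pv_now:.2f}/MWh")
print(f"PV of waiting:    ${pv_wait:.2f}/MWh")
```

Under these assumed inputs the discounted cost of waiting comes in well below buying now, even though the tax credit is forfeited; the conclusion depends on the assumed rate of cost decline and discount rate, which is exactly the comparison the CPUC weighed.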
And there’s a further problem: with a large number of customers moving from the IOUs to CCAs across all three utilities’ territories, the question is “who should be responsible for buying this power?” The CCAs will have their own preferences (often local and community-scale resources) that will conflict with any choices made by the IOUs. The CCAs are already saddled, through exit fees, with the IOUs’ poor procurement and portfolio management decisions. (PG&E has an embedded risk premium of $33 per megawatt-hour in its RPS portfolio costs.) Why would we want the IOUs to continue to mismanage our power resources?
The California Legislature is still struggling with whether and how it should protect PG&E from a $17 billion liability from the Sonoma wildfires that could push the utility into bankruptcy. The latest proposal would have the CPUC conduct a “stress test” of PG&E’s finances if it faced a large liability; PG&E could then raise rates enough to cover the difference between the total liability and the exposure deemed survivable while maintaining financial solvency. We don’t have enough detail on how well the stress threshold would be defined or how it would differ from the current cost-of-capital evaluations, but this is a bad idea regardless.
Firms need the threat of bankruptcy to perform efficiently and effectively. We’ve already seen how sloppily PG&E manages and performs, whether in maintaining vegetation (a problem since the early 1990s), tracking its pipeline maintenance (which led to the San Bruno accident), or managing risk in its renewable power portfolio (which has added a $33 per megawatt-hour premium to its costs). Clearly CPUC oversight alone is not doing the job. Outside litigation may be the only way to get PG&E’s attention, especially if it creates an existential threat.
Policymakers have taken the wrong lesson from PG&E’s previous bankruptcy, filed in 2001 during the California energy crisis. The issue that led to the final resolution there was whether PG&E was required to provide power to its customers at whatever cost. This situation is not about PG&E’s obligations but rather about its management practices, and a bankruptcy court is much less likely to require a cost pass-through.
Instead, the state could simply step in and buy PG&E for $1 if the utility declares bankruptcy (an option that Governor Gray Davis was too much of a coward to consider in March 2001). The state could then directly manage the utility, or better yet, pare it down into eight or ten smaller utilities. (Two studies in PG&E’s 1999 General Rate Case, and the subsequent decision, found that the most efficient utility size is about 500,000 customers. PG&E now has over four million.) Customers would find the smaller utilities more accessible and responsive, and by creating municipal utilities, rates could be much lower thanks to cheaper financing costs. It’s time to rethink where we should head.
Severin Borenstein at UC Berkeley argues against the “try everything” approach to searching for solutions that mitigate greenhouse gas emissions. But he is conflating situations with relatively small incremental consequences (even the California WaterFix is “small” compared to potential climate change impacts) with the potentially catastrophic stakes of climate change itself.
Instead, when facing a potentially large catastrophic outcome for which the probability distribution is completely unknown, we need a different analytic approach than a simple cost-benefit analysis based on an “expected” outcome.
Rob Lempert at the RAND Corporation writes about “robust decisionmaking” under “deep” uncertainty, which better fits this situation.
We need to be looking for which decision pathways lead us to the situations that create the most vulnerability, not for which one has the “optimal outcome.” Policymakers and stakeholders looking desperately for any solution intuitively grasp the notion of robust decisionmaking, but they are not receiving much guidance on how best to pursue this alternative approach. Economists need to lead the conversation that changes the current misleading perspective.
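One criterion that robust-decisionmaking analyses often substitute for expected-value optimization is minimax regret: choose the policy whose worst-case shortfall, across all plausible scenarios, is smallest. The sketch below uses entirely hypothetical policies, scenarios, and costs to show how the robust choice can differ from naive optimization.

```python
# A minimal sketch of minimax regret, one robustness criterion used in
# RDM-style analysis under deep uncertainty. Policies, scenarios, and
# cost figures are all hypothetical.

costs = {
    # policy: cost under each climate scenario (mild, moderate, severe)
    "do_little":      [10, 50, 500],
    "moderate_hedge": [30, 60, 120],
    "aggressive":     [80, 90, 100],
}

n_scenarios = 3
# Best achievable cost in each scenario, across all policies.
best = [min(c[s] for c in costs.values()) for s in range(n_scenarios)]

# Regret of a policy in a scenario = its cost minus that scenario's best;
# a robust policy keeps its worst-case regret small.
max_regret = {
    policy: max(c[s] - best[s] for s in range(n_scenarios))
    for policy, c in costs.items()
}

robust_choice = min(max_regret, key=max_regret.get)
print(max_regret)      # worst-case regret per policy
print(robust_choice)   # the minimax-regret policy
```

Note that no probability distribution over scenarios is needed, which is precisely why this criterion suits “deep” uncertainty where such distributions are unknown.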
Electricity customers in Davis and Yolo County are in the midst of choosing between the current incumbent electricity utility Pacific Gas & Electric (PG&E) and the new community choice aggregator (CCA) Valley Clean Energy Alliance (VCE). VCE is a joint powers authority (JPA) of the governments of Yolo County and the Cities of Davis and Woodland. (The Cities of Winters and West Sacramento have expressed interest in joining VCE as well.) By state law, customers are initially defaulted to the CCA at the outset and then given multiple chances over a six-month period to choose to stay with the incumbent investor-owned utility–PG&E in this case.
Bob Dunning in his Davis Enterprise column of August 8 confuses a lack of choice with a change in the starting point of the choice. Regardless of whether VCE or PG&E is the default provider, local customers still have exactly the same choice. But by having VCE start as the default provider, we level the playing field with the long-time giant monopoly utility, PG&E. (And customers can return to PG&E after 12 months if they are dissatisfied.) Why should we give the big guy a continued advantage at the outset?
LADWP is proposing to spend $3 billion on a pumped storage facility at Hoover Dam on the Colorado River. Yet LADWP has not been extensively using its aging 1,247 MW Castaic pumped storage plant on the State Water Project in pumping-recovery mode. Instead, LADWP runs it more like a standard hydropower plant, using pumping to supplement and extend peak power generation rather than to store excess daytime power. And the SWP’s 759 MW pumped storage plant at the Hyatt-Thermalito powerhouse at Lake Oroville has not been used effectively for decades.
The more prudent course would seem to be refurbishing and updating existing facilities, with variable-speed pumps for example, to deliver utility-scale storage that can capture excess renewable generation nearer large load centers. The State Water Contractors should be incentivized to upgrade these facilities through contracts with the state’s electric utilities. Unfortunately, no direct market mechanism exists to provide a true value for these resources so long as the California Public Utilities Commission and the California Independent System Operator avoid developing full pricing. As it stands, the current pricing scheme socializes and subsidizes a number of electricity services such as transmission, unit commitment decisions, and reliability services.
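A rough sense of what such a refurbished plant could earn from storage arbitrage alone can be sketched with a back-of-envelope calculation. The capacity figure comes from the post; the prices, cycle duration, and efficiency below are hypothetical assumptions, not LADWP or SWP data.

```python
# Back-of-envelope daily arbitrage margin for a pumped storage plant that
# charges on cheap midday solar surplus and generates at the evening peak.
# All inputs other than nameplate capacity are hypothetical assumptions.

capacity_mw = 1247        # Castaic's nameplate, per the post
hours_per_cycle = 4       # assumed daily discharge duration
round_trip_eff = 0.75     # typical pumped-hydro efficiency (assumed)
price_offpeak = 15.0      # $/MWh midday price with solar surplus (assumed)
price_peak = 60.0         # $/MWh evening peak price (assumed)

energy_out_mwh = capacity_mw * hours_per_cycle
energy_in_mwh = energy_out_mwh / round_trip_eff   # losses mean buying extra energy

daily_margin = energy_out_mwh * price_peak - energy_in_mwh * price_offpeak
print(f"Gross arbitrage margin: ${daily_margin:,.0f}/day")
```

Even this gross margin understates the plant’s full value, since the services the text notes are currently socialized (transmission, unit commitment, reliability) go unpriced in such a simple spread calculation.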
The California Public Utilities Commission (CPUC) held a two-day workshop on rate design principles for commercial and industrial customers. To the extent possible, rates in California are designed to reflect the temporal changes in underlying costs–the “marginal costs” of power production and delivery.
Professor Severin Borenstein’s opening presentation didn’t discuss a very important aspect of marginal costs that we have too long ignored in ratemaking: the “putty/clay” distinction, which is an issue of temporal consistency in marginal cost calculation. The “putty” costs are the short-term costs of operating the existing infrastructure. The “clay” costs are the longer-term costs of adding infrastructure. Sometimes operational spending can substitute for infrastructure. But we are now adding infrastructure (clay) in renewables that have negligible operating (putty) costs. The issue we now face is how to transition from putty costs to clay costs as the appropriate marginal cost signals.
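A simple levelized-cost calculation illustrates why this transition matters. In the hypothetical comparison below (all input numbers are assumptions, not workshop figures), a gas plant and a solar plant have similar total costs per megawatt-hour, yet the solar plant’s short-run “putty” marginal cost is essentially zero, so a price signal built on putty costs alone misses nearly all of its cost.

```python
# Illustrative levelized cost comparison: a gas plant with significant
# operating ("putty") cost vs. a solar plant that is almost all capital
# ("clay") cost. Every input figure is a hypothetical assumption.

def levelized_fixed_cost(capex_per_kw, life_years, discount_rate, cap_factor):
    """Annualize capital cost and spread it over annual energy output ($/MWh)."""
    crf = (discount_rate * (1 + discount_rate) ** life_years
           / ((1 + discount_rate) ** life_years - 1))  # capital recovery factor
    annual_cost = capex_per_kw * crf                   # $/kW-year
    annual_mwh_per_kw = 8760 * cap_factor / 1000       # MWh/year per kW of capacity
    return annual_cost / annual_mwh_per_kw

# Gas: modest capital cost plus an assumed $35/MWh fuel (putty) cost
gas_total = levelized_fixed_cost(1000, 30, 0.07, 0.55) + 35.0
# Solar: higher capital cost, near-zero operating (putty) cost
solar_total = levelized_fixed_cost(1300, 30, 0.07, 0.25) + 0.0

print(f"Gas:   ${gas_total:.0f}/MWh total, $35/MWh short-run marginal")
print(f"Solar: ${solar_total:.0f}/MWh total, ~$0/MWh short-run marginal")
```

Under these assumptions the two resources land within a few dollars per megawatt-hour of each other on a total-cost basis, but a rate built only on short-run marginal costs would signal $35/MWh for one and roughly nothing for the other.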
Another issue, raised by Doug Ledbetter of Opterra, is that customers require certainty as well as expected returns to invest in energy-saving projects. We can give customers that certainty if the utilities vintage (grandfather) the rates and/or rate structures in effect at the time the investment is made. Rates and structures for other customers can then vary to reflect the benefits created by the customers who made those investments.
Jamie Fine of EDF emphasized that rate design needs to focus on what is actionable by customers more than on the best reflection of underlying costs. As intervenor group representatives, we are constantly having this discussion with utilities. Often when we make a suggestion about easing customer acceptance, they say “we didn’t think of that,” but then just move along with their original plan. The rise of distributed energy resources (DERs) and CCAs is in part a response to that tone-deaf approach by the incumbent utilities.
Westinghouse has a long history, rivaling General Electric for decades. The two nuclear plants it is constructing are over budget and behind schedule (on timelines that were already nearly a decade to completion). It’s hard to believe any firm will want to take on these risks in the future.
Reports: Nuclear firm Westinghouse Electric to file for bankruptcy next week | Utility Dive
Utilities are already preparing for potential fallout if the engineering firm overseeing construction of the Vogtle and VC Summer nuclear units goes under.