A recent article by E3 and Form Energy in Utility Dive calls for more granular temporal modeling of the electric power system to better capture the constraints of a fully renewable portfolio and the requirements for supporting technologies such as storage. The authors have identified the correct problem: most current models use a “typical week” of loads averaged from historic conditions, along with probabilistic representations of unit availability. This approach fails to capture the “tail” conditions where renewables and currently available storage are unlikely to be sufficient.
But the answer is not a full-blown, hour-by-hour model of the entire year spanning many permutations of the many possibilities. These system production simulation models already take too long to run a single scenario due to the complexity of this giant “transmission machine.” Adding the required uncertainty would cause these models to run “in real time,” as some modelers describe it.
Instead, a separate analysis should first identify the conditions under which renewables plus current-technology storage are unlikely to meet demand. These include droughts that limit hydropower, extreme weather events, and extended weather patterns that suppress renewable production. These conditions can then be input into the current models to assess how the system responds.
The two important fixes, which have always been problems in these models, involve energy-limited resources and unit-commitment algorithms. Both are complex problems, and these models have not done well at scheduling seasonal hydropower pondage storage or at deciding which units to commit to meet a high demand several days ahead. (These problems are also why relying solely on hourly bulk power pricing doesn’t give an accurate measure of the true market value of a resource.) But focusing on these two problems is much easier than trying to incorporate the full range of uncertainty for all 8,760 hours for at least a decade into the future.
We should not confuse precision with accuracy. The current models can be quite precise on specific metrics, such as unit efficiency at different load points, but they can still be inaccurate because they don’t capture the effects of load and fuel price variations. We should not be trying to achieve spurious precision through more complete granular modeling; we should be focusing on accuracy in the narrow situations that matter.
Two articles with contrasting views of the future showed up in Utility Dive this week. The first was an opinion piece by an MIT professor referencing a study he coauthored that compares the costs of an electricity network in which renewables supply more than 40% of generation against one relying on advanced nuclear power. However, the report’s analysis relied on two key assumptions:
Current battery storage costs are about $300 per kWh and will remain static into the future.
Current nuclear technology costs about $76 per MWh, and advanced nuclear technology can achieve costs of $50 per MWh.
The second article immediately refuted the first assumption in the MIT study. A report from BloombergNEF found that average battery storage prices fell to $156 per kWh in 2019 and projected further decreases to $100 per kWh by 2024.
The reason this price drop is so important is that, as the MIT study pointed out, renewables will produce excess power at certain times and underproduce during peak periods. MIT assumes that system operators will have to curtail renewable generation during low-load periods and run gas plants to fill in at the peaks. (MIT pointed to California curtailing about 190 GWh in April. However, that added only 0.1% to CAISO’s total generation cost.) But if storage is that cheap, then along with inexpensive solar and wind, additional renewable capacity can be built to store power for the early evening peak. This could free us from having to plan for system peak periods and let us focus largely on energy production.
MIT’s second assumption is not validated by recent experience. As I posted earlier, the soon-to-be-completed Vogtle nuclear plant will cost ratepayers in Georgia and South Carolina about $100 per MWh, more than 30% above the assumption used by MIT. PG&E withdrew its relicensing request for Diablo Canyon because the utility projected the cost to be $100 to $120 per MWh. Another recent study found that nuclear costs worldwide exceed $100 per MWh and that plants take an average of a decade to finish.
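The size of the gap is easy to check. A quick back-of-envelope calculation using only the figures cited above:

```python
# Cost gap check: actual ~$100/MWh (Vogtle, Diablo Canyon estimates) versus the
# study's $76/MWh current-nuclear assumption and $50/MWh advanced-nuclear target.
actual, assumed, advanced = 100, 76, 50

premium = (actual - assumed) / assumed
print(f"Actual costs exceed the study's assumption by {premium:.0%}")

cut_needed = (actual - advanced) / actual
print(f"Advanced designs would need a {cut_needed:.0%} cut from actual costs")
```

In other words, reaching the study’s $50 per MWh target from today’s actual costs requires not the 34% improvement its own baseline implies, but a 50% cut.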
Another group at MIT issued an earlier report intended to revive interest in nuclear power. I’m not sure why MIT is so focused on this issue while continuing to rely on data and projections that are clearly outdated or wrong, but it does have one of the leading departments in nuclear science and engineering. It’s sad to see such a prestigious institution allowing its economic self-interest to cloud its vision of the future.
What do you see as the future of relying on renewables? Is it economically feasible to build excess renewable capacity that can fill enough storage to run the system the rest of the day? How would the costs of such a system compare to nuclear power at actual current costs? Will advanced nuclear power cut costs by 50%? Let us know your thoughts and add any useful references.
That article contains several errors and is misleading in other respects. First, California’s electricity rates are high because of the renewable contracts signed nearly a decade ago, when renewables were still evolving and much more expensive. California’s investment is part of the reason that solar and wind costs are now lower than those of existing coal plants (a new study shows 75% of coal plants are uneconomic) and competitive with natural gas. Batteries that extend renewable operations have almost become cost-effective. The article also claims that reliability has “gone down” when in fact we still have a large reserve margin: the California Independent System Operator found a 23% reserve margin when the target is only 17%. We also have the ability to install batteries quickly to address any shortfall. PG&E is installing over 500 MW of batteries right now to replace a large natural gas plant.
For the rest of the U.S., consumers will benefit from these lower costs today. Californians have paid too much for their power to date, due to mismanagement by PG&E and the other utilities, but consumers elsewhere will be able to avoid these foibles.
Two recent reports highlight the benefits of using “reverse auctions”. In a reverse auction, the buyer specifies a quantity to be purchased, and sellers bid to provide a portion of that quantity. An article in Utility Dive summarizes some of the experiences with renewable market auctions. A separate report in the Review of Environmental Economics and Policy goes further to lay out five guidelines:
Encourage a Large Number of Auction Participants
Limit the Amount of Auctioned Capacity
Leverage Policy Frameworks and Market Structures
Earmark a Portion of Auctioned Capacity for Less-mature Technologies
Balance Penalizing Delivery Failures and Fostering Competition
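The mechanics behind these guidelines can be sketched in a few lines. Here is a minimal clearing routine; pay-as-bid settlement is my simplifying assumption (many real auctions instead pay a uniform clearing price), and the bids are invented for illustration:

```python
# Minimal sketch of reverse-auction clearing under an assumed pay-as-bid rule:
# the buyer names a quantity, and the cheapest seller bids win until it is filled.

def clear_reverse_auction(target_mw, bids):
    """bids: (seller, price_per_mwh, quantity_mw) tuples. The marginal bid
    may be awarded only part of its offered quantity."""
    awarded, remaining = [], target_mw
    for seller, price, qty in sorted(bids, key=lambda b: b[1]):  # cheapest first
        if remaining <= 0:
            break
        take = min(qty, remaining)   # partial award for the marginal bidder
        awarded.append((seller, price, take))
        remaining -= take
    return awarded

bids = [("A", 32.0, 100), ("B", 28.5, 150), ("C", 45.0, 200), ("D", 30.0, 120)]
print(clear_reverse_auction(300, bids))
# [('B', 28.5, 150), ('D', 30.0, 120), ('A', 32.0, 30)]
```

Note how the second guideline shows up directly in the code: shrinking `target_mw` relative to the offered supply squeezes out the expensive bids (seller C never clears here), while a target near total offered capacity would let them in.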
This policy prescription requires well-informed policymakers balancing different factors, which is not a task well suited to a state legislature. Such a coherent policy can be developed in two ways. The first is to let a state commission work through a proceeding to set an overall target and structure. But perhaps a more fruitful approach would be to let local utilities, such as California’s community choice aggregators (CCAs), set up individual auctions, perhaps even setting their own storage targets and then experimenting with different approaches.
California has repeatedly erred by over-relying on centralized market structures that overcommit or mismatch resource acquisition. This happens because a mistake by a single central buyer is multiplied across all load, while a mistake by one buyer in a decentralized market is largely isolated to that buyer’s load. Without perfect foresight, and with a distinct lack of mechanisms to appropriately share risk between buyers and sellers, we should be designing an electricity market that mitigates risks to consumers rather than trying to achieve a mythological “optimal” result.
This post accepts too easily the conventional industry “wisdom” that the only valid price signals come from short-term responses and effects. In general, storage and demand response are likely to lead to increased renewables investment even if GHG emissions increase in the short run. This post hints at that possibility, but it doesn’t make the point explicitly. (The only exception might be the increased viability of baseloaded coal plants in the East, but even there I think the lower cost of renewables is displacing retiring coal.)
We have two facts about the electric grid that undermine the validity of short-term electricity market functionality and pricing. First, regulatory imperatives to guarantee system reliability cause new capacity to be built before any evidence of capacity or energy shortages appears in the ISO balancing markets. Second, fossil-fueled generation is no longer the incremental new resource in much of the U.S. electricity grid. While the ISO energy markets still rely on fossil-fueled generation as the “marginal” bidder, these markets are in fact just transmission balancing markets, not sources for meeting new incremental loads. Most of that incremental load is now being met by renewables with near-zero operational costs, and those resources do not directly set short-term prices. Combined with the first shortcoming, this means the total short-term price is substantially below the true marginal cost of new resources.
Storage policy and pricing should be set using long-term values and emission changes based on expected resource additions, not on tomorrow’s energy imbalance market price.
The findings are that new policy models and cost-cutting technologies would help nuclear play a vital role in climate solutions. Progress in reducing carbon emissions requires a broad range of actions to effectively leverage nuclear energy.
However, nothing in the summary reveals the paradigm-shattering innovation that would be required to make nuclear power competitive with a diverse fleet of renewables plus storage achieving the same goals. A solar plant plus storage built with today’s technology still costs less than a current-technology nuclear plant. That alternative fleet would also provide better reliability, by diversifying generation across smaller plants, and would avoid any radiation contamination risk.
The nuclear industry must clearly demonstrate that it can get past the many hurdles that led to the recent cancellation of two projects in the southeast U.S. Reviving nuclear power will require more than fantasies about what might be.
LADWP is proposing to spend $3 billion on a pumped storage facility at the Hoover Dam on the Colorado River. Yet LADWP has not been extensively using its aging 1,247 MW Castaic pumped storage plant on the State Water Project in pumping-recovery mode. Instead, LADWP runs it more like a standard hydropower plant, using pumping to supplement and extend peak power generation rather than to store excess daytime power. And the SWP’s 759 MW pumped storage plant at the Hyatt-Thermalito powerhouse at Lake Oroville has not been used effectively for decades.
The more prudent course would seem to be refurbishing and updating existing facilities, with variable-speed pumps for example, to deliver utility-scale storage that can capture excess renewable generation nearer large load centers. The State Water Contractors should be given incentives to upgrade these facilities through contracts with the state’s electric utilities. Unfortunately, no direct market mechanism exists to provide a true value for these resources so long as the California Public Utilities Commission and the California Independent System Operator avoid developing full pricing. As it stands, the current pricing scheme socializes and subsidizes a number of electricity services, such as transmission, unit commitment decisions, and reliability services.
“A Rochester Institute of Technology study says a customer must face high electricity bills and unfavorable net metering or feed-in policies for grid defection to work.”
Yet…this study used current battery costs (about $350 per kWh), ignored probable cost decreases, and then made more restrictive assumptions about how such a system might work. It’s not clear whether “defection” meant complete self-sufficiency or just replacing the generation portion of service (which in California is about half of the electricity bill). Regardless, the study shows that grid defection is already cost-effective in Hawaii, confirming the RMI findings. Even so, RMI said it would take at least 10 years before such defection became cost-effective even in high-cost states like New York and California.
A more interesting study would look at the “break-even” cost thresholds at which solar panels and batteries become competitive with utility service. Then planners and decision makers could assess the likelihood of reaching those levels within a range of time periods.
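To illustrate what such a break-even calculation might look like, here is a rough sketch. Every input below (household bill, solar cost, system sizes, discount rate, equipment lives) is a hypothetical placeholder of my own, not a figure from either study:

```python
# Hypothetical break-even sketch: find the installed battery price ($/kWh) at
# which a solar-plus-storage self-supply system's annualized cost equals a
# household's annual utility bill. All inputs are illustrative assumptions.

def crf(rate, years):
    """Capital recovery factor: converts an upfront cost to a level annual payment."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def breakeven_battery_price(annual_bill, solar_kw, solar_cost_per_kw,
                            battery_kwh, rate=0.07):
    """Battery $/kWh at which the annualized system cost equals the utility bill."""
    solar_annual = solar_kw * solar_cost_per_kw * crf(rate, 20)  # 20-yr panel life
    battery_budget = annual_bill - solar_annual  # annual dollars left for storage
    if battery_budget <= 0:
        return 0.0                               # solar alone already costs more
    return battery_budget / (battery_kwh * crf(rate, 10))  # 10-yr battery life

# A hypothetical household: $1,800/yr bill, 6 kW of solar at $2,500/kW, 20 kWh storage.
print(f"${breakeven_battery_price(1800, 6, 2500, 20):.0f}/kWh")
```

With these placeholder inputs the threshold comes out around $135 per kWh, well below today’s battery prices; the point is the method, not the number. Planners could then ask when projected cost curves cross that threshold.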
Just hooking up EV owners without compensating them for the storage services they can provide won’t be a successful or popular idea. Rather, the first step is to figure out the value of that storage. A new NREL study estimates that value at about $59 per kW-year with a 33% RPS portfolio in California, increasing to $109 per kW-year at a 40% RPS. For a typical EV, that could translate into $300 to $550 per year, or $2,000 to $5,000 over 10 years.
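The per-EV translation is simple arithmetic once you assume how much dispatchable capacity a typical EV offers the grid; the 5 kW figure below is my assumption, not NREL’s:

```python
# Back-of-envelope version of the per-EV figures above.
value_per_kw_year = {0.33: 59, 0.40: 109}  # NREL estimates by RPS level ($/kW-yr)
kw_per_ev = 5.0                            # assumed grid-usable capacity per EV

for rps, value in value_per_kw_year.items():
    annual = value * kw_per_ev
    print(f"{rps:.0%} RPS: ~${annual:.0f}/EV-year, "
          f"~${annual * 10:,.0f} undiscounted over 10 years")
```

Undiscounted, those payments total roughly $2,950 to $5,450 over 10 years; discounting future years brings the totals down toward the $2,000 to $5,000 range cited above.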
Then you assess the incremental costs to the EV owner from reduced battery life. Note that batteries degraded by 30% can no longer be used in EVs but are still valuable for grid storage; vendors could probably build in-home racks that store and connect the retired batteries. Those become factors in determining the payments to the EV owners and their agents.
As for enrolling EV owners in a storage management program, it need not be cumbersome if enrollment is the default (opt-out) when buying a car or installing a charging station. (See all of the literature on the importance of opt-out vs. opt-in and status quo bias.) The auto dealer or charging administrator becomes the agent. An EV buyer might sign up for the program and not even know it. The charging process could work much like the massive distributed computing projects that harness small parts of the idle processors across millions of personal computers. All of this becomes part of the peer-to-peer transactive energy (TE) grid.