Tag Archives: California

How to choose a water system model

The California Water & Environmental Modeling Forum (CWEMF) has proposed to update its water modeling protocol guidance, last issued in 2000. This modeling protocol applies to many other settings, including electricity production and planning (which I am familiar with). I led the review of electricity system simulation models for the California Energy Commission, and asked many of these questions then.

Questions that should be addressed in water system modeling include:

  • Models can be used for either short-term operational or long-term planning purposes—models rarely can serve both masters. The model should be chosen based on whether its analytic focus is predicting a particular outcome with accuracy and/or precision (usually for short-term operations) or identifying resilience and sustainability (usually for long-term planning).
  • There can be a trade-off between accuracy and precision, and focusing too heavily on precision in one aspect of a model is unlikely to improve its overall accuracy given the lack of precision elsewhere. In addition, increased precision increases processing time, slowing output and reducing flexibility.
  • A model should be able to produce multiple outcomes quickly as a “scenario generator” for analyzing uncertainty, risk and vulnerability. The model should be tested for accuracy when relaxing key constraints that increase processing time. For example, in an electricity production model, relaxing the unit commitment algorithm increased processing speed twelvefold while losing only 7 percent in accuracy, mostly in the extreme tail cases.
  • Water models should be able to use different water condition sequences rather than relying solely on historic traces. Models that rely only on historic traces may operate as though the future is known with certainty.
  • Water management models should include the full set of opportunity costs for water supply, power generation, flood protection and groundwater pumping. This implies that some type of linkage should exist between these types of models.

Public takeover of PG&E isn’t going to solve every problem

This article in the Los Angeles Times about a possible public takeover of PG&E rests on the premise that such a step would lead to lower costs, more efficiency and reduced wildfire risks. These expectations have never been realistic, and shouldn’t be the motivation for such an action. Instead, a public takeover would offer these benefits and opportunities:

  • While the direct costs of constructing and repairing the grid would likely be about the same (and PG&E has some of the highest labor costs around), the cost to borrow and invest the needed funds would be as much as 30% less. That’s because PG&E’s weighted average cost of capital (debt and shareholder equity) is around 8% per annum while municipal debt is 5% or less.
  • Ratepayers are already repaying shareholders and creditors for their investments in the utility system. Buying PG&E’s system would simply be replacing those payments with payments to creditors that hold public bonds. Similar to the cost of fixing the grid, this purchase should reduce the annual cost to repay that debt by 30%.
  • And along these lines, utility shareholders have borne little of the costs from these types of risks. Shareholders supposedly earn a premium on their investment returns for bearing these “risks,” but when asked, none of the utilities could point to significant examples of large-scale disallowances. If ratepayers are already bearing all of those risks, then they should get all of the investment benefits as well.
  • Direct public oversight will eliminate a layer of regulation that PG&E has used to impede effective oversight and deflect responsibility. To some extent regulation by the California Public Utilities Commission has been like pushing on a string, with PG&E doing what it wants by “interpreting” CPUC decisions. The result has been a series of missteps by the utility over many decades.
  • A new utility structure may provide an opportunity to renegotiate a number of overly lucrative renewable power purchase agreements that PG&E signed between 2010 and 2015. PG&E failed to properly manage the risk profile of its portfolio because under state law it could pass through all costs of those PPAs once approved by the CPUC. PG&E’s shareholders bore no risk, so why consider that risk? There are several possible options for addressing this issue, but PG&E has little incentive to act.
  • A publicly-owned utility can work more closely with local governments to facilitate the evolution of the energy system to meet climate change challenges. As a private entity with restrictions on how it can participate in customer-side energy management, PG&E cannot work hand-in-glove with cities and counties on building and transportation transformation. PG&E right now has strong incentives to prevent further defections away from its grid; public utilities are more likely to accept these defections with the possibility that the stranded asset costs will be socialized.
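The financing arithmetic behind the first two bullets can be sketched with a simple amortization comparison. The $1 billion principal and 30-year term below are illustrative assumptions, not figures from the post:

```python
def annual_payment(principal, rate, years):
    """Level annual payment to amortize `principal` at `rate` over `years`."""
    return principal * rate / (1 - (1 + rate) ** -years)

investment = 1_000_000_000   # hypothetical $1B of grid investment
pge_wacc = 0.08              # PG&E weighted average cost of capital (~8%)
muni_rate = 0.05             # municipal debt rate (~5% or less)
years = 30                   # assumed amortization period

pge_cost = annual_payment(investment, pge_wacc, years)
muni_cost = annual_payment(investment, muni_rate, years)
savings = 1 - muni_cost / pge_cost
print(f"PG&E financing: ${pge_cost/1e6:.1f}M/yr; municipal: ${muni_cost/1e6:.1f}M/yr; "
      f"savings = {savings:.0%}")
```

At these rates the level annual payment is roughly a quarter lower under municipal financing, consistent with the "as much as 30% less" claim.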

The risks of wildfire damages and liabilities are unlikely to change substantially (except if the last point accelerates distributed energy resource investment). But the other benefits and opportunities are likely to lower the overall costs.

Davis Should Set Its Utility Reserve Targets with a Transparent and Rigorous Method

The City of Davis Utilities Commission is considering on February 19 whether to disregard the preliminary recommendations of the Commission’s Enterprise Fund Reserve Policies subcommittee to establish a transparent, relatively rigorous and consistent method for setting City reserves. The Staff Report, written by the now-departed finance director, ignored the stated objectives of both the Utilities and Finance and Budget Commissions to develop a consistent set of policies that did not rely on the undocumented and opaque practices of other communities. Those practices had no linkage whatsoever to risk assessment, and the American Water Works Association report that the Staff again relied on to reject the Commission’s recommendation likewise fails to provide any documentation on how the proposed targets reflect risk mitigation—they are simply drawn from past practices.[1]

The City’s Finance & Budget Committee raised the question of whether the City held too much in reserves over five years ago, and the Utilities Commission agreed in 2017 to evaluate the status of the reserves for the four City enterprise funds—water, sanitation/waste disposal, sewer/wastewater, and stormwater. A Utilities Commission subcommittee reviewed the current reserve policies and what is being done by other cities. (I was on that subcommittee.) The subcommittee found that the City was using a different method for each fund, and that other cities had not conducted risk analyses to set their targets either. The subcommittee then conducted a statistical analysis that allows the City to adjust its reserve targets for changing conditions rather than just relying on the heuristic values provided by consultants.

The subcommittee’s proposal adopted initially by the Utilities Commission achieved three objectives that had been missing from the previous informal reserves policy. Two of these would still be missing under the Staff’s proposal:

  1. Clearly defining and documenting the reserves held for debt coverage. While these amounts were shown in previous rate studies, the documented source of those amounts was generally not included, and the subcommittee’s requests brought those to the fore. The Staff method appears to accept continuance of that documentation practice. The Staff also proposes to keep the debt coverage reserves separate, which differs from the past practice of rolling all reserves together.
  2. Reserve targets are first set based on the historic volatility of enterprise net income. In other words, the reserves would be determined transparently with a rigorous method on the basis of the need for those reserves. The method uses a target that is statistically beyond the 99th percentile of the probability distribution, and this target can be readily updated for new information each year. The Staff report rejects this method in favor of a target that refers to the practice of other communities, and the subcommittee’s research found that none of those practices appear to be based on analytic methods.
  3. Reserve targets are then adjusted to cover the largest single year capital improvement/replacement investment made historically to ensure enough cash for non-debt expenditures. Because the net income volatility is a joint function of revenues, operating expenditures and non-debt capital expenditures, the latter category is not separated out of the analysis. However, an added margin can be incorporated. That said, the data set for the fiscal years of 2008/2009 to 2016/2017 used by the subcommittee found that setting the target based on the volatility has been sufficient to date. The Staff report appears to call for a separate, unnecessary reserve fund for this purpose based on annual depreciation that has no relationship to risk exposure, and implicitly duplicates the debt payments already being made on these utility systems. This would be a wasteful duplication that sets the reserves too high.
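As a sketch of item 2, here is one way a volatility-based target could be computed. The net income series below is hypothetical, and the normal-approximation z-score is my assumption; the subcommittee's exact statistical method may differ:

```python
import statistics

# Hypothetical annual net income for one enterprise fund, as a percent of
# annual revenues (the subcommittee used FY2008/09-2016/17 actuals).
net_income_pct = [4.0, -6.5, 2.0, -11.0, 5.5, -3.0, 8.0, -9.5, 1.0]

mean = statistics.mean(net_income_pct)
stdev = statistics.stdev(net_income_pct)

# Set the reserve to cover a one-year shortfall beyond the 99th percentile,
# assuming (as a simplification) normally distributed net income:
# z = 2.33 at the 99th percentile.
z_99 = 2.33
reserve_target_pct = max(0.0, z_99 * stdev - mean)
print(f"volatility reserve target = {reserve_target_pct:.1f}% of annual revenues")
```

Because the target is a simple function of the historic series, it can be recomputed each year as a new fiscal year of data arrives, which is the updating property the subcommittee emphasized.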

The Finance and Budget Committee raised at least three important issues in its review:

  1. Water and sewer usage and revenues may be correlated so that the reserves may be shared between the two funds. However, further review shows that the funds have a slight negative correlation, indicating that the reserves should be held separately.
  2. The water fund already has an implicit reserve source when a drought emergency is declared because a surcharge of 25% is added to water utility charges. I agree that this should be accounted for in the historic volatility analysis. This reduces the volatility in fiscal years 2014/2015 and 2015/2016, and reduces the water fund volatility reserve from 26% to 21%.
  3. Working cash reserves are unnecessary because the utility funds are already well established (not needing a start up reserve), and that the volatility reserves already cover any significant lags in the revenues that may occur. This observation is valid, and I agree that the working cash reserves are duplicative of the other reserve requirements. The working cash reserves should be eliminated from the reserve targets for this reason.

Finally, the Staff proposal raises an issue about the appropriate basis for determining the sanitation/waste removal reserve target. The Staff proposes to base it solely on direct City expenses. However, the enterprise fund balance shows a deficit that includes the revenues and expenses incurred by the contractor, first Davis Waste Removal and then Recology. We need more specificity on which party is bearing the risk of these shortfalls before determining the appropriate reserve target. Given the current City accounting stance that incorporates those shortfalls, I propose using the Utility Commission’s proposed method for now.

Based on the analysis done by the Utilities Commission subcommittee and the recommendations of the Finance & Budget Committee, the chart above shows the target reserve percentages for each fund without the debt coverage target. It also shows the percentage reserve targets implied by the Staff’s proposed method.[2] The chart also shows the corresponding dollar amounts for the proposed total target reserves, including the debt reserves, and the cash assets held for those funds in fiscal year 2016/2017. Importantly, this new reserve target shows that the City held about $30 million of excess reserves in 2016/2017.

[1] It appears the Staff may have misread the Utilities Commission’s recommendation memorandum and confused the proposed targets policies with the inferred existing policies. This makes it uncertain as to whether the Staff fully considered what had been proposed by the Utilities Commission.

[2] The amounts shown in the October 16, 2019 Staff Report on Item 6B do not appear to be consistent with the methodology shown in Table 1 of that report.

We’ve already paid for Diablo Canyon

As I wrote last week, PG&E is proposing that a share of Diablo Canyon nuclear plant output be allocated to community choice aggregators (CCAs) as part of the resolution of issues related to the Integrated Resource Plan (IRP), Resource Adequacy (RA) and Power Charge Indifference Adjustment (PCIA) rulemakings. While the allocation makes sense for CCAs, it does not solve the problem that PG&E ratepayers are paying for Diablo Canyon twice.

In reviewing the second proposed settlement on PG&E costs in 1994, we took a detailed look at PG&E’s costs and revenues at Diablo. Our analysis revealed a shocking finding.

Diablo Canyon was infamous for increasing in cost by more than ten-fold from the initial investment to coming on line. PG&E and ratepayer groups fought over whether to allow $2.3 billion of those costs. The compromise in 1988 was to essentially shift the risk of cost recovery from ratepayers to PG&E through a power purchase agreement modeled on the Interim Standard Offer Number 4 contract offered to qualifying facilities (but suspended as oversubscribed in 1985).

However, the contract terms were so favorable to PG&E that Diablo costs negatively impacted overall retail rates. These costs were a key contributing factor that pushed industrial customers toward deregulation and restructuring. As an interim solution in 1995, in anticipation of forthcoming restructuring, PG&E and ratepayer groups arrived at a new settlement that moved Diablo Canyon back into PG&E’s regulated ratebase, earning the utility’s allowed return on capital. PG&E was allowed to keep 100% of the profit collected between 1988 and 1995. The subsequent 1996 settlement made some adjustments but arrived at essentially the same result. (See Decision 97-05-088.)

While PG&E had borne the risks for seven years, that was during the plant startup and its earliest years of operation. As we’ve seen with San Onofre NGS and other nuclear plants, operational reliability is most at risk late in the life of the plant. PG&E originally took on the risk of recovering its entire investment over the entire life of the plant. The 1995 settlement transferred the risk for recovering costs over the remaining life of the plant back to ratepayers. In addition, PG&E was allowed to roll into rate base the disputed $2.3 billion. This shifted cost recovery back to the standard rate of depreciation over the 40-year life of the NRC license. In other words, PG&E had done an end-run on the original 1988 settlement AND got to keep the excess profits.

The fact that PG&E accelerated its investment recovery over the first seven years and then shifted recovery risk to ratepayers implies that PG&E should be allowed to recover only the amount that it would have earned at a regulated return under the original 1988 settlement. This is equal to the discounted net present value of the net income earned by Diablo Canyon, over both the periods of the 1988 (PPA) and 1995 settlements.

In 1996, we calculated what PG&E should be allowed to recover in the settlement given this premise.  We assumed that PG&E would be allowed to recover the disputed $2.3 billion because it had taken on that risk in 1988, but the net income stream should be discounted at the historic allowed rate of return over the seven year period.  Based on these assumptions, PG&E had recovered its entire $7.7 billion investment by October 1997, just prior to the opening of the restructured market in March 1998.  In other words, PG&E shareholders were already made whole by 1998 as the cost recovery for Diablo was shifted back to ratepayers.  Instead the settlement agreement has caused ratepayers to pay twice for Diablo Canyon.

PG&E has made annual capital additions to continue operation at Diablo Canyon since then, and a regulated return is allowed under the regulatory compact. Nevertheless, the correct method for analyzing the potential loss to PG&E shareholders from closing Diablo is to first subtract $5.1 billion from the plant in service, reducing the current ratebase to the capital additions incurred since 1998. This would reduce the sunk costs that are to be recovered in rates from $31 to $3 per megawatt-hour.

Note that PG&E shareholders and bondholders have earned a weighted return of approximately 10% annually on this $5.1 billion since 1998. By 2014, the compounded value of that excess return earned by PG&E was $18.1 billion.
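The compounding behind that figure can be checked roughly. A flat 10% return from 1998 to 2014 lands near, though not exactly on, the $18.1 billion figure; the exact year-by-year return series behind the post's number is not given:

```python
principal = 5.1e9     # Diablo investment already recovered by 1998 ($5.1B)
annual_return = 0.10  # approximate weighted shareholder/bondholder return
years = 2014 - 1998   # compounding period

# Excess return = compounded value minus the original principal.
excess = principal * ((1 + annual_return) ** years - 1)
print(f"compounded excess return by 2014 = ${excess/1e9:.1f}B")
```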

CCAs don’t undermine their mission by taking a share of Diablo Canyon

Northern California community choice aggregators (CCAs) are considering whether to accept an offer from PG&E to allocate a proportionate share of its “large carbon-free” generation as a credit against the power charge indifference adjustment (PCIA) exit fee. The allocation would include a share of Diablo Canyon power. The allocation would cover 2019 and 2020; an extension of this allocation is being discussed in the PCIA rulemaking.

The proposal faces opposition from anti-nuclear and local community activists who point to the policy adopted by many CCAs not to accept any nuclear power in their portfolios. However, this opposition is misguided for several reasons, some of which are discussed in this East Bay Community Energy staff report.

  • The CCAs already receive and pay for nuclear generation as part of the mix of “unspecified” power that the CCAs buy through the California Independent System Operator (CAISO). The entire cost of Diablo Canyon is included in the Total Portfolio Cost used to calculate the PCIA. The CCAs receive a “market value” credit against this generation, but the excess cost of recovering the investment in Diablo Canyon (for which PG&E is receiving double payment based on calculations I made in 1996) is recovered through the PCIA. The CCAs can either continue to pay for Diablo through the PCIA without receiving any direct benefits, or they can at least gain some benefits and potentially lower their overall costs. (CCAs need to be looking at their TOTAL generation costs, not just their individual portfolio, when resource planning.)
  • Diablo Canyon is already scheduled to close Unit 1 in 2024 and Unit 2 in 2025 after a contentious proceeding. This allocation is unlikely to change that decision, as PG&E has said that the relicensed plant would cost in excess of $100 per megawatt-hour, well above its going market value. I have written extensively here about how costly nuclear power has been, and the industry has yet to show that it can reduce those costs. Unless the situation changes significantly, Diablo Canyon will close then.
  • Given that Diablo is already scheduled for closure, the California Public Utilities Commission (CPUC) is unlikely to revisit this decision. But even so, a decision to either reopen A.16-08-006 or to open a new rulemaking or application would probably take close to a year, so the proceeding probably would not open until almost 2021. The actual proceeding would take up to a year, so now we are to 2022 before an actual decision. PG&E would have to take up to a year to plan the closure at that point, which then takes us to 2023. So at best the plant closes a year earlier than currently scheduled. In addition, PG&E still receives the full payments for its investments and there is likely no capital additions avoided by the early closure, so the cost savings would be minimal.

Nuclear vs. storage: which is in our future?

Two articles with contrasting views of the future showed up in Utility Dive this week. The first was an opinion piece by an MIT professor referencing a study he coauthored that compares the costs of an electricity network in which renewables supply more than 40% of generation against one using advanced nuclear power. However, the report’s analysis relied on two key assumptions:

  1. Current battery storage costs are about $300/kW-hr and will remain static into the future.
  2. Current nuclear technology costs about $76 per MWh and advanced nuclear technology can achieve costs of $50 per MWh.

The second article immediately refuted the first assumption in the MIT study. A report from BloombergNEF found that average battery storage prices fell to $156/kW-hr in 2019, and projected further decreases to $100/kW-hr by 2024.
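For context, the BloombergNEF figures imply a steady annual price decline, which can be computed directly from the two data points cited above:

```python
# Implied compound annual decline in battery storage prices from the
# BloombergNEF figures ($156/kWh in 2019, projected $100/kWh by 2024).
p_2019, p_2024 = 156.0, 100.0
years = 2024 - 2019

annual_decline = 1 - (p_2024 / p_2019) ** (1 / years)
print(f"implied price decline = {annual_decline:.1%} per year")
# By contrast, the MIT study holds prices static at roughly $300/kWh,
# a level the market had already fallen well below by 2019.
```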

The reason that this price drop is so important is that, as the MIT study pointed out, renewables will be producing excess power at certain times and underproducing during other peak periods. MIT assumes that system operators will have to curtail renewable generation during low load periods and run gas plants to fill in at the peaks. (MIT pointed to California curtailing about 190 GWh in April. However, that added only 0.1% to the CAISO’s total generation cost.) But if storage is so cheap, along with inexpensive solar and wind, additional renewable capacity can be built to store power for the early evening peaks. This could enable us to free ourselves from having to plan for system peak periods and focus largely on energy production.

MIT’s second assumption is not validated by recent experience. As I posted earlier, the about-to-be-completed Vogtle nuclear plant will cost ratepayers in Georgia and South Carolina about $100 per MWh–more than 30% more than the assumption used by MIT. PG&E withdrew its relicensing request for Diablo Canyon because the utility projected the cost to be $100 to $120 per MWh. Another recent study found nuclear costs worldwide exceeded $100/MWh and that it takes an average of a decade to finish a plant.

Another group at MIT issued an earlier report intended to revive interest in using nuclear power. I’m not sure why MIT is so focused on this issue and continues to rely on data and projections that are clearly outdated or wrong, but it does have one of the leading departments in nuclear science and engineering. It’s sad to see such a prestigious institution allowing its economic self-interest to cloud its vision of the future.

What do you see in the future of relying on renewables? Is it economically feasible to build excess renewable capacity that can supply enough storage to run the system the rest of the day? How would the costs of this system compare to nuclear power at actual current costs? Will advanced nuclear power drop costs by 50%? Let us know your thoughts and add any useful references.

Microgrids could cost 10% of undergrounding PG&E’s wires

One proposed solution to reducing wildfire risk is for PG&E to put its grid underground. There are a number of problems with undergrounding, including increased maintenance costs, seismic and flooding risks, and problems with excessive heat (including exploding underground vaults). But even ignoring those issues, the costs could be exorbitant: greater than anyone has really considered. An alternative is shifting rural service to microgrids. A high-level estimate shows that using microgrids instead could cost less than 10% of undergrounding the lines in regions at risk. The CPUC is considering a policy shift to promote this type of solution and has opened a new rulemaking on promoting microgrids.

We can put this in context by estimating costs from PG&E’s data provided in its 2020 General Rate Case, and comparing that to its total revenue requirements. That will give us an estimate of the rate increase needed to fund this effort.

PG&E has about 107,000 miles of distribution voltage wires and 18,500 miles of transmission lines. PG&E listed 25,000 miles of distribution lines as being in wildfire risk zones. If the risk is proportionate for transmission, that adds another 4,300 miles. PG&E has estimated that it would cost $3 million per mile to underground distribution (ignoring the higher maintenance and replacement costs). Undergrounding transmission can cost as much as $80 million per mile; using estimates provided to the CAISO and picking the midpoint cost adder of four to ten times for undergrounding, $25 million per mile for transmission is a reasonable estimate. Based on these estimates it would cost $75 billion to underground distribution and $108 billion for transmission, for a total cost of $183 billion. Using PG&E’s current cost of capital, that translates into an annual revenue requirement of $9.1 billion.

PG&E’s overall annual revenue requirement is currently about $14 billion, and PG&E has asked for increases that could add another $3 billion. Adding $9.1 billion would add two-thirds (~67%) to PG&E’s overall rates, which include both distribution and generation. It would double distribution rates.
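The estimate can be reproduced from the figures above. The 5% annual carrying charge is inferred from the post's own numbers ($9.1B on $183B), not stated explicitly:

```python
# Back-of-envelope undergrounding cost from the figures in the post.
dist_miles, dist_cost_per_mile = 25_000, 3e6     # distribution in fire zones
trans_miles, trans_cost_per_mile = 4_300, 25e6   # transmission (midpoint adder)

dist_total = dist_miles * dist_cost_per_mile      # $75B
trans_total = trans_miles * trans_cost_per_mile   # ~$108B
total = dist_total + trans_total                  # ~$183B

# $9.1B / $183B implies roughly a 5% annual carrying charge; the exact
# ratemaking factor behind the post's figure is an assumption here.
carrying_charge = 0.05
annual_rev_req = total * carrying_charge
print(f"total = ${total/1e9:.0f}B; annual revenue requirement = ${annual_rev_req/1e9:.1f}B")
```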

This raises two questions:

  1. Is this worth doing to protect properties in the affected urban-wildlands interface (UWI)?
  2. Is there a less expensive option that can achieve the same objective?

On the first question, if we look at the assessed property value in the 15 counties most likely to be at risk (which include substantial amounts of land outside the UWI), the total assessed value is $462 billion. In other words, we would be spending 16% of the value of the property being protected. The annual revenue required would raise the effective property tax rate from 0.77% to about 2.0%, roughly two and a half times its current level.

That brings us to the second question. If we assume that the load share is proportionate to the share of lines at risk, PG&E serves about 18,500 GWh in those areas. The equivalent cost per unit for undergrounding would be about $480 per MWh.

The average cost for a microgrid in California based on a 2018 CEC study is $3.5 million per megawatt. That translates to $60 per MWh for a typical load factor. In other words a microgrid could cost one-eighth of undergrounding. The total equivalent cost compared to the undergrounding scenario would be $13 billion. This translates to an 8% increase in PG&E rates.
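The per-MWh comparison can be reproduced as follows. The 50% load factor and 10% carrying charge are illustrative assumptions; the post's $60/MWh figure implies somewhat more favorable values:

```python
# Comparing undergrounding and microgrids on a $/MWh basis.
at_risk_load_gwh = 18_500            # annual GWh served in at-risk areas
underground_annual = 9.1e9           # $/yr from the undergrounding estimate
underground_per_mwh = underground_annual / (at_risk_load_gwh * 1_000)

microgrid_capex = 3.5e6              # $/MW (2018 CEC study average)
carrying_charge = 0.10               # assumed annual carrying charge
load_factor = 0.50                   # assumed typical load factor
mwh_per_mw_year = 8_760 * load_factor
microgrid_per_mwh = microgrid_capex * carrying_charge / mwh_per_mw_year

print(f"undergrounding = ${underground_per_mwh:.0f}/MWh; "
      f"microgrid = ${microgrid_per_mwh:.0f}/MWh")
```

Under these assumptions the microgrid comes in at well under a quarter of the undergrounding cost per MWh, the same order-of-magnitude conclusion the post draws.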

To what extent should we pursue undergrounding lines versus shifting to microgrid alternatives in the UWI areas? Should we encourage energy independence for these customers if they are on microgrids? How should we share these costs–should locals pay or should they be spread over the entire customer base? Who should own these microgrids: PG&E or CCAs or a local government?