Author Archives: Richard McCann

About Richard McCann

Partner in M.Cubed, an economics and policy consulting firm.

Why are we punishing customers for doing the right thing?

The saying goes “No good deed goes unpunished.” The California Public Utilities Commission seems to have taken that motto to heart recently, and by radically altering the rules for rooftop solar it stands ready to penalize yet another group of customers who answered the clarion call to help solve the state’s problems. Here are three case studies of recent CPUC actions that undermine incentives for customers to act in response to future state initiatives: (1) farmers who invested in response to price incentives, (2) communities that pursued renewables more assertively, and (3) customers who installed solar panels.

Agriculture: Farmers have responded to past time of use (TOU) rate incentives more consistently and enthusiastically than any other customer class. Instead of being rewarded for that consistency, growers saw their peak price periods shift from the afternoon to the early evening, a period in which they face much more difficulty avoiding pumping.

Since TOU rates were introduced to agricultural customers in the late 1970s, growers have made significant operational changes in response to TOU differentials between peak and off-peak energy prices to minimize their on-peak consumption. These include significant investments in irrigation equipment, storage and conveyance infrastructure, and rescheduled labor deployment. The results of these expenditures are illustrated in the figure below, which shows how agricultural loads compare with system-wide load on a peak summer weekday in 2015, contrasting hourly loads to the load at the coincident peak hour. Both the smaller and larger agricultural accounts perform better than a range of representative rate schedules. Most notably, agriculture’s aggregate load shape on a summer weekday is inverted relative to the system peak, i.e., the highest agricultural loads occur during the lowest system load periods, in contrast with other rate classes.

All other rate schedules shown in the graphic hit their annual peak on the same peak day within the then-applicable peak hours of noon to 6 p.m. In contrast, agriculture electricity demand is less than 80% of its annual peak during those high-load hours, with its daily peak falling outside the peak period. Agriculture’s avoidance of peak hours occurred during the summer agricultural growing season, which coincided with peak system demand—just as the Commission asked customers to do. The Commission could not ask for a better aggregate response to system needs; in contrast to the profiles for all of the other customer groups, agriculture has significantly contributed to shifting the peak to a lower cost evening period.

The significant changes in peak period timing and price differentials that the CPUC adopted increase uncertainty over whether large investments in high water-use efficiency microdrip systems, which typically cost $2,000 per acre, will be financially viable. Microdrip systems have been adopted widely by growers over the last several years; one recent study of tomato irrigation rates in Fresno County could not find any significant quantity of other types of irrigation systems. Such systems can be subject to blockages and leaks that are only detectable at start up in daylight. Under the legacy TOU periods, growers could start overnight irrigation at 6 p.m. and avoid peak energy use, and workers could end their day shortly after 6 p.m. and avoid nighttime accidents. Shifting that load out of the peak period will be much more difficult with the peak period now ending after sunset.

Contrary to strong Commission direction to incent customers to avoid peak power usage, the shift in TOU periods has served to penalize, and reverse, the great strides the agricultural class has made benefiting the utility system over the last four decades.

Community choice aggregators: CCAs were created, among other reasons, to develop more renewable or “green” power. The state achieved its 2020 target of 33% in large part because of the efforts of CCAs fostered through offerings of 50% and 100% green power to retail customers. CCAs also have offered a range of innovative programs that go beyond the offerings of PG&E, SCE and SDG&E.

Nevertheless, the current structure of the PCIA makes reaching clean energy goals more difficult. The PCIA varies inversely with market prices: as market prices rise, the PCIA charged to CCAs and direct access (DA) customers decreases. For these customers, this inverse relationship largely hedges their overall retail rate against variation and risk.

The portfolios of the incumbent utilities are dominated by long-term contracts with renewables and capital-intensive utility-owned generation. For example, PG&E is paying a risk premium of nearly 2 cents per kilowatt-hour for its investment in these resources. These portfolios are largely impervious to market price swings now, but at a significant cost. The PCIA passes this hedge along to CCAs and DA customers, which discourages those customers from making their own long-term investments. (I wrote earlier about how this mechanism discouraged investment in new capacity for reliability purposes to provide resource adequacy.)
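To make the hedge concrete, here is a minimal sketch of the inverse relationship, assuming a hypothetical $90 million legacy portfolio producing 1 million MWh; the numbers are illustrative, not actual PCIA inputs.

```python
# Illustrative only: hypothetical portfolio numbers, not the actual PCIA methodology.
def pcia_per_mwh(portfolio_cost, portfolio_mwh, market_price):
    """Above-market cost of the legacy portfolio, spread over its output ($/MWh)."""
    market_value = portfolio_mwh * market_price
    return max(portfolio_cost - market_value, 0) / portfolio_mwh

for market_price in (30, 50, 70):  # wholesale price scenarios, $/MWh
    pcia = pcia_per_mwh(portfolio_cost=90e6, portfolio_mwh=1e6, market_price=market_price)
    # A CCA buying energy at market and paying the PCIA sees a roughly constant total.
    print(f"market ${market_price}/MWh -> PCIA ${pcia:.0f}/MWh -> CCA all-in ${market_price + pcia:.0f}/MWh")
```

The point of the sketch is simply that the two components move in opposite directions, so the CCA customer’s all-in cost barely moves with the market.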

The legacy utilities are not in a position to acquire new renewables: they are forecasting falling loads and shrinking customer counts as CCAs grow. So the state cannot look to those utilities to meet California’s ambitious goals; it must rely on CCAs for that task. The CCAs are already game, with many of them offering much more aggressive “green power” options to their customers than PG&E, SCE or SDG&E.

But CCAs place themselves at greater financial risk under the current rules if they sign more long-term contracts. If market prices fall, they must bear the risk of overpaying for both the legacy utility’s portfolio and their own.

Solar net energy metered customers: Distributed solar generation installed under California’s net energy metering (NEM/NEMA) programs has mitigated and even eliminated load and demand growth in areas with established customers. This benefit supports protecting the investments that existing NEM/NEMA customers have made. Similarly, NEM/NEMA customers can displace investment in distribution assets; that distribution planners are not considering this impact appropriately is not an excuse for failing to value it. For example, PG&E’s sales fell by 5% from 2010 to 2018, and other utilities had similar declines. Peak loads in the CAISO balancing authority reached their highest point in 2006, and the peak in August 2020 was 6% below that level.

Much of that decrease appears to have been driven by the installation of rooftop solar. The figure above illustrates the trends in CAISO peak loads in the set of top lines and the relationship to added NEM/NEMA installations in the lower corner. It also shows the CEC’s forecast from its 2005 Integrated Energy Policy Report as the top line. Prior to 2006, the CAISO peak was growing at an annual rate of 0.97%; after 2006, peak loads declined along a 0.28% annual trend. Over the same period, solar NEM capacity grew by over 9,200 megawatts. The correlation factor or “R-squared” between the decline in peak load after 2006 and the incremental NEM additions is 0.93, with 1.0 being perfect correlation. Based on these calculations, NEM capacity has deferred 6,500 megawatts of capacity additions over this period. Comparing the “extreme” 2020 peak to the average-conditions load forecast from 2005, the load reduction is over 11,500 megawatts. The obvious conclusion is that these investments by NEM customers have saved all ratepayers both reliability and energy costs while delivering zero-carbon energy.
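The trend-and-correlation exercise described above can be replicated with an ordinary least-squares fit; the sketch below uses placeholder arrays in place of the actual CAISO peak-load and cumulative NEM capacity series.

```python
# Illustrative only: swap the placeholder arrays for actual CAISO peak loads (MW)
# and cumulative NEM capacity (MW) by year to reproduce the reported statistics.
import numpy as np

years = np.arange(2006, 2021)
nem_mw = np.linspace(0, 9200, len(years))              # cumulative rooftop capacity (placeholder)
rng = np.random.default_rng(0)
peak_mw = 50000 - 0.7 * nem_mw + rng.normal(0, 500, len(years))  # placeholder peak loads

slope, intercept = np.polyfit(nem_mw, peak_mw, 1)      # MW of peak change per MW of NEM
r_squared = np.corrcoef(nem_mw, peak_mw)[0, 1] ** 2
print(f"slope = {slope:.2f} MW peak per MW NEM, R^2 = {r_squared:.2f}")
```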

The CPUC now has before it a rulemaking in which the utilities and some ratepayer advocates are proposing to not only radically reduce the compensation to new NEM/NEMA customers but also to change the terms of the agreements for existing ones.

One key principle of providing financial stability is to set prices and rates for long-lived assets, such as solar panels and generation plants, at the economic value prevailing when the investment decision was made, reflecting the full value of the assets that would otherwise have been acquired. If that new resource had not been built, either the utility would have constructed a ratebased generation asset whose cost would have been recovered over a standard 30-year period or, more likely, additional PPAs would have been signed. Additionally, the utilities’ investments and procurement costs are protected from retroactive ratemaking by the rule prohibiting it and by Public Utilities Code Section 728, shielding shareholders from any risk of future changes in state or Commission policies.

Utility customers who similarly invest in generation should be afforded at least the same assurances as the utilities with respect to protection from future Commission decisions that may diminish the value of those investments. Moreover, customers do not have the additional assurances of achieving a certain net income so they already face higher risks than utility shareholders for their investments.

Generators are almost universally afforded the ability to recover capital investments based on prices set for multiple years, and often for the economic life of their assets. Utilities are able to put investments in ratebase to be recovered at a fixed rate of return plus depreciation over several decades. Third-party generators are able to sign fixed-price contracts for 10, 20, and even 40 years. Some merchant generators may choose to sell only into the short-term “hourly” market, but those plants are not committed to selling whenever the CAISO so demands; generators take on that obligation only when they sign a PPA with an assured payment toward investment recovery.

Ratepayers who make investments that benefit all ratepayers over the long term should be offered tariffs that provide a reasonable assurance of recovery of those investments, similar to the PPAs offered to generators. Ratepayers should be able to gain the same assurances as generators who sign long-term PPAs, or even utilities that ratebase their generation assets, that they will not be forced to bear all of the risk of investing in clean self-generation. These ratepayers should have some assurance over the 20-plus year expected life of their generation investment.

What to do about Diablo Canyon?

The debate over whether to close Diablo Canyon has resurfaced. The California Public Utilities Commission, with support from the Legislature, decided in 2018 to close Diablo by 2025 rather than proceed to relicensing. PG&E had applied in 2016 to retire the plant rather than relicense it because the high costs going forward would make the energy uneconomic. (I advised the Joint CCAs in this proceeding.)

Now a new study from MIT and Stanford finds potential savings and emission reductions from continuing operation. (MIT in particular has been an advocate for greater use of nuclear power.) Others have written opinion articles on either side of the issue. I wrote the article below in the Davis Enterprise addressing this issue. (It was limited to 900 words so I couldn’t cover everything.)

IT’S OK TO CLOSE DIABLO CANYON NUCLEAR PLANT
A previous column (by John Mott-Smith) asked whether shutting down the Diablo Canyon nuclear plant is risky business if we don’t know what will replace the electricity it produces. John’s friend Richard McCann offered to answer his question. This is a guest column, written by Richard, a universally respected expert on energy, water and environmental economics.

John Mott-Smith asked several questions about the future of nuclear power and the upcoming closure of PG&E’s Diablo Canyon Power Plant in 2025. His main question is how we are going to produce enough reliable power for our economy’s shift to electricity for cars and heating. The answers are apparent, but they have been hidden for a variety of reasons.
I’ve worked on electricity and transportation issues for more than three decades. I began my career evaluating whether to close Sacramento Municipal Utility District’s Rancho Seco Nuclear Generating Station and recently assessed the cost to relicense and continue operations of Diablo after 2025.
Looking first at Diablo Canyon, the question turns almost entirely on economics and cost. When the San Onofre Nuclear Generating Station closed suddenly in 2012, greenhouse gas emissions rose statewide the next year, but then continued a steady downward trend. We will again have time to replace Diablo with renewables.
Some groups focus on the risk of radiation contamination, but that was not a consideration for Diablo’s closure. Instead, it was the cost of compliance with water quality regulations. The power plant currently uses ocean water for cooling. State regulations required changing to a less impactful method that would have cost several billion dollars to install and would have increased operating costs. PG&E’s application to retire the plant showed the costs going forward would be at least 10 to 12 cents per kilowatt-hour.
In contrast, solar and wind power can be purchased for 2 to 10 cents per kilowatt-hour depending on configuration and power transmission. Even if new power transmission costs 4 cents per kilowatt-hour and energy storage adds another 3 cents on top of roughly 3 cents for the solar and wind units themselves, the total of about 10 cents sits at the low end of the cost range for Diablo Canyon.
What’s even more exciting is the potential for “distributed” energy resources, where generation and power management occurs locally, even right on the customers’ premises rather than centrally at a power plant. Rooftop solar panels are just one example—we may be able to store renewable power practically for free in our cars and trucks.
Automobiles are parked 95% of the time, which means that an electric vehicle (EV) could store solar power at home or work during the day for use at night. When we get to a vehicle fleet that is 100% EVs, we will have more than 30 times the power capacity that we need today. This means that any individual car likely will only have to use 10% of its battery capacity to power a house, leaving plenty for driving the next day.
With these opportunities, rooftop and community power projects cost 6 to 10 cents per kilowatt-hour compared with Diablo’s future costs of 10 to 12 cents.
Distributed resources add an important local protection as well. These resources can improve reliability and resilience in the face of increasing hazards created by climate change. Disruptions in the distribution wires are the cause of more than 95% of customer outages. With local generation, storage, and demand management, many of those outages can be avoided, and electricity generated in our own neighborhoods can power our houses during extreme events. The ad that ran during the Olympics for Ford’s F-150 Lightning pick-up illustrates this potential.
Opposition to this new paradigm comes mainly from those with strong economic interests in maintaining the status quo reliance on large centrally located generation. Those interests are the existing utilities, owners, and builders of those large plants plus the utility labor unions. Unfortunately, their policy choices to-date have led to extremely high rates and necessitate even higher rates in the future. PG&E is proposing to increase its rates by another third by 2024 and plans more down the line. PG&E’s past mistakes, including Diablo Canyon, are shown in the “PCIA” exit fee that [CCA] customers pay—it is currently 20% of the rate. Yolo County created VCEA to think and manage differently than PG&E.
There may be room for nuclear generation in the future, but the industry has a poor record. While the cost per kilowatt-hour has gone down for almost all technologies, even fossil-fueled combustion turbines, that is not true for nuclear energy. Several large engineering firms have gone bankrupt due to cost overruns. The global average cost has risen to over 10 cents per kilowatt-hour. Small modular reactors (SMR) may solve this problem, but we have been promised these are just around the corner for two decades now. No SMR is in operation yet.
Another problem is management of radioactive waste disposal and storage over the course of decades, or even millennia. Further, reactors fail on a periodic basis and the cleanup costs are enormous. The Fukushima accident cost Japan $300 to $750 billion. No other energy technology presents such a risk of catastrophic failure. This liability needs to be addressed head on and not ignored or dismissed if the technology is to be pursued.

Considerations for designing groundwater markets

The California Water Commission staff asked a group of informed stakeholders and experts about “how to shape well-managed groundwater trading programs with appropriate safeguards for communities, ecosystems, and farms.” I submitted the following essay in response to a set of questions.

In general, setting up functioning and fair markets is a more complex process than many proponents envision. Due to the special characteristics of water that make location particularly important, water markets are likely to be even more complex, and this will require more thinking to address in a way that doesn’t stifle the power of markets.

Anticipation of Performance

  1. Market power is a concern in many markets. What opportunities or problems could market power create for overall market performance or for safeguarding? How is it likely to manifest in groundwater trading programs in California?

I was an expert witness on behalf of the California Parties in the FERC Energy Crisis proceeding in 2003 after the collapse of California’s electricity market in 2000-2001. That initial market arrangement failed for several reasons, including both exploitation of the market’s internal design features and limitations on outside transactions that enhanced market power. One important requirement that can mitigate market power is the ability to sign long-term agreements, which reduces the amount of resources open to market manipulation. Clear definitions of the resource accounting used in transactions are a second important element, and lowering transaction costs and increasing liquidity is a third. Note that confidentiality has not prevented market gaming in electricity markets.

Groundwater provides a fairly frequent opportunity for exploitation of market power with the recurrence of dry and drought conditions. The electricity analogy is peak load conditions: prices in the Texas ERCOT market rose 30,000-fold last February during such a shortage, and droughts in California happen more frequently than freezes in Texas.

The other dimension is that a GSA often has a small, concentrated set of property owners. That concentration eases the ability to manipulate prices even if buyers and sellers are anonymous; this situation is what led to the crisis in the CAISO market. (I was able beforehand to calculate the minimum generation capacity ownership required to profitably manipulate prices, and it was an amount held by many of the merchant generators in the market.) Those larger owners are also the ones most likely to have the resources to participate in certain types of market designs whose higher transaction costs act as barriers to others.

2. Given a configuration of market rules, how well can impacts to communities, the environment, and small farmers be predicted?

The impacts can be fairly well assessed with sufficient modeling that includes three important pieces of information. The first is a completely structured market design that can be tested and modeled. The second is a relatively accurate assessment of the costs for individual entities to participate in such a market. And the third is modeling the variation in groundwater depth to assess the likelihood of those swings exceeding the current well depths for these groups.
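As a hedged illustration of the third piece, a simple Monte Carlo comparison of simulated groundwater depths against a distribution of existing well depths can estimate how often vulnerable users would be left dry; every number below is hypothetical rather than drawn from any actual basin.

```python
# Illustrative only: hypothetical depths in feet, not data from any basin or GSA.
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000
# Simulated depth to groundwater under a given market/allocation design.
water_table_depth = rng.normal(loc=180, scale=40, size=n_draws)
# Depths of existing domestic and small-farm wells in the area.
well_depth = rng.normal(loc=250, scale=30, size=n_draws)

# Probability that the water table drops below the bottom of a well (a dry well).
p_dry = np.mean(water_table_depth > well_depth)
print(f"Estimated probability of a dry well in a given year: {p_dry:.1%}")
```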

Safeguards

3. What rules are needed to safeguard these water users? If not through market mechanisms directly, how could or should these users be protected?

These groups should not participate in shorter-term groundwater trading markets, such as those for annual allocations, unless they proactively elect to do so. They are unlikely to have the resources to participate in a usefully informed way. Instead, the GSAs should carve allocations out of the sustainable yields that are then distributed through any number of methods, including bidding for long-run allocations as well as direct allowances.

For tenant farmers, restrictions on landlords’ participation in short-term markets should be implemented. These can be specified through quantity limits, long-term contracting requirements, or time windows for guaranteed supplies to tenants that match lease terms.

4. What other kinds of oversight, monitoring, and evaluation of markets are needed to safeguard? Who should perform these functions?

These markets will likely require oversight to prevent market manipulation. Instituting market monitors akin to those who now oversee the CAISO electricity market and the CARB GHG allowance auctions is one potential approach, and the state would most likely be the appropriate institution to provide this service. The functions for those monitors are well delineated by those other agencies. The single most important requirement for this function is clear authority and a willingness to enforce meaningful consequences for violations.

5. Groundwater trading programs could impact markets for agricultural commodities, land, labor, or more. To what degree could the safeguards offered by groundwater trading programs be undermined through the programs’ interactions with other markets? How should other markets be considered?

These interactions among different markets are called pecuniary externalities, and economists consider them intended consequences of using market mechanisms to change behavior and investments across markets. For example, establishing prices for groundwater most likely will change both cropping decisions and irrigation practices, which in turn will affect equipment and service dealers and labor. Safeguards must be established in ways that do not directly block these impacts; to do otherwise defeats the very purpose of setting up markets in the first place. People will be required to change from their current practices and choices as a result of instituting these markets.

Mitigation of adverse consequences should account for catastrophic social outcomes to individuals and businesses that are truly outside of their control. SGMA, and associated groundwater markets, are intended to create economic benefits for the larger community. A piece often missing from the social benefit-cost assessment that leads to the adoption of these programs is compensation to those who lose economically from the change. For example, conversion from a labor intensive crop to a less water intensive one could reduce farm labor demand. Those workers should be paid compensation from a public pool of beneficiaries.

6. Should safeguarding take common forms across all of the groundwater trading programs that may form in California? To the degree you think it would help, what level of detail should a common framework specify?

Localities generally do not have the resources, expertise, or sufficient incentives to manage these types of safeguards. Further, the safeguards should be relatively uniform across the region to avoid inadvertently creating market manipulation opportunities among different groundwater markets. (That was one of the means of exploiting the CAISO market in 2000-01.) The level of detail will depend on other factors that can be identified after potential market structures are developed and a deeper understanding is built.

7. Could transactions occurring outside of a basin or sub-basin’s groundwater trading program make it harder to safeguard? If so, what should be done to address this?

The most important consideration is the interconnection with surface water supplies and markets. Varying access to surface water will affect the relative ability to manipulate market supplies and prices. The emergence of the NASDAQ Veles water futures market presents another opportunity to game these markets.

Among the most notorious market manipulation techniques used by Enron during the Energy Crisis was one called “Ricochet,” which involved sending a trade out of state and then returning it down a different transmission line to create increased “congestion.” Natural gas market prices were also manipulated to impact electricity prices during the period. (Even the SCAQMD RECLAIM market may have been manipulated.) It is possible to imagine a similar series of trades among groundwater and surface water markets. It is not always possible to identify these opportunities and prepare mitigation until a full market design is in hand; they are particular to situations, and general rules are not easily specified.

Performance Indicators and Adaptive Management

8. Some argue that market rules can be adjusted in response to evidence a market design did not safeguard. What should the rules for changing the rules be?

In general, changing the rules for short-term markets, e.g., trading annual allocations, should be relatively easy. Investors should not be allowed to profit from market design flaws no matter how much they have spent. Changes must be carefully considered, but they also should not be easily impeded by those who are exploiting those flaws, as was the case in the fall of 2000 for California’s electricity market.

Comparing cost-effectiveness of undergrounding vs. microgrids to mitigate wildfire risk

Pacific Gas & Electric has proposed to underground 10,000 miles of distribution lines to reduce wildfire risk, at an estimated cost of $1.5 to $2 million per mile. Meanwhile, PG&E has installed fast-trip circuit breakers in certain regions to mitigate fire risks from line shorts and breaks, but these have caused a vast increase in customer outages. CPUC President Batjer wrote in an October 25 letter to PG&E, “[s]ince PG&E initiated the Fast Trip setting practice on 11,500 miles of lines in High Fire Threat Districts in late July, it has caused over 500 unplanned power outages impacting over 560,000 customers.” She then ordered a series of compliance reports and steps. The question is whether undergrounding is the most cost-effective solution that can be implemented in a timely manner.

A viable alternative is microgrids, installed at either individual customers or community scale. The microgrids could be operated to island customers or communities during high risk periods or to provide backup when circuit breakers cut power. Customers could continue to be served outside of either those periods of risk or weather-caused outages.

Because microgrids would be installed solely for the purpose of displacing undergrounding, the relative costs should be compared without considering any other services such as energy delivered outside of periods of fire risk or outages or increased green power.

I previously analyzed this question, but this updated assessment uses new data and presents a threshold at which either undergrounding or microgrids is preferred depending on the range of relative costs.

We start with estimates of undergrounding costs. Along with PG&E’s stated estimate, PG&E’s 2020 General Rate Case includes a settlement agreement with a cost of $4.8 million per mile. That leads to an estimate of $15 to $48 billion for the 10,000 miles. Adding in maintenance costs of about $400 million annually, this revenue requirement translates to a rate increase of 3.2 to 9.3 cents per kilowatt-hour.
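A back-of-the-envelope version of that calculation is sketched below; the fixed charge rate and annual retail sales are round-number assumptions of mine, not figures from PG&E’s filings, so the resulting cents per kilowatt-hour differ somewhat from the range cited above.

```python
# Illustrative only: the fixed charge rate and annual sales are assumptions,
# not values from PG&E's rate case filings.
miles = 10_000
maintenance = 400e6            # $/year, from the text
fixed_charge_rate = 0.12       # assumed annual carrying cost of capital
annual_sales_kwh = 80e9        # assumed PG&E retail sales

for cost_per_mile in (1.5e6, 4.8e6):   # $/mile estimates cited above
    capital = miles * cost_per_mile
    revenue_requirement = capital * fixed_charge_rate + maintenance
    rate_impact = revenue_requirement / annual_sales_kwh * 100   # cents/kWh
    print(f"${capital / 1e9:.0f}B of capital -> about {rate_impact:.1f} cents/kWh")
```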

For microgrid costs, the National Renewable Energy Laboratory published estimated ranges for both (1) commercial or community scale projects of 1 megawatt with 2.4 megawatt-hours of storage and (2) residential scale of 7 kilowatts with 20 kilowatt-hours of storage. For larger projects, NREL shows a range of $2.07 to $2.13 million; we include an upper-end estimate of double NREL’s top figure. For residential, the range is $36,000 to $38,000.

Using this information, we can make comparisons based on the density of customers or energy use per mile of targeted distribution lines. In other words, we can determine whether it is more cost-effective to underground distribution lines or install microgrids based on how many customers or how much load is being served on a line.

As a benchmark, PG&E’s average system density per mile of distribution line is 50.6 customers and 166 kW (or 0.166 MW).

The table below shows the relative cost-effectiveness of undergrounding compared to community/commercial microgrids. If the load density falls below the value shown, microgrids are more cost-effective. Note that the average density across the PG&E service area is 0.166 MW per line-mile, which is below any of the thresholds. That indicates that such microgrids should be cost-effective in most rural areas.

The next table shows the relative cost-effectiveness of individual residential microgrids; again, if the customer density falls below the threshold shown, microgrids save more costs. The average density for the service area is 51 customers per line-mile, which reflects the concentration of population in the Bay Area. At the highest undergrounding costs, microgrids are almost universally favored. In rural areas where density falls below 30 customers per line-mile, microgrids are less costly even at the lower undergrounding costs.
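The thresholds in the two tables can be approximated by equating the cost of undergrounding a line-mile with the cost of serving that mile’s load or customers with microgrids; the sketch below uses the NREL cost figures cited above and is a rough break-even calculation, not necessarily the tables’ exact method.

```python
# Illustrative only: a rough break-even density calculation.
underground_costs_per_mile = (1.5e6, 4.8e6)   # $/mile, PG&E estimates cited above
community_microgrid_per_mw = 2.1e6            # $/MW for a 1 MW / 2.4 MWh project (NREL)
residential_microgrid = 37_000                # $ per home for 7 kW / 20 kWh (NREL)

for ug_cost in underground_costs_per_mile:
    mw_threshold = ug_cost / community_microgrid_per_mw     # MW of load per line-mile
    customer_threshold = ug_cost / residential_microgrid    # customers per line-mile
    print(f"undergrounding at ${ug_cost / 1e6:.1f}M/mile: microgrids win below "
          f"{mw_threshold:.2f} MW/mile or about {customer_threshold:.0f} customers/mile")
```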

PG&E has installed two community-scale microgrids in remote locations so far and is reportedly considering 20 such projects. However, PG&E fell behind on those projects, prompting the CPUC to reopen its procurement process in its Emergency Reliability rulemaking. In addition, PG&E has relied heavily on natural gas generation for these.

PG&E simply may not have the capacity to construct microgrids or install undergrounded lines in a timely manner solely through its own organization. PG&E is already struggling to meet its targets for converting privately-owned mobilehome park utility systems to utility ownership. A likely better choice is to rely on local governments, working in partnership with PG&E to identify the most vulnerable lines, to construct and manage these microgrids. The residential microgrids would be operated remotely, and the community microgrids could be run under several different models, including either PG&E or municipal ownership.

Why Californians aren’t meeting the state’s call for more water conservation

Governor Gavin Newsom called for a voluntary reduction in water use of 15% in July in response to the second year of a severe drought. The latest data from the State Water Resources Control Board showed little response on the part of the citizenry and the media lamented the lack of effort. However, those reports overlooked a major reason for a lack of further conservation.

The SWRCB conservation report data show that urban Californians are still saving 15% below the 2013 benchmark used in the last drought, so a call for another 15% on top of that translates to a 27% reduction from the same 2013 baseline. Californians have not heard that this drought is worse than 2015, yet the state is calling for a more drastic overall reduction. Of course we aren’t seeing an even further reduction without a much stronger message.

In 2015, to get to a 25% reduction, the SWRCB adopted a set of regulations with concomitant penalties that largely achieved the intended target. But that effort required a combination of higher rates and increased expenditures by water agencies. It will take a similar effort to move the needle again.

The scale economy myth of electric utilities

Vibrant Clean Energy released a study showing that including large amounts of distributed energy resources (DERs) can lower the costs of achieving 100% renewable energy. Commenters here have criticized the study for several reasons, some with reference to the supposed economies of scale of the grid.

While economies of scale might hold for individual customers in the short run, the data I’ve been evaluating for the PG&E and SCE general rate cases aren’t necessarily consistent with that notion. I’ve already discussed here the analysis I conducted in both the CAISO and PJM systems showing marginal transmission costs that are twice the current transmission rates. The rapid rise in those rates over the last decade is consistent with this finding. If economies of scale did hold for the transmission network, those rates should be stable or falling.

On the distribution side, the added investment reported in those two utilities’ FERC Form 1 filings is not consistent with the marginal costs used in the GRC filings. For example, the added investment reported in Form 1 for final service lines (transformers, services, and meters, or TSM) appears to be almost 10 times larger than what is implied by the marginal costs and new customer counts in the GRC filings. And again, the average cost of distribution is rising while energy and peak loads have been flat across the CAISO area since 2006. The utilities have repeatedly asked for $2 billion each GRC for “growth” in distribution, but given that load has been flat (and even declined in 2019 and 2020), there is likely a significant amount of stranded distribution infrastructure. If that incremental investment is instead for replacement (which is not consistent with either their depreciation schedules or their assertions about the true life of their facilities and the replacement costs within their marginal cost estimates), then they are grossly underestimating the future replacement cost of facilities, which means they are underestimating the true marginal costs.

I can see a future replacement liability right outside my window. The electric poles were installed by PG&E 60-plus years ago and are likely reaching the end of their lives. I can see the next step being undergrounding the lines at a cost of $15,000 to $25,000 per house, based on the ongoing mobilehome conversion program and the typical Rule 20 undergrounding project. Deferring that cost is a valid DER value. We will have to replace many services over the next several decades, and that doesn’t address the higher voltage parts of the system.

We have a counterexample of a supposed monopoly in the cable/internet system. I have at least two competing options where I live. The cell phone network also turned out not to be a natural monopoly. In an area where the PG&E and Merced ID service territories overlap, there are parallel distribution systems. The claim of a “natural monopoly” more likely is a legal fiction that protects the incumbent utility and is simpler for local officials to manage when awarding franchises.

If the claim of natural monopolies in electricity were true, then the distribution rate components for SCE and PG&E should be much lower than for smaller munis such as Palo Alto or Alameda. But that’s not the case. The cost advantages for SMUD and Roseville are larger than can be explained simply by differences in the cost of capital. The Division/Office of Ratepayer Advocates commissioned a study by Christensen Associates for PG&E’s 1999 GRC that showed the optimal utility size was about 500,000 customers. (PG&E’s witness, a professor at UC Berkeley, inadvertently confirmed the results, and Commissioner Richard Bilas, a Ph.D. economist, noted this in his proposed decision, which was never adopted because it was short-circuited by restructuring.) That finding means that the true marginal cost of a customer and associated infrastructure is higher than the average cost. The likely counterbalancing cause is an organizational diseconomy of scale that overwhelms the technological benefits of size.

Finally, generation no longer shows the economies of scale that once dominated the industry. The modularity of combined cycle plants and the efficiency improvement of CTs started the industry down the road toward the efficiency of “smallness.” Solar plants are similarly modular. The reason additional solar generation appears so low cost is that much of it comes from adding another set of panels to an existing plant while avoiding additional transmission interconnection costs (which are the lion’s share of the costs that create what economies of scale do exist).

The VCE analysis takes a holistic, long-term view. It relies on long-run marginal costs, not the short-run MCs that will never converge on the LRMC given the attributes of the electricity system as it is regulated. The study should be evaluated in that context.

Electric vehicles as the next smartphone

In 2006 a cell phone was a portable phone that could send text messages. It was convenient but not transformative. No one seriously thought about dropping their landlines.

And then the iPhone arrived. Almost overnight consumers began to use it like their computer. They emailed, took pictures and sent them to their friends, then searched the web, then played complex games and watched videos. Social media exploded and multiple means of communicating and sharing proliferated. Landlines (and cable) started to disappear, and personal computer sales slowed. (And as a funny side effect, the younger generation seemed to quit talking on the phone.) The cell phone went from a means of one-on-one communication to a multi-faceted electronic tool that has become our pocket computer.

The share of the U.S. population owning a smartphone has gone from 35% to 85% in the last decade. We could achieve similar penetration rates for electric vehicles (EVs) if we rethink and repackage how we market EVs so that they become our indispensable “energy management tool.” EVs can offer much more than conventional cars, and we need to facilitate and market these advantages to sell them much faster.

EV pickups with spectacular features are about to be offered. These EVs may be a game changer for a different reason than those focused on transportation policy think of: they offer households the opportunity for near complete energy independence. These pickups have enough storage capacity to power a house for several days and are designed to supply power to many other uses, not just driving. Combined with solar panels installed both at home and in business lots, the trucks can carry energy back and forth between locations. This has the added benefit of increasing reliability (local distribution outages are 15 times more likely than system-level ones) and resilience in the face of increasingly extreme events.

This all can happen because cars are parked 90-95% of the time. That offers power source reliability in the same range as conventional generation, and the dispersion created by a portfolio of smaller sources further enhances that availability. Another important fact is that the total power capacity of the autos on California’s roads is over 2,000 gigawatts. Compared to California’s peak load of about 63 gigawatts, this is more than 30 times the capacity we need. If we simply get to 20% penetration of EVs, of which half have interconnective control abilities, we’ll have three times more capacity than we would need to meet our highest demands. There are other energy management issues, but solving them is feasible once we realize there will not be a real physical constraint.
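A quick arithmetic check of those capacity figures, using round-number assumptions for fleet size and per-vehicle power rating (neither taken from a specific source):

```python
# Illustrative only: fleet size and per-vehicle power are round-number assumptions.
vehicles = 30e6           # approximate light-duty vehicles on California roads (assumed)
kw_per_vehicle = 75       # assumed average EV power capability, kW
peak_load_gw = 63         # approximate CAISO peak load cited above

fleet_gw = vehicles * kw_per_vehicle / 1e6
available_gw = fleet_gw * 0.20 * 0.50   # 20% EV penetration, half with interconnective control
print(f"full fleet: {fleet_gw:,.0f} GW ({fleet_gw / peak_load_gw:.0f}x peak); "
      f"20% EVs, half controllable: {available_gw:.0f} GW ({available_gw / peak_load_gw:.1f}x peak)")
```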

Further, used EV batteries can serve as stationary storage, either in homes or at renewable generation sites, to mitigate transmission investments. EVs can also transport energy from solar panels between work and home.

The difference between these EVs and the current models is akin to the difference between flip phones and smartphones. One is a single-function device; we use the latter to manage our lives. The marketing of EVs should shift course to emphasize these added benefits that are not possible with a conventional vehicle. The barriers are not technological, but regulatory (battery warranties and utility interconnection rules).

As part of this EV marketing focus, automakers should follow two strategies, both drawn from smartphones. The first is that EV pickups should be leased as a means of keeping model features current. Leasing facilitates rolling out industry standards quickly (like installing the latest Android update) and adding other yet-more-attractive features. It also allows for more environmentally friendly disposal of obsolete EVs: materials can be more easily recycled, and batteries no longer usable for driving (generally below 70% capacity) can be repurposed for stand-alone storage.

The second is to offer add-on services. Smartphone companies offer media streaming, data management and all sorts of other features beyond simple communication. Automakers can offer demand management to lower, or even eliminate, utility bills, along with onboard appliance and space-conditioning management so a homeowner need not install a separate system that is not easily updated.

Part 2: A response to “Is Rooftop Solar Just Like Energy Efficiency?”

Severin Borenstein at the Energy Institute at Haas has written another blog post asserting that solar rooftop rates are inefficient and must be changed radically. (I previously responded to an earlier post.) When looking at the efficiency of NEM rates, we need to look carefully at several elements of the electricity market and the overall efficiency of utility ratemaking. Doing so, we can come to a very different conclusion.

I filed testimony in the NEM 3.0 rulemaking last month where I calculated the incremental cost of transmission investment for new generation and the reduction in the CAISO peak load that looks to be attributable to solar rooftop.

  • Using FERC Form 1 and CEC powerplant data, I calculated that the incremental cost of transmission is $37/MWH. (And this is conservative due to a couple of assumptions I made.) Interestingly, I had done a similar calculation for AEP in the PJM interconnect and also came up with $37/MWH. This seems to be a robust value in the right neighborhood.
  • Load growth in California took a distinct change in trend in 2006, just as solar rooftop installations gained momentum. I found a 0.93 correlation between this change in trend and the amount of rooftop capacity installed. Using a simple trend, I calculated that the CAISO load decreased 6,000 MW with the installation of 9,000 MW of rooftop solar. Looking at the 2005 CEC IEPR forecast, the peak reduction could be as large as 11,000 MW. CAISO also estimated in 2018 that rooftop solar had displaced $2.6 billion in transmission investment.

When we look at the utilities’ cost to acquire renewables and add in the cost of transmission, we see that the claim that grid-scale solar is so much cheaper than residential rooftop isn’t valid. The “green” market price benchmark used to set the PCIA shows that the average new RPS contract price was still $92/MWH in 2016 and $74/MWH in 2017. These prices generally were for 30-year contracts, so the appropriate metric for a NEM investment is a comparison against the vintage of RPS contracts signed in the year the rooftop project was installed. For 2016, adding in the transmission cost of $37/MWH, the comparable value is $129/MWH, and for 2017, $111/MWH. In 2016, the average retail rates were $149/MWH for SCE, $183/MWH for PG&E and $205/MWH for SDG&E. (Note that PG&E’s rate had jumped $20/MWH in 2 years, while SCE’s had fallen $20/MWH.) In a “rough justice” way, the value of the displaced energy via rooftop solar was comparable to the retail rates, which reflect the value of power to a customer, at least for NEM 1.0 and 2.0 customers. Rooftop solar was not “multiples” of grid-scale solar.

These customers also took on investment risk. I calculated the payback period for a couple of customers around 2016 and found that a positive payback depended on utility rates rising at least 3% a year. This was not a foregone conclusion at the time because retail rates had actually been falling up to 2013 and new RPS contract prices were falling as well. No one was proposing to guarantee that these customers would recover their investments if they made a mistake. To claim now that they are unfairly benefiting is unwarranted hubris that ignores the flip side of investment risk: investors who make a good, efficient decision should reap the benefits. (We can discuss whether the magnitude of those benefits is fully warranted, but that is a different question about the distribution of income and wealth, not efficiency.)
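A minimal payback sketch along those lines, with assumed system cost, output, and starting retail rate rather than the actual 2016 customer data, shows how the conclusion hinges on the assumed rate escalation:

```python
# Illustrative only: assumed system cost, annual output, and starting retail rate.
def payback_years(system_cost=25_000, annual_kwh=7_000, start_rate=0.15,
                  escalation=0.03, horizon=20):
    """Years until cumulative bill savings recover the up-front cost, or None."""
    cumulative, rate = 0.0, start_rate
    for year in range(1, horizon + 1):
        cumulative += annual_kwh * rate
        if cumulative >= system_cost:
            return year
        rate *= 1 + escalation
    return None  # not recovered within the assumed system life

for esc in (0.0, 0.03, 0.05):
    print(f"{esc:.0%}/yr rate escalation -> payback in {payback_years(escalation=esc)} years")
```

With these assumed numbers the investment does not pay back within the horizon at flat rates but does with 3% or 5% annual escalation, which is the kind of sensitivity the 2016 calculations showed.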

Claiming that grid costs are a fixed, immutable amount simply isn’t valid. SCE has been trying unsuccessfully to enact a “grid charge” with this claim since at least 2006. The intervening parties have successfully shown that grid costs in fact are responsive to reductions in demand. In addition, moving to a grid charge creates a “ratchet effect” in revenue requirements: once a utility puts infrastructure in place, it faces no risk for poor investment decisions. The utility can place its costs into ratebase and raise rates, which then raises the ratchet level on the fixed charge. One of the most important elements of a market economy that leads to efficient investment is that investors face the risk of not earning a return on an investment; that risk forces them to make prudent decisions. A “ratcheted” grid charge removes even more of this risk for utilities. If we’re claiming that we are creating an “efficient” pricing policy, then we need to consider all sides of the equation.

The point that 50% of rooftop solar generation is used to offset internal use is important. While it may not be exactly like energy efficiency, it does have the most critical element of energy efficiency, and the additional requirements to implement it are of second-order importance. Otherwise we would think of demand response that uses dispatch controls as similarly distinct from EE; those programs also require additional equipment and different rates, yet we sum those energy savings with LED bulbs and refrigerators.

An important element of the remaining 50% that is exported is that almost all of it is absorbed by neighboring houses and businesses on the same local circuit. Little of the power goes past the transformer at the top of the circuit. The primary voltage and transmission systems are largely unused. The excess capacity that remains on the system is now available for other customers to use. Whether investors should be able to recover their investment at the same annual rate in the face of excess capacity is an important question–in a competitive industry, the effective recovery rate would slow.

Finally, public purpose program (PPP) and wildfire mitigation costs are special cases that cannot simply be rolled up with other utility costs.

  • The majority of PPP charges are a form of a tax intended for income redistribution. That function is admirable, but it shows the standard problem of relying on a form of a sales tax to finance such programs. A sales tax discourages purchases which then reduces the revenues available for income transfers, which then forces an increase in the sales tax. It’s time to stop financing the CARE and FERA programs from utility rates.
  • Wildfire costs are created by a very specific subclass of customers who live in certain rural and wildlands-urban interface (WUI) areas. Those customers already received largely subsidized line extensions to install service and now we are unwilling to charge them the full cost of protecting their buildings. Once the state made the decision to socialize those costs instead, the costs became the responsibility of everyone, not just electricity customers. That means that these costs should be financed through taxes, not rates.

Again, if we are trying to make efficient policy, we need to look at the whole. It is inefficient to finance these public costs through rates, and it is incorrect to assert that an inefficient subsidy is created if a set of customers avoids paying these rate components.

Part 1: A response to “Rooftop Solar Inequity”

Severin Borenstein at the Energy Institute at Haas has plunged into the politics of devising policies for rooftop solar systems. I respond to two of his blog posts in two parts here, with Part 1 today. I’ll start by posting a link to my earlier blog post that addresses many of the assertions here in detail. I then respond to several additional issues here.

First, the claim of rooftop solar subsidies rests on two fallacious premises. The first is that it double counts the stranded cost charge from poor portfolio procurement and management that I referenced above and discussed at greater length in my blog post; take out that cost and the “subsidy” falls substantially. The second is the assumption that solar hasn’t displaced load growth. In reality, utility loads and peak demand have been flat since 2006 and even declining over the last three years. Even the peak last August was 3,000 MW below the record in 2017, which in turn was only a few hundred MW above the 2006 peak. Rooftop solar has been a significant contributor to this decline. Displaced load means displaced distribution investment and gas-fired generation (even though the IOUs have justified several billion dollars in added investment with forecasted “growth” that didn’t materialize). I have documented those phantom load growth forecasts in testimony at the CPUC since 2009. The cost of service studies supposedly showing these subsidies assume a static world in which nothing has changed with the introduction of rooftop solar. Nothing could be further from the truth.

Second, TURN and Cal Advocates have both been pushing against decentralization of the grid for decades, going back to restructuring. Decentralization means that the forums at the CPUC become less important and their influence declines. They have both fought against CCAs for the same reason, and they have been fighting rooftop solar almost since its inception as well. Yet they have failed to push for the incentives enacted in AB57 for the IOUs to manage their portfolios, or to control the exorbitant contract terms and overabundance of early renewable contracts signed by the IOUs that are the primary reason for the exorbitant growth in rates.

Finally, there are many self-citations to studies, along with the claim that the authors have no financial interest. E3 has significant financial interests in studies paid for by utilities, including the California IOUs. While they do many good studies, they have also produced studies with certain key shadings of assumptions that support the IOUs’ positions. As for studies from the CPUC, commissioners frequently direct the expected outcome of these. The results from the Customer Choice Green Book in 2018 are a case in point. The CPUC knows where its political interests are and acts to satisfy those interests. (I have personally witnessed this first hand while being in the room.) Unfortunately, many of the academic studies I see on these cost allocation issues don’t accurately reflect the various financial and regulatory arrangements and have misleading or incorrect findings. This happens simply because academics aren’t involved in the “dirty” process of ratemaking and can’t know these things from a distance. (The best academic studies are those done by people who worked in the bowels of those agencies and then moved to academia.)

We are at a point where we can start seeing the additional benefits of decentralized energy resources. The most important may be the resilience gained by integrating DERs with EVs to ride out local distribution outages (which are 15 times more likely to occur than generation and transmission outages) once the utilities agree to enable this technology, which already exists. Another may be the erosion of the political power wielded by large centralized corporate interests. (There was a recent paper showing how increasing market concentration has led to large wealth transfers to corporate shareholders since 1980.) And this debate has highlighted the elephant in the room: how utility shareholders have escaped cost responsibility for decades, which has led to our expensive, wasteful system. We need to be asking a fundamental question: where is the shareholders’ skin in this game? “Obligation to serve” isn’t a blank check.

Transmission: the hidden cost of generation

The cost of transmission for new generation has become a more salient issue. The CAISO found that distributed generation (DG) had displaced $2.6 billion in transmission investment by 2018. The value of displacing transmission requirements can be determined from the utilities’ filings with FERC and the accounting for new power plant capacity. Using similar methodologies for calculating this cost in California and Kentucky, the incremental cost in both independent system operators (ISOs) is $37 per megawatt-hour, or 3.7 cents per kilowatt-hour. This added cost roughly doubles the cost of utility-scale renewables compared to distributed generation.

When solar rooftop displaces utility generation, particularly during peak load periods, it also displaces the associated transmission that interconnects the plant and transmits that power to the local grid. And because power plants compete with each other for space on the transmission grid, the reduction in bulk power generation opens up that grid to send power from other plants to other customers.

The incremental cost of new transmission is determined by the installation of new generation capacity, since transmission delivers power to substations before it is then distributed to customers. This incremental cost represents the long-term value of displaced transmission. When setting rates for rooftop solar in the NEM tariff, this amount should be used to calculate the net benefits for net energy metered (NEM) customers, who avoid the need for additional transmission investment by providing local resources rather than remote bulk generation.

  • In California, transmission investment additions were collected from the FERC Form 1 filings for 2017 to 2020 for PG&E, SCE and SDG&E. The Wholesale Base Total Revenue Requirements submitted to FERC were collected for the three utilities for the same period. The average fixed charge rate for the Wholesale Base Total Revenue Requirements was 12.1% over that period. That fixed charge rate is applied to the average of the transmission additions to determine the average incremental revenue requirement for new transmission for the period. The plant capacity installed in California for 2017 to 2020 is calculated from the California Energy Commission’s “Annual Generation – Plant Unit” dataset. (This metric is conservative because (1) it includes the entire state while CAISO serves only 80% of the state’s load and the three utilities serve a subset of that, and (2) the list of “new” plants includes a number of repowered natural gas plants at sites with already existing transmission. A more refined analysis would find an even higher incremental transmission cost.)

Based on this analysis, the appropriate marginal transmission cost is $171.17 per kilowatt-year. Applying the average CAISO load factor of 52%, the marginal cost equals $37.54 per megawatt-hour.
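The conversion from the annualized capital figure to an energy charge is a direct arithmetic step; the short check below uses the two values reported in this paragraph:

```python
# Arithmetic check using the figures reported above.
marginal_cost_per_kw_year = 171.17   # $/kW-year from the California calculation
caiso_load_factor = 0.52             # average CAISO load factor
hours_per_year = 8760

cost_per_mwh = marginal_cost_per_kw_year / (hours_per_year * caiso_load_factor) * 1000
print(f"approximately ${cost_per_mwh:.1f}/MWh")   # roughly $37.5/MWh, consistent with the figure above
```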

  • In Kentucky, Kentucky Power is owned by American Electric Power (AEP), which operates in the PJM ISO. PJM has a market in financial transmission rights (FTR) that values relieving congestion on the grid in the short term. AEP files network service rates each year with PJM and FERC. The rate more than doubled from 2018 to 2021, an average annual increase of 26%.

Based on the addition of 22,907 megawatts of generation capacity in PJM over that period, the incremental cost of transmission was $196 per kilowatt-year or nearly four times the current AEP transmission rate. This equates to about $37 per megawatt-hour (or 3.7 cents per kilowatt-hour).