Jeff McMahon at Forbes wrote a nice two-part series on the existential decisions that utilities face going forward. Part 1 is here, and Part 2 here. I posted earlier a longer article from the New Yorker looking at the changing landscape.
The CitiGPS study makes a unique contribution to the climate change risk literature: reducing GHG emissions will strand investment assets. These assets include both fossil fuel holdings and the equipment that uses those fuels. Protecting those investments is at the heart of much of the resistance to addressing climate change risk. Removing political barriers is probably the single greatest difficulty in implementing policies to mitigate this risk; plenty of policy proposals are already at the ready. Given the apparent urgency of acting, perhaps it’s time to ask whether these asset owners should be compensated by those who will benefit directly, i.e., the rest of us.
Behind the reluctance of political actors to propose this type of solution is the underlying premise of benefit-cost analysis. Economists have unfortunately perpetuated a misconception among the public that so long as total societal benefits exceed costs, a policy is justified even if those bearing the costs are never compensated for their losses. The basis for this is the Kaldor-Hicks efficiency criterion. Market transactions, in contrast, are presumed to occur only if both parties gain--one party fully compensates the other, which is the essence of Pareto efficiency. Public policy casts aside this compensation requirement. Unfortunately this leads to significant redistributional impacts that too often go unexamined. And of course the losers resist these policies, with a ferocity accentuated both by loss aversion (potential losses are felt more strongly than equivalent gains) and by the fact that the losses are usually concentrated among a smaller group of individuals than the widely dispersed benefits.
Too often public agencies run over these interests in pushing for societal benefits without compensating the losers. A recent example I was involved with was the California Air Resources Board’s adoption of the in-use off-road diesel engine regulations. CARB mandated the premature scrappage of construction equipment that had been purchased to comply with previous regulatory mandates from CARB and the US EPA. CARB claimed societal air quality benefits of $13 billion at a cost of $3 billion to the construction industry, yet it never proposed to pay the owners of the equipment for their lost investments. GHG regulation is proceeding down the same path.
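The distinction between the two efficiency criteria can be made concrete with a toy calculation using the CARB figures above. The compensation transfer here is purely illustrative (no such transfer was ever proposed); the point is that the Kaldor-Hicks test passes whether or not the losers are actually paid, while a Pareto improvement requires the payment.

```python
# Toy comparison of the Kaldor-Hicks and Pareto criteria, using the
# CARB off-road diesel figures cited above ($13B benefits, $3B costs).
# The transfer mechanics are hypothetical, not any actual CARB policy.

def kaldor_hicks_passes(total_benefits, total_costs):
    """Policy passes if winners COULD compensate losers, even if they never do."""
    return total_benefits > total_costs

def pareto_improving(benefits_to_winners, costs_to_losers, transfer):
    """Policy is a Pareto improvement only if, after an actual transfer,
    no party is left worse off."""
    winners_net = benefits_to_winners - transfer
    losers_net = transfer - costs_to_losers
    return winners_net >= 0 and losers_net >= 0

benefits, costs = 13.0, 3.0  # $ billions

print(kaldor_hicks_passes(benefits, costs))    # True: passes with no transfer at all
print(pareto_improving(benefits, costs, 0.0))  # False: losers bear $3B uncompensated
print(pareto_improving(benefits, costs, 3.0))  # True: a $3B transfer makes everyone whole
```

The asymmetry in the last two lines is exactly the redistribution that goes unexamined when agencies stop at the benefit-cost ratio.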
If the benefits truly justify adopting a policy, and GHG reductions certainly appear to meet that criterion, then society should be willing to compensate those who made investments under the previous policy environment that endorsed those investments. Certainly there are questions about whether those investors truly held property rights in the resources they used, but that issue should be addressed directly, not through an implicit assumption that no such property rights ever existed. (This question about property rights has been raised in regulating California’s water use.) Too often policy proponents conflate the goal of an improved environment with goals of redistributing wealth. By jumping over the property rights question, wealth can also be redistributed implicitly. Societal equity issues are important, but they shouldn’t be pursued through backdoor measures that make all of us worse off. Requiring politicians and bureaucrats to consider the actual cost of their policy proposals will make us all better off, and maybe even remove obstacles to a better environment.
A look at how commercial and institutional building energy use can be reduced by providing price signals.
Severin Borenstein at the Energy Institute at Haas blogged about the debate over moving to residential fixed charges, and it has started a lively discussion. I added my comment on the issue, which I repost here.
The question of recovering “fixed” costs through a fixed monthly charge raises a more fundamental one: should we revisit whether utilities should be at risk for recovery of their investments? As it stands now, if a utility overinvests in local distribution it faces almost no risk in recovering those costs. As we’ve seen recently, demand has trended well below forecasts since 2006, and there’s no indication that the trend will reverse soon. I’ve testified in both the PG&E and SCE rate cases about how this has led to substantial stranded capacity. Up to now the utilities have done little to correct their investment forecasting methods and continue to ask for authority to make substantial “traditional” investments. Shareholders suffer few consequences from having too much distribution investment--this creates a one-sided incentive, and it’s no surprise that they add yet more poles and wires. Imposing a fixed charge instead of recovering the costs through a variable charge only reinforces that incentive. At least a variable charge gives the utilities some incentive to avoid a mismatch of revenues and costs in the short run, and forces them to think about price effects in the long run. But even that is imperfect.
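The demand-risk point can be sketched with stylized numbers (all hypothetical, chosen only for round arithmetic): when rates are set against a demand forecast, a purely volumetric tariff leaves the utility exposed if demand comes in low, while a purely fixed charge recovers the full revenue requirement regardless.

```python
# Stylized sketch (hypothetical numbers) of why a fixed monthly charge
# insulates a utility from demand risk while a volumetric charge does not.

def revenue(customers, kwh_per_customer, fixed_charge, volumetric_rate):
    """Annual revenue under a two-part tariff: monthly fixed charge plus $/kWh rate."""
    return customers * (12 * fixed_charge + kwh_per_customer * volumetric_rate)

CUSTOMERS = 1_000_000
FORECAST_KWH = 7_000   # per-customer annual use assumed when rates were set
ACTUAL_KWH = 6_000     # demand comes in below forecast
REQUIREMENT = 840e6    # authorized revenue requirement, $

# Tariff A: recover the requirement entirely through a volumetric rate.
rate_a = REQUIREMENT / (CUSTOMERS * FORECAST_KWH)   # $/kWh
# Tariff B: recover the same requirement entirely through a fixed charge.
fixed_b = REQUIREMENT / (CUSTOMERS * 12)            # $/month

rev_a = revenue(CUSTOMERS, ACTUAL_KWH, 0.0, rate_a)
rev_b = revenue(CUSTOMERS, ACTUAL_KWH, fixed_b, 0.0)

print(f"volumetric recovery:   ${rev_a/1e6:.0f}M")  # $720M: short when demand drops
print(f"fixed-charge recovery: ${rev_b/1e6:.0f}M")  # $840M: unaffected by demand
```

The shortfall under the volumetric tariff is the revenue risk that, under current ratemaking, gets shifted back onto ratepayers rather than borne by shareholders.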
When demand was always growing, the issue of risk-sharing seemed secondary, but now it should be moving front and center. This will only become more salient as we move towards ZNE buildings. What mechanism can we give the utilities so that they more properly balance their investment decisions? Is it time to reconsider the model of transferring risk from shareholders to ratepayers? What are the business models that might best align utility incentives with where we want to go?
The lesson of the last three decades has been that moving away from direct regulation toward other, outside incentives has been more effective. Probably the single most effective innovation has been imposing more risk on the providers in the market.
California has devoted as many resources as any state to trying to get the regulatory structure right--and to most of its participants, it’s not working at the moment. Thus the discussion of whether fixed charges are appropriate needs to be framed in the context of the appropriate risk sharing that utility shareholders should bear.
This is not a one-sided discussion about how groups of ratepayers should share the relative risk among themselves for the total utility revenue requirement. That’s exactly the argument the utilities want us to have. We need to move the argument to the larger question of how the revenue requirement risk should be shared between ratepayers and shareholders. The answer to that question then informs what portion of the costs might be considered unavoidable revenue responsibility for the ratepayers (or billpayers, as I recently heard at the CAISO Symposium) and what portion shareholders will need to work at recovering in the future. Framed that way, the discussion has two sides, and revenue requirements are no longer a simple given handed down from on high.
Koichiro Ito has again used a discrete event to construct a “control” for an economic experiment. In this case, he studied PG&E’s 20/20 rebate program in 2004. The “event” is the program’s eligibility date--he compares new customers who connected to service just before and just after that date. He finds that the program had almost no effect on coastal customers but was effective in reducing energy use among low-income inland consumers.
Previously, he had looked at whether tiered-block rates were better at inducing conservation across the entire pool of customers. The final version of that paper was published in February in the American Economic Review. Discerning the true effects of tiered rates has been very difficult due to an endogeneity problem: consumers essentially set their own marginal price by choosing their consumption level. Many studies in both water and electricity have tried to tease out this effect, but the results have always been questionable for this reason.
Ito was able to exploit two key facts in that study: 1) the 2001 California electricity crisis caused rates to rise rapidly, and 2) the SCE and SDG&E service areas are closely interlocked across similar communities in southern Orange County. He was able to run an after-the-fact experiment with two treatment groups that had similar socio-economics and were exposed to the same media market. It’s as if two groups of customers were presented with two different sets of rates from the same utility--a truly unique situation that probably can’t be duplicated. He found that the tiered rates induced no more change in energy use than simple average rates.
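The logic of both designs can be illustrated with synthetic data. Customers on either side of a sharp cutoff (an eligibility date, or a utility border) serve as treatment and control groups, and the estimate is simply the difference in mean consumption. This is a bare-bones sketch, not Ito’s actual estimator, and all numbers below are made up; the true effect is set to zero to mirror the tiered-rates finding.

```python
# Minimal sketch of the comparison behind a natural-experiment design like
# Ito's: units on either side of a sharp cutoff are otherwise similar, so
# the difference in their means estimates the treatment effect.
# All data are synthetic; the true effect is set to zero by construction.
import random

random.seed(0)

def simulate_customer(treated, effect):
    """Monthly kWh: a common baseline plus noise, minus any treatment effect."""
    baseline = 600 + random.gauss(0, 40)
    return baseline - (effect if treated else 0.0)

effect = 0.0  # tiered rates vs. average rates: no differential effect
control = [simulate_customer(False, effect) for _ in range(5000)]
treated = [simulate_customer(True, effect) for _ in range(5000)]

diff = sum(treated) / len(treated) - sum(control) / len(control)
print(f"estimated treatment effect: {diff:.1f} kWh")  # near zero, as constructed
```

The credibility of the design rests entirely on the two groups being comparable in everything but the treatment, which is why the interlocked SCE/SDG&E territories were such an unusual opportunity.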
Well-done studies like these should prompt policymakers to ask whether complicated proposals that seem to mitigate various concerns are truly effective. In these two cases, the answers are largely “no”.
Rory Christian of EDF has written about using performance-based ratemaking “+” (PBR+) in New York’s Reforming the Energy Vision proceeding. EDF, in taking an important step for an environmental advocate, recognizes the importance of providing the right economic incentives for market participants to achieve environmental goals. Prescriptive solutions too often are misguided and inflexible, leading to failure and high costs.
That said, PBR+ may not be the best solution (and I don’t have an immediate answer to this question). PBR hasn’t had a great track record in California. Diablo Canyon suffered from excessive costs that led to the push for restructuring. The competitive transition charge (CTC) opened the door to market manipulation. And the CPUC couldn’t say “no” when it awarded incentives for questionable energy efficiency gains. Other jurisdictions have had mixed results. Mechanism design is critically important to making PBR work.
Taking a step back from specific policy proposals, an important perspective to consider is that the “regulated utility” is not the same as “utility shareholders.” Shareholders are the true stakeholders in the discussion about the new utility business model. (Utility managers may hijack that role but that probably is not a sustainable position.) So we should be looking outside the box of standard regulatory tools, even PBRs, and ask “how else can utility shareholders see value from the electricity industry outside of their regulated utility affiliate?” There are potential models for alternative approaches that might ease the political and economic transition to the new energy future.
Chuck Goldman at Lawrence Berkeley National Lab made a presentation on the various business model options that are available. The Energy Services Utility (ESU) is an option that deserves greater exploration, particularly in concert with a distributed system operator (DSO). An ESU might provide a model for utility holding company shareholders to participate. But the devil could be in the details.
Improvement in the performance and cost of new and existing technologies is a function of responses to a mix of market and regulatory signals. Finding empirical measures of the different influences on innovation is difficult because those influences confound one another. Yet we may be able to look at broader economic trends to discern the relative merits of different approaches.
The most salient example may be the comparative performance of the two Germanys after the fall of the Berlin Wall. The Allies conducted a 45-year experiment: Germany was split after World War II into two nations with largely equivalent cultures and per capita endowments, but one used a largely market-based economy while the other relied on central economic planning. When the two reunited in 1990, the centrally planned East was significantly behind in both overall well-being and in technological innovation and adoption. West Germany had doubled the economic output of centrally planned East Germany.
More importantly, West Germany had become one of the most technologically advanced and environmentally benign economies while East Germany was still reliant on dirty, obsolete technologies. For example, a coal-to-oil refinery in the former East Germany was still using World War II-era technology. West Germany’s better environmental outcome probably arose from the fact that firms and the government stood in an adversarial relationship: firms focused on the most efficient use of resources and were insulated from political interest-group pressures. Resource allocation decisions in East Germany, by contrast, also had to accommodate interest-group pressures that protected old technologies and industries because these were state-owned enterprises.
The transformation of the West German economy, both technological and institutional, was akin to what we will need to meet current GHG reduction goals and beyond. More clearly than any other example, this demonstrates how reliance on central planning, however attractive it appears for achieving specific goals, can be overwhelmed by the complexity of our societies and economies. Despite explicit policies to pursue technological innovation, the market-based system progressed much more rapidly and further.