Category Archives: Energy innovation

Emerging technologies and institutional change to meet new challenges while satisfying consumer tastes

Using an event to measure energy savings program effectiveness (again)

Koichiro Ito has again used a discrete event to construct a “control” for an economic experiment. In this case he studied PG&E’s 20/20 rebate program in 2004. The “event” is the program’s eligibility date: he compares new customers who connected to service just before and after that date. He finds that the program had almost no effect on coastal customers but was effective in reducing energy use among low-income inland consumers.
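The logic of the design can be sketched with synthetic data: because customers connecting just before and just after the cutoff are otherwise similar, a simple comparison of the two groups recovers the program effect without a randomized trial. Everything in this sketch (the cutoff, effect size, noise level, and sample window) is invented for illustration, not taken from Ito’s paper:

```python
import random

random.seed(0)

# Customers who connected to service just before the eligibility date were
# enrolled in the rebate program ("treated"); those connecting just after
# were not ("control"). All numbers below are made up for illustration.
CUTOFF_DAY = 0            # connection day relative to the eligibility date
TRUE_EFFECT = -0.04       # assumed 4% usage reduction among enrollees

def usage_change(connect_day):
    """Simulated % change in a customer's summer usage vs. the prior year."""
    noise = random.gauss(0.0, 0.02)
    return noise + (TRUE_EFFECT if connect_day < CUTOFF_DAY else 0.0)

# Compare customers connecting within a narrow window around the cutoff,
# where treatment status is as good as randomly assigned.
treated = [usage_change(d) for d in range(-30, 0) for _ in range(20)]
control = [usage_change(d) for d in range(0, 30) for _ in range(20)]

estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Estimated program effect: {estimate:+.1%}")
```

With enough customers in the window, the difference in means lands close to the assumed effect; the narrow window is what makes the comparison credible.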

Previously, he had looked at whether tiered-block rates were better at inducing conservation across the entire pool of customers. The final version of his paper was published in the February issue of the American Economic Review. Discerning the true effects of tiered rates has been very difficult because of an endogeneity problem: consumers essentially set their own marginal price by choosing their consumption level. Many studies in both water and electricity have tried to tease out this effect, but the results have always been questionable for this reason.
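The endogeneity problem is easy to see in a toy increasing-block tariff: the marginal price a customer faces is determined by the quantity the customer chooses, so marginal price can’t be treated as an exogenous variable when estimating demand. The tier widths and rates below are invented for illustration, not actual utility tariffs:

```python
# Toy increasing-block (tiered) tariff: the price of the next kWh depends on
# how much the customer has already consumed this month. Tier widths and
# rates are invented, not actual PG&E/SCE schedules.
TIERS = [(300, 0.13), (400, 0.20), (float("inf"), 0.32)]  # (kWh in tier, $/kWh)

def bill(kwh):
    """Total monthly bill under the tiered tariff."""
    total, remaining = 0.0, kwh
    for width, rate in TIERS:
        used = min(remaining, width)
        total += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return total

def marginal_price(kwh):
    """Price of one more kWh at a given consumption level."""
    return bill(kwh + 1) - bill(kwh)

# The customer's chosen quantity determines the marginal price observed:
print(f"{marginal_price(250):.2f}")  # tier 1
print(f"{marginal_price(500):.2f}")  # tier 2
print(f"{marginal_price(900):.2f}")  # tier 3
```

Because high consumers mechanically face high marginal prices, a naive regression of consumption on marginal price conflates the rate design with the consumer’s own choice, which is the problem Ito’s natural experiment sidesteps.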

Ito was able to use two key facts in the tiered-rates study: 1) the 2001 California electricity crisis caused rates to rise rapidly, and 2) the SCE and SDG&E service areas are closely interlocked across similar communities in southern Orange County. He was able to run an after-the-fact experiment with two treatment groups that had similar socio-economics and were exposed to the same media market. It’s as if two groups of customers were presented with two different sets of rates from the same utility: a truly unique situation that probably can’t be duplicated. He found that the tiered rates induced no more change in energy use than simple average rates.

Well-done studies such as these should prompt policymakers to ask whether complicated proposals that seem to mitigate various concerns are truly effective. In these two cases, the answers are largely “no”.

What are the missing questions in California’s distribution planning OIR?

The CPUC has opened a long-awaited rulemaking to revisit (or maybe visit for the first time!) how utilities should plan their distribution investments to better integrate distributed energy resources (DER). State law now requires the utilities to file distribution plans by next July, but the CPUC may want to consider some deeper questions while formulating its policies.

To date the utilities have pretty much been able to make such investments with little oversight. For one client, AECA, we submitted testimony pointing out that PG&E had consistently overforecast demand and used those forecasts to justify new distribution investment that probably is unneeded. Based on a corrected forecast recognizing that PG&E’s (and the state’s) demand has turned downward since 2007, PG&E’s loads don’t return to 2007 levels until at least 2014. (We found a similar pattern in SCE’s 2012 GRC filings.)

 

AECA – PG&E 2014 GRC Testimony: Comparing Demand Forecasts

Both PG&E and SCE justified new investment based on phantom load growth; they would have been better served by showing what investment might be required for the evolving electricity market. SCE has since responded with its Living Pilot, which tests how best to integrate preferred resources.

The CPUC is relying on Paul De Martini’s More than Smart paper as a roadmap for the rulemaking. The CPUC has asked a number of questions to be addressed by September 4, with replies due September 17; a workshop is to be held September 18. Beyond those questions, two more come to mind.

First, who will be allowed to play in the DER world? The OIR asks about non-IOU ownership of distribution lines, particularly related to microgrids, but it doesn’t consider the flip side: can utilities or their affiliates participate in the DER market? When market rules are set in the face of rapid evolution and uncertainty, incumbents will look to protect their current interests unless they are shown a clear opportunity to gain from the new market. The CPUC ignores the political economy of rulemaking at our peril.

Second, how is this proceeding to be integrated with the multitude of other proceedings at the CPUC that set various resource targets? The LTPP, energy efficiency, demand response and solar initiatives, along with others, all seem to run on parallel tracks with little interactive feedback. Megawatt targets seem to be set arbitrarily, with little evaluation of comparative resource costs and effectiveness and, more importantly, of how these resources might best integrate with each other. How are the utilities to adapt to the spread of DER if the CPUC hasn’t considered how much DER might be installed?

Both of these questions are about market functionality. Who are the likely participants? What are their incentives to act in different situations? How would the CPUC prefer that they act? How are price signals to be coordinated to create the preferred incentives? The system investment and operation rules are a necessary component of anticipating the market’s evolution, but they are not sufficient. California ignored the incentives of market participants in its previous restructuring experiment, at a cost of $20 to $40 billion. We should take heed of what we’ve learned from the past about the paradigm we should use to approach this impending change.

Looking beyond performance based ratemaking in New York’s Utility 2.0

Rory Christian of EDF has written about using performance-based ratemaking “+” (PBR+) in New York’s Reforming the Energy Vision proceeding. EDF, in an important step for an environmental advocate, recognizes the importance of providing the right economic incentives for market participants to achieve environmental goals. Prescriptive solutions too often are misguided and inflexible, leading to failure and high costs.

That said, PBR+ may not be the best solution (and I don’t have an immediate answer to this question). PBR hasn’t had a great track record in California. Diablo Canyon suffered from excessive costs that led to the push for restructuring. The competitive transition charge (CTC) opened the door for market manipulation. And the CPUC couldn’t say “no” when it awarded incentives for questionable energy efficiency gains. Other jurisdictions have had mixed results. Mechanism design is critically important to making PBR work.

Taking a step back from specific policy proposals, an important perspective to consider is that the “regulated utility” is not the same as “utility shareholders.” Shareholders are the true stakeholders in the discussion about the new utility business model. (Utility managers may hijack that role but that probably is not a sustainable position.) So we should be looking outside the box of standard regulatory tools, even PBRs, and ask “how else can utility shareholders see value from the electricity industry outside of their regulated utility affiliate?” There are potential models for alternative approaches that might ease the political and economic transition to the new energy future.

Chuck Goldman at Lawrence Berkeley National Lab made a presentation on the various business model options that are available. The Energy Services Utility (ESU) is an option that deserves greater exploration, particularly in concert with a distributed system operator (DSO). An ESU might provide a model for utility holding company shareholders to participate. But the devil could be in the details.

Guest Post: The importance of engaging electricity consumers

My partner at M.Cubed Steven Moss wrote this editorial for The Potrero View on how we need to engage consumers when developing a vision of how the electricity future might evolve:

Multiple corporate monopolies have emerged, thrived, and withered over the last hundred years. Railroads, telegraph and telephone services, air transportation, network television and newspapers all had highly lucrative heydays, but were ultimately cut down to size by a combination of government anti-trust activities and new technologies. Today there’s a plethora of transportation, communication, information, and entertainment services, most offered at lower cost or with greater value than what was on the former cartels’ menu.
The societal conversation continues over how to best manage quasi-monopolies, like cable and Internet services. Water utilities are struggling with how to pay for themselves in an era in which reducing consumption is essential to addressing chronic scarcity. But the monopoly sector most ripe for rapid change is the almost half-trillion-dollar electricity sector.
Throughout the U.S. electricity is provided by a mix of municipal, cooperative, and investor-owned utilities (IOUs), each with a lock on delivering large aspects of the service in their home territories. In California the three large IOUs — San Diego Gas and Electric, Southern California Edison, and Pacific Gas and Electric (PG&E) — have carved up the lion’s share of the state’s monopoly electricity market. All of them face a business model that’s been buffeted by the rapid policy-driven onset of renewables and the emergence of other technologies that aren’t as dependent on a large, capital-intensive hub — fossil fuel or nuclear power plant — and spokes — transmission and distribution — system to operate.
Today, a home or business can install devices to capture sunshine or wind and cope with intermittent power flows by managing the timing of their energy consumption and installing a storage device, which could include harnessing the battery in the electric vehicle parked in the garage. These types of systems may work best when they’re combined at the multiple-neighborhood level, to create a portfolio of resources that can reduce the risk that the failure of one device will have catastrophic outage consequences. The optimal size for a next generation grid may be roughly half the size of San Francisco, a back-to-the-future system that mirrors the more than 100 small service providers that combined more than a century ago to create PG&E.
Institutional change is tricky, though, when it comes to electricity. Although rates are high in California, household bills — outside the Central Valley in the summer — are generally modest as a result of the state’s mild climate. There’s solid service reliability, with the IOUs generally doing a fine job restoring post-storm outages. And, thanks to public policies, low-income families receive substantial subsidies, while the grid has grown increasingly green. Outside San Francisco — and post-natural-gas-disaster San Bruno — where tilting at PG&E is an ideological battle rather than an economic one, these characteristics mute the potential for widespread ratepayer revolt and encourage consumer advocacy groups to protect the existing monopoly system.
Yet without change, electricity service is poised to get much more expensive, and probably less green. Renewable intermittency — production drops when the sun doesn’t shine — doesn’t match with the current system, creating gaps that could be plugged by costly and polluting fossil fuel power plants, eroding much of the environmental gains achieved over the past decade. Despite substantial technological innovation which should spur price competition, utility rates are consistently rising, in part because two competing paradigms — New Age renewables, and Industrial Age fossil fuels — are being simultaneously pursued for political reasons.
The seeds of a solution are in creating more knowledge. Consumers are almost entirely ignorant of how the timing of their electricity use influences costs. Electricity rates don’t reflect the underlying expense — to the environment or grid — of providing service in a given time and place. Since price-based feedback to the IOUs is significantly muted, the monopolies operate as if demand is largely immune to change, and must be met by increasing amounts of generation to ensure reliability.
The pathways we take as the grid wobbles in the face of renewable disruption will determine how much we pay, out of our pockets and through dirtier air, for the next few decades. Fortunately, there’s a ready way to remold the monopoly electric utility industry: get the prices right. If rates reflected the true costs of service — including greenhouse gas and polluting air emissions — consumers and businesses would take action to change their consumption patterns, aided by high-technology companies eager to solve profitable problems. The Internet of Things would become the Energy System of Things, with renewables, storage, and a host of communicating devices connected to optimize energy use in an environmentally sustainable way.
Offering transparent electricity prices won’t solve all of the grid’s challenges. But not doing so walls off essential innovation. Renewables and emerging technologies, combined with clever tariffs, could help ensure that California never builds another fossil fuel power plant. The state can protect low-income households from onerous electricity bills, by directly paying for energy efficiency investments, or providing bill credits. A small is beautiful ethos can emerge to rival the large, reliable, monopolies in providing high-quality services. If we get the prices right.

Identifying the barriers to transportation fuel diversity

Tim O’Connor of EDF writes about the benefits of transportation fuel diversification at EDF’s California Dream 2.0. I think that fuel diversity is a useful objective, but achieving it will be difficult due to the network externalities inherent in transportation technologies. Gasoline and diesel vehicles became dominant because single-fuel refueling networks are more cost effective for both vendors and customers, and they reduce drivers’ search costs in finding stations. Think of how many fueling stations someone might have to pass to reach their particular energy source. Investing in a particular fuel requires a certain level of revenue; note how many local gas stations have closed because they didn’t have enough sales.

For a more recent example, we can look at cell phone operating systems. Initially each manufacturer had its own system, but now virtually all phones run on two systems, Android and iOS, while Windows Phone 8 keeps trying to make inroads.

We need to be very aware of the fueling network economics when pushing for new transportation energy sources. Investing in a system is as much a set of business decisions as a policy decision. One approach might be to focus on using particular fuels in a narrow set of sectors and discourage broad sector-wide use. Another might be to use a geographic focus and to set up means of interconnecting across those geographies.

Distribution system operator rising

Two recent papers propose a new approach to managing the distribution grid by creating a “distribution system operator” (DSO). The DSO would control the local low-voltage grid between the substations and the customers’ meters, much as the independent system operators (e.g., CAISO, PJM, MISO, NEISO, NYISO) run the high-voltage transmission grid above the substations. The transmission and distribution system would be run as an open-access system, much as how many natural gas utilities are run now.

Lorenzo Kristov and Paul De Martini have written about this approach, focusing on the technical issues. They are agnostic on ownership; in my (frequent) conversations with Kristov, he has said that the DSO could be either owned by the existing utility or spun off.

Former FERC Chair Jon Wellinghoff and James Tong of Clean Power Finance have addressed the ownership / management issue, proposing that the DSO be independent. They also have proposed that regulated utilities be allowed to own distributed generation on the customer side of the meter.

An important issue yet to be addressed in the creation of (I)DSOs, though, is transition and sustainability. The creation of ISOs was politically traumatic, and creating IDSOs will face even more risk-averse political opposition, particularly in the West after the energy debacle of 2000-01. We’ve also seen that ISOs are not particularly cost sensitive because they are largely insulated from direct cost regulation of the capital assets that they manage (a classic “agency” problem). Because transmission is such a small portion of overall rates, the ISOs have been able to fly under the radar–but that may change soon.

Finally, it’s not clear how shareholders will view the change in asset ownership, management and returns. I wrote about this previously in discussing the emergence of the “peer to peer” economy. Ensuring that shareholders don’t lose substantial value, even as the risk profile changes, will be key to easing the political process. There are alternative models for easing the asset management transition in ways that do not threaten current shareholders, and they are better than simply relying on regulated utilities to do more of the same. Market forces are important in driving the innovation needed to transition the electricity system. More on that another time.

Repost: Californians Can Handle the Truth About Gas Prices

Sev Borenstein writes about the two sides of the argument on whether transportation fuels should be rolled into the cap-and-trade program in January 2015.

I have an observation that has only been alluded to indirectly in the debate. The main point of the legislators’ letter calling for a delay in implementation is that low-income groups may be particularly hard hit. The counterargument that we need transportation fuels under the cap to incent innovation seems to pit the plight of the poor against the investment risk of wealthy entrepreneurs. We haven’t really done a good job of addressing the affordability of the transformative policies that can change GHG emissions. A proposal to rebate carbon tax revenues to low-income taxpayers was floated at the national level, but it died with the rest of the national cap-and-trade proposal. A similar proposal was made to mitigate electricity price impacts.

Our state legislators are rightfully concerned about the impacts on those among us who have the least. Nevertheless, that problem is easily addressed with the tools and resources already available to the state. Families and households who now qualify for the CARE and FERA electric and natural gas utility rate discounts could be made eligible for an annual rebate equal to average annual gasoline consumption multiplied by the GHG allowance cost embedded in the gasoline price. This rebate could be funded out of the state’s allowance revenue fund. For example, if the price is increased by 15 cents per gallon and the average automobile uses 650 gallons per year, an eligible household could receive $97.50 for each car.

About 30% of households are currently eligible for CARE or FERA. On a statewide basis, the program would cost about $650 million, which is comparable to the cost for CARE for a single utility like PG&E or Southern California Edison. Those legislators who are most concerned can coauthor legislation to put this program in place.
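The arithmetic scales up as follows. The per-gallon allowance cost and gallons per car are the figures quoted above; the household count, eligibility share, and cars per household are my round placeholder assumptions, not figures from the post:

```python
# Back-of-the-envelope rebate arithmetic from the figures in the text.
allowance_cost_per_gallon = 0.15   # $/gal embedded GHG allowance cost
gallons_per_car_per_year = 650     # average annual gasoline use per car

rebate_per_car = allowance_cost_per_gallon * gallons_per_car_per_year
print(f"Rebate per car: ${rebate_per_car:.2f}")

# Rough statewide cost. The ~12.8 million CA households, 30% CARE/FERA
# eligibility share, and ~1.7 cars per eligible household are placeholder
# assumptions used only to show the order of magnitude.
households = 12_800_000
eligible_households = 0.30 * households
cars_per_household = 1.7
statewide_cost = eligible_households * cars_per_household * rebate_per_car
print(f"Statewide cost: ${statewide_cost / 1e6:.0f} million")
```

Under these assumptions the total lands in the neighborhood of the roughly $650 million statewide figure cited above.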

(BTW, I think the DOE fuel use calculator is outdated–on my many trips to LA I haven’t seen these types of fuel economy changes. My average MPG is pretty much the same no matter how much traffic there is on I-5.  But that’s just a fun fact aside…)

What is the true price for renewable energy power?

The renewable energy market has been in upheaval since the collapse of the financing sector in 2008. The withdrawal of easy money and uncertainty over federal tax policy have increased perceived risk. Large firms have been shedding renewables subsidiaries, and promising newcomers have dropped high-profile projects: Waste Management just sold Wheelabrator, exiting the waste-to-energy market, and BrightSource suspended its Hidden Hills solar thermal project. Much of this activity is driven by the perception that wholesale electricity market prices are falling and that the underlying fundamentals will lead to further declines.

This perception is misplaced, however. Short-run electricity market prices are falling as natural gas becomes cheaper and, more importantly, as fossil fuel generation is squeezed out by increasing renewables and falling demand. But the marketplace hasn’t yet adjusted to the fact that natural gas generation is no longer the only marginal resource. In California, the renewables portfolio standard (RPS) means that at least 33% of marginal generation must come from renewable resources. When capital costs are correctly figured in, and more long-term contracts are offered to match those deferred resources, power purchase agreement (PPA) prices for the right types of resources should increase, not decrease.

The problem is that the industry hasn’t been able to adjust its procurement model to reflect this new reality. I think this is coming from a combination of utilities continuing to maintain their monopsony (single buyer) position, risk averse regulatory agencies still relying on an obsolete procurement regulatory process, and those agencies enforcing the monopsony power of the utilities in the name of protecting ratepayers. This may not change until there is public acknowledgement that this situation exists. The difficulty is finding the right stakeholders with enough sway to raise the issue.

Repost: California Dream – How Big Data Can Fight Climate Change in Los Angeles

EDF and UCLA have created an interesting visual presentation on the potential for solar power and energy savings in LA County, overlaid with socio-economic characteristics. (But I have some trouble with the representation of a few West LA communities as disadvantaged with high health risk–is that the UCLA campus?)

What might we expect for diffusion of new decentralized energy technologies?

Technologies and policies that enhance the development of decentralized energy resources have generated increasing interest over the last couple of years. I’ll write more in the future about the underlying drivers, both technological and institutional.

I’ve been interested in the questions of where we stand and how long diffusion of these new technologies might take. We can look back and see how technology transformed lives in just a couple of decades. Compare kitchens from 1900 and 1930: if we walked into the earlier kitchen, most of us would be lost, but we could whip up a meal in 1930.

1930s Kitchen; Photo Credit: Henry Ford Museum

Or the rapid adoption of autos. In 1909, people could stand in the middle of Pike Street in Seattle and talk:

Seattle – Pike Street, 1909

Not so safely in 1930:

Do we stand today at a point just at the onset of a new technological evolution?

One question to be answered is whether our institutional settings will allow these new technologies. In one case, it appears that Germany has already chosen its road. But in the US, whether we rely on central power stations using transmission lines may still be a question in play. That deserves a separate post of its own.

If we assume that we choose the decentralized path, what might we expect for when these technologies will be adopted widely? A couple of graphics illustrate historic diffusion rates. This one is from VisualEconomics via The Atlantic:

Another one from Forbes via The Technium shows the parallel development paths (however, I don’t like starting at the year of invention instead of a threshold adoption level):

One might interpret the upper graph as showing accelerating adoption rates. But I interpret the lower chart as illustrating at least two factors that drive diffusion: the relative importance of network infrastructure, and expense relative to individual wealth. Autos, telephones and electricity all required construction of a large network of roads or wires, often funded with public investment; individuals can’t choose to adopt the technology until a larger public decision is made to facilitate that adoption. As for expense, refrigerators and dishwashers were large household investments for many years, and cars are still a large single expenditure. On the other hand, cell phones, radios and televisions quickly became inexpensive, which lubricated diffusion. We need another graphic showing how diffusion rates relate to these two axes.

We are still unsure where decentralized energy technologies will fall among these characteristics. They may seem small and inexpensive, but enough solar panels to power a house will still cost several thousand dollars for the foreseeable future. And how much electric network investment is required to integrate these resources is at the center of the debate over technology policies.

Too often, studies making forecasts and policy recommendations don’t consider what adoption rates are feasible or probable. Occasionally, however, a study comes along that incorporates this concept as its centerpiece. A good example is the Clean Energy Vision Project’s Western Grid 2050 report. Led by a former colleague, Carl Linvill, who’s now at the Regulatory Assistance Project, it looked at several different scenarios for technology diffusion. Such studies give us a better understanding of what’s actually possible rather than what we wish for.