
Modern climate change is now 27 times faster than historic global warming mass extinction events

Steve Hampton has updated his original 2019 analysis, which he prepared while working as an economist at the California Department of Fish and Wildlife. His estimate of the warming rate has now increased to 27 times that of any previous event. The chart is sobering for anyone who believes that the current warming is part of a natural cycle, and it points to a potentially catastrophic result. As Steve wrote recently, “it’s now about 18x, not 10x, faster than the other fastest warming.”

How to properly calculate the marginal GHG emissions from electric vehicles and electrification

Recently, questions about whether electric vehicles increase greenhouse gas (GHG) emissions, and about tracking emissions directly to generation on a 24/7 basis, have gained salience. This focus on immediate grid emissions highlights an important concept that is overlooked when looking at marginal emissions from electricity. The decision to consume electricity is more often driven by a single large purchase or action, such as buying a refrigerator or a new electric vehicle, than by small decisions such as opening the refrigerator door or driving to the grocery store. Yet the conventional analysis of marginal electricity costs and emissions assumes that we can arrive at a full accounting of those costs and emissions by summing the momentary changes in electricity generation, measured in the bulk power markets, created by opening that door or driving to the store.

But that’s obviously misleading. The real consumption decision that creates the marginal costs and emissions occurs when the item is purchased and connected to the grid. And on the other side, the comparative marginal decision is the addition of a new resource, such as a power plant or an energy efficiency investment, to serve that new increment of load.

So in that way, the marginal decision for your flight to Boston is not whether you actually board the plane, which is like opening the refrigerator door, but rather your purchase of the ticket, which led to the airline’s incremental decision to add another scheduled flight. It’s your share of the fuel use for that added flight which is marginal, just as buying a refrigerator is responsible for a share of the energy from the generator added to serve the incremental long-term load.

There are growing questions about the use of short-run market prices as indicators of the market value of generation assets, for a number of reasons. This paper critiquing “surge” pricing on the grid lays out one set of problems that undermine that principle.

Meredith Fowlie at the Energy Institute at Haas compared two approaches to measuring the additional GHG emissions from a new electric vehicle. The NREL paper uses the correct approach of looking at longer-term incremental resource additions rather than short-run operating emissions. The hourly marginal energy use modeled by Holland et al. (2022) is not particularly relevant to the question of GHG emissions from added load for several reasons, and any study that does not use a capacity expansion model will deliver erroneous results. In fact, a simple spreadsheet model built around capacity expansion will deliver more accurate results than a complex hourly production cost model.

In the electricity grid, added load generally doesn’t just require increased generation from existing plants; rather, it induces investment in new generation (or energy savings elsewhere, which have zero emissions) to meet capacity requirements. This is where economists make a mistake in thinking that the “marginal” unit is additional generation from existing plants–in a capacity-limited system such as the electricity grid, it’s investment in new capacity.
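
To make the distinction concrete, here is a minimal sketch contrasting the short-run “operating margin” view with the long-run “build margin” view for an added increment of EV charging load. All of the emission rates, resource shares, and load figures are hypothetical placeholders, not results from NREL, Holland et al., or any actual grid data.

```python
# Illustrative sketch only: emission rates and resource shares are hypothetical
# placeholders, not results from NREL, Holland et al., or CAISO data.

EV_LOAD_MWH = 3_500  # assumed annual charging load for a group of new EVs

# Short-run ("operating margin") view: the added load is served by whatever
# existing fossil unit happens to be on the margin each hour.
hourly_marginal_rate = 0.45  # tons CO2 per MWh, hypothetical gas-on-the-margin rate
short_run_emissions = EV_LOAD_MWH * hourly_marginal_rate

# Long-run ("build margin") view: the added load induces new resources; most new
# capacity additions are renewables, so the blended incremental rate is far lower.
incremental_mix = {"solar": 0.55, "wind": 0.20, "storage": 0.15, "gas": 0.10}  # hypothetical shares
emission_rates = {"solar": 0.0, "wind": 0.0, "storage": 0.0, "gas": 0.40}      # tons CO2 per MWh
build_margin_rate = sum(share * emission_rates[r] for r, share in incremental_mix.items())
long_run_emissions = EV_LOAD_MWH * build_margin_rate

print(f"Short-run view: {short_run_emissions:,.0f} tons/yr")
print(f"Long-run view:  {long_run_emissions:,.0f} tons/yr")
```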

That average emissions are falling, as shown in Holland et al., while hourly “marginal” emissions are rising illustrates this error in construction. Mathematically, that cannot happen if the marginal emission metric is correct. The problem is that Holland et al. have misinterpreted the value they calculated. It is in fact not the first derivative of the total emission function, but rather the second derivative, which measures the change in marginal emissions, not marginal emissions themselves. (And this is why long-run marginal costs are the relevant costing and pricing metric for electricity, not hourly prices.) Given that about 75% of new generation capacity added in the U.S. has been renewable, it’s difficult to see how “marginal” emissions could be rising when the majority of new generation is GHG-free.

The second issue is that the “marginal” generation cannot be identified in ceteris paribus (i.e., all else held constant) isolation from all other policy choices. California has a high RPS and 100% clean generation target in the context of beneficial electrification of buildings and transportation. Without the latter, the former wouldn’t be pushed to those levels. The same thing is happening at the federal level. This means that the marginal emissions from building decarbonization and EVs are even lower than for more conventional emission changes.

Further, those consumers who choose beneficial electrification are much more likely to install distributed energy resources that are 100% emission free. Several studies show that about 40% of EV owners also install rooftop solar, far in excess of the state average (in Australia it’s 60% of EV owners), and they most likely install sufficient capacity to meet the full charging load of their EVs. So the system marginal emissions apply to only 60% of EV owners.
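
A short sketch of that weighting, reusing the hypothetical long-run emission rate from the earlier example and an assumed per-vehicle charging load:

```python
# Hypothetical weighting of grid marginal emissions across EV owners.
grid_incremental_rate = 0.04     # tons CO2/MWh, hypothetical long-run build-margin rate
share_with_rooftop_solar = 0.40  # owners assumed to self-supply their charging with solar
ev_charging_mwh = 3.0            # assumed annual charging load per vehicle, MWh

blended_rate = (1 - share_with_rooftop_solar) * grid_incremental_rate  # solar owners add ~0
print(f"Blended incremental emissions: {blended_rate * ev_charging_mwh:.3f} tons per EV-year")
```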

There may be a transition from hourly (or operational) to capacity expansion (or building) marginal or incremental emissions, but the transition should be fairly short so long as the system is operating near its reserve margin. (What to do about overbuilt systems is a different conversation.)

There’s a deeper problem with the Holland et al. papers. The chart that Fowlie pulls from the article, showing marginal emissions rising above average emissions while average emissions are falling, is not mathematically possible. (See, for example, https://www.thoughtco.com/relationship-between-average-and-marginal-cost-1147863) For average emissions to be falling, marginal emissions must be below average emissions. The hourly emissions are not “marginal” but more likely are the first derivative of the marginal emissions (i.e., the marginal emissions are falling at a decreasing rate). If this relationship holds true for emissions, the same relationship also holds for hourly market prices based on power plant hourly costs.
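
The standard relationship between average and marginal values can be stated compactly. Writing $E(q)$ for total emissions at output $q$, with average $A(q)=E(q)/q$ and marginal $M(q)=E'(q)$, a one-line derivation (standard calculus, nothing specific to electricity assumed) shows why a falling average requires the marginal to sit below it:

```latex
A(q) = \frac{E(q)}{q}, \qquad M(q) = E'(q),
\qquad
\frac{dA}{dq} = \frac{E'(q)\,q - E(q)}{q^{2}} = \frac{M(q) - A(q)}{q},
\qquad
\text{so } \frac{dA}{dq} < 0 \iff M(q) < A(q) \quad (q > 0).
```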

All of that said, it is important to incentivize charging during high renewable hours, but so long as we are adding renewables in a manner that quantitatively matches the added EV load, regardless of timing, we will still see falling average GHG emissions.

It is mathematically impossible for average emissions to fall while marginal emissions are rising if the marginal emission values are ABOVE the average emissions, as is the case in the Holland et al. study. What analysts have heuristically called “marginal” emissions, i.e., hourly incremental fuel changes, are in fact not “marginal” but rather the first derivative of the marginal emissions. Further, the marginal change includes the addition of renewables as well as the change in conventional generation output; marginal must include the entire mix of incremental resources. How the margin is measured, whether via a change in output or over time, doesn’t matter. The bottom line is that the term “marginal” must be used in a rigorous economic context, not in the casual manner that has become common.

Often the marginal costs do not fit the theoretical mathematical construct, based on the first derivative in a calculus equation, that economists point to. In many cases the increment is a very large, discrete one, and each consumer must be assigned a share of that large increment in a marginal cost analysis. The single most important fact is that for average costs to be rising, marginal costs must be above average costs. Right now in California, average costs for electricity are rising (rapidly), so marginal costs must be above those average costs. The only way to arrive at those marginal costs is to go beyond the hourly CAISO price to the incremental capital additions that consumption choices induce. It’s a crazy idea to claim that the first 99 consumers have a tiny marginal cost and then the 100th is assigned responsibility for an entire new addition, such as another scheduled flight or a new distribution upgrade.
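
A sketch of the two allocation rules described above, with entirely hypothetical costs and customer counts:

```python
# Hypothetical illustration: 100 new customers collectively trigger one lumpy
# capacity addition. All dollar figures are placeholders, not utility data.

new_customers = 100
upgrade_cost = 2_000_000.0   # assumed cost of the induced distribution upgrade ($)
energy_cost = 50.0           # assumed short-run energy cost per customer per year ($)

# "Last customer pays" view: 99 customers see only the energy cost,
# and the 100th is billed the entire upgrade.
last_customer_pays = energy_cost + upgrade_cost

# Shared-increment view: every customer that induced the upgrade carries a share.
shared_marginal_cost = energy_cost + upgrade_cost / new_customers

print(f"Last-customer view:    ${last_customer_pays:,.0f}")
print(f"Shared-increment view: ${shared_marginal_cost:,.0f} per customer")
```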

We can consider the analogy to unit commitment, and even further to the continuous operation of nuclear power plants. The airline scheduled that flight in part based on the purchase of the plane ticket, not on the final decision made just before the gate closed. Not flying saves a minuscule amount of fuel, but the initial scheduling decision created the bulk of the fuel use for the flight. In a similar manner, a power plant that is committed several days before an expected peak burns fuel while idling in anticipation of that load. If the load doesn’t arrive, the plant avoids a small amount of fuel use, but focusing only on the hourly price or marginal fuel use ignores the fuel burned at significant cost up to that point. Similarly, Diablo Canyon is run at a constant load year-round, yet there are significant periods–weeks and even months–when Diablo Canyon’s full operational costs are above the average CAISO market clearing price. The nuclear plant is run at full load constantly because its dispatch decision was made at the moment of interconnection, not each hour, or even each week or month, which would make more economic sense. Renewables have a similar characteristic: they are effectively “scheduled and dispatched” at the time of interconnection. That’s when the marginal cost is incurred, not as “zero-cost” resources each hour.

Focusing solely on the small increment of fuel used as the true measure of “marginal” reflects a larger problem that is distorting economic analysis. No one views the marginal cost of petroleum production as the energy cost of pumping one more barrel from an existing well; it is viewed as the cost of sinking another well in a high-cost region, e.g., Kern County or the North Sea. The same needs to be true of air travel and of electricity generation. Adding one more unit isn’t just another inframarginal energy cost–it’s an implied aggregation of many incremental decisions that lead to the addition of another unit of capacity. Too often economics is caught up in the belief that it’s like classical physics and the rules of calculus prevail.

A Residential Energy Retrofit Greenhouse Gas Emission Offset Reverse Auction Program

In most local California jurisdictions, the largest share of stationary emissions will continue to come from existing buildings. On the other hand, achieving zero net energy (ZNE) or zero net carbon (ZNC) for new developments can be cost prohibitive, particularly if incremental transportation emissions are included. A Residential Retrofit Offset Reverse Auction Program (Retrofit Program) aims to balance emission reductions between new and existing buildings to lower overall costs, encourage new construction that is more energy efficient, and incentivize a broader energy efficiency marketplace for retrofitting existing buildings.

The program would collect carbon offset mitigation fees from project developers who are unable to achieve a ZNE or ZNC standard with available technologies and measures. The County would then identify eligible low-income residential buildings to be targeted for energy efficiency and electrification retrofits. Contractors then would be invited to bid on how many buildings they could do for a set amount of money.

The approach proposed here is modeled on the Audubon Society’s and The Nature Conservancy’s BirdReturns Program.[1] That program contracts with rice growers in the Sacramento Valley to provide wetlands in the Pacific Flyway. It asks growers to offer a specified amount of acreage with given characteristics for a set price–that’s the “reverse” part of the auction.

A key impediment to further adoption of energy efficiency measures and appliances is that contractors do not have a strong incentive to “upsell” these measures and products to consumers. In general, contractors pass through most of the hardware costs with little markup; their profits are made on the installation and service labor. In addition, contractors are often asked by homeowners and landlords to provide the “cheapest” alternative measured in initial purchase costs without regard to energy savings or long-term expenditures.

The Retrofit Program is intended to change the decision point for contractors to encourage homeowners and landlords to implement upgrades that would create homes and buildings that are more energy efficient. Contractors would bid to install a certain number of measures and appliances that exceed State and local efficiency standards in exchange for payments from the Retrofit Program. The amount of GHG reductions associated with each type of measure and appliance would be predetermined based on a range of building types (e.g., single-family residential by floor-size category, number of floors, and year built). The contractors can use the funds to either provide incentives to consumers or retain those funds for their own internal use, including increased profits. Contractors may choose to provide more information to consumers on the benefits of improved energy efficiency as a means of increasing sales. Contractors would then be compensated from the Offset Program fund upon showing proof that the measures and appliances were installed. The jurisdiction’s building department would confirm the installation of these measures in the normal course of its permit review work.
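
One way such a reverse auction could clear, sketched with purely hypothetical bids, payments, and predetermined GHG reduction factors, is to rank bids by cost per ton of reduction and fund them until the offset fund is exhausted:

```python
# Hypothetical reverse-auction clearing for the Retrofit Program.
# Bids, dollar amounts, and GHG reductions are illustrative placeholders.

offset_fund = 500_000.0  # dollars collected from developer mitigation fees

# Each bid: (contractor, payment requested, predetermined tons of GHG reduced)
bids = [
    ("Contractor A", 120_000, 800),
    ("Contractor B", 200_000, 1_100),
    ("Contractor C", 90_000, 700),
    ("Contractor D", 150_000, 600),
]

# Rank by cost-effectiveness ($ per ton of reduction), cheapest first.
ranked = sorted(bids, key=lambda b: b[1] / b[2])

remaining = offset_fund
awarded = []
for name, payment, tons in ranked:
    if payment <= remaining:        # fund whole bids until the money runs out
        awarded.append((name, payment, tons))
        remaining -= payment

total_tons = sum(t for _, _, t in awarded)
print(f"Awarded: {[name for name, _, _ in awarded]}")
print(f"Tons reduced: {total_tons:,}, unspent funds: ${remaining:,.0f}")
```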

Funds for the Retrofit Program would be collected as part of an ordinance for new building standards to achieve a no-net increase in GHG emissions. It also could be included as a mitigation measure for projects falling under the purview of the California Environmental Quality Act (CEQA).

The Retrofit Program would be financed by mitigation payments made by building developers to achieve a no-net increase in GHG emissions. Buildings would be required to meet the lowest achievable GHG emission levels, but then would pay to mitigate any remainder, including transportation emissions, charged at the current State Cap and Trade Program auction price for an extended collection of annual allowances[2] that covers emissions over the expected life of the building (e.g., 40 years) (CARB 2024).
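
A minimal sketch of how such a fee might be computed for a single project. The residual emissions and allowance price below are placeholder assumptions; the 40-year building life comes from the example above:

```python
# Hypothetical mitigation fee calculation for a new development.
# Residual emissions and the allowance price are placeholders, not CARB figures.

residual_tons_per_year = 25.0   # emissions remaining after all feasible on-site measures
allowance_price = 35.0          # assumed current Cap and Trade auction price, $/ton
building_life_years = 40        # expected building life covered by the allowance "strip"

mitigation_fee = residual_tons_per_year * allowance_price * building_life_years
print(f"One-time mitigation fee: ${mitigation_fee:,.0f}")
```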

M.Cubed proposed this financing mechanism for Sonoma County in its climate action plan.


[1] See https://birdreturns.org/

[2] Referred to as a “strip” in the finance industry.

Obstacles to nuclear power, but how much do we really need it?

Jonathan Rauch writes in the Atlantic Monthly about the innovations in nuclear power technology that might overcome its troubled history. He correctly identifies the core of the problem for nuclear power, although it extends even further than he acknowledges. Recent revelations about the fragility of France’s once-vaunted nuclear fleet illustrate deeper management problems with the technology. Unfortunately, he is too dismissive of the safety issues and of the hazardous duties that recovery crews experienced at both Chernobyl and Fukushima. Both of those accidents cost those nations hundreds of billions of dollars. As a result of these issues, nuclear power around the world now costs over 10 cents per kilowatt-hour. Grid-scale solar and wind power, in contrast, cost less than four cents, and even adding storage no more than doubles that cost. And this ignores the competition from small-scale distributed energy resources (DER) that could break the utility monopoly required to pay for nuclear power.

Yet Rauch’s biggest error is in asserting without sufficient evidence that nuclear power is required to achieve greenhouse gas emission reductions. Numerous studies (including for California) show that we can get to a 90% emission free and beyond power grid with current technologies and no nuclear. We have two decades to figure out how to get to the last 10% or less, or to determine if we even need to.

The problem with new nuclear technologies such as small modular reactors (SMRs) is that they must be built at a wide scale, as a high proportion of the power supply, to achieve technological cost reductions of the type we have seen for solar and batteries. And to get a low enough cost per kilowatt-hour, those units must run constantly in baseload mode, which only exacerbates the variable output issue for renewables instead of solving it. Running in a load-following mode will increase the cost per kilowatt-hour by 50%.

We should continue research in this technology because there may be a breakthrough that solves these dilemmas. But we should not plan on needing it to save our future. We have been disappointed too many times already by empty promises from this industry.

Paradigm change: building out the grid with renewables requires a different perspective

Several observers have asserted that we will require baseload generation, probably nuclear, to decarbonize the power grid. Their claim is that renewable generation isn’t reliable enough and is too distant from load centers to power an electrified economy.

The problem is that this perspective relies on a conventional approach to understanding and planning for future power needs. That conventional approach generally planned to meet the highest peak loads of the year with a small margin and then used the excess capacity to produce the energy needed in the remainder of the hours. This premise was based on using consumable fuel to store energy for use in the hours when electricity was needed.

Renewables such as solar and wind present a different paradigm. Renewables capture and convert energy to electricity as it becomes available. The next step is to store that energy using technologies such as batteries. That means the system needs to be built to meet energy requirements, not peak loads.

Hydropower-dominated systems have already been built in this manner. For half a century, the Pacific Northwest’s complex on the Columbia River and its tributaries had so much excess peak capacity that it could meet much of California’s summer demand. Meeting energy loads during drought years was the challenge. The Columbia River system could store up to 40% of the annual runoff in its reservoirs to assure sufficient supply.

For solar and wind, we will build capacity that is a multiple of the annual peak load so that we can generate enough energy to meet the loads that occur when the sun isn’t shining and the wind isn’t blowing. For example, in a system relying on solar power, a typical demand load factor is 60%, i.e., the average load is 60% of the peak or maximum load. A typical solar photovoltaic capacity factor is 20%, i.e., it generates an average output that is 20% of its peak output. In this example system, the required solar capacity would be three times the peak demand to produce sufficient stored electricity. The amount of storage capacity would equal the peak demand (plus a small reserve margin) less the amount of expected renewable generation during the peak hour.
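
The sizing arithmetic in that example can be laid out explicitly. The peak demand, reserve margin, and peak-hour solar output below are placeholder assumptions; the load factor and capacity factor come from the example above:

```python
# Energy-first sizing sketch using the example in the text. Peak demand, reserve
# margin, and peak-hour solar output are placeholder assumptions.

peak_demand_mw = 10_000        # assumed system peak (MW)
load_factor = 0.60             # average load / peak load
solar_capacity_factor = 0.20   # average solar output / solar nameplate

# Energy-based sizing: nameplate must cover average energy, not just the peak.
avg_load_mw = peak_demand_mw * load_factor
required_solar_mw = avg_load_mw / solar_capacity_factor   # = 3x peak in this example

# Storage sized to cover the peak (plus a small reserve) net of peak-hour solar.
reserve_margin = 0.05
solar_output_at_peak_mw = 2_000   # assumed solar generation during the peak hour
required_storage_mw = peak_demand_mw * (1 + reserve_margin) - solar_output_at_peak_mw

print(f"Required solar capacity:   {required_solar_mw:,.0f} MW "
      f"({required_solar_mw / peak_demand_mw:.1f}x peak)")
print(f"Required storage capacity: {required_storage_mw:,.0f} MW")
```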

As a result, comparing the total amount of generation capacity installed to the peak demand becomes irrelevant. Instead we first plan for total energy need and then size the storage output to meet the peak demand. (And that storage may be virtually free as it is embodied in our EVs.) This turns the conventional planning paradigm on its head.

Per Capita: Climate needs more than just good will

I wrote this guest column in the Davis Enterprise about the City’s Climate Action and Adaptation Plan. (Thank you John Mott-Smith for extending the privilege.)

Dear Readers, the guest column below was written by Richard McCann, a Davis resident and expert on energy and climate action plans.

————

The city of Davis is considering its first update of its Climate Action and Adaptation Plan since 2010 with a 2020-2040 Plan. The city plans to update the CAAP every couple of years to reflect changing conditions, technologies, financing options, laws and regulations.

The plan does not and cannot achieve a total reduction in greenhouse gas emissions simply because we do not control all of the emission sources — almost three-quarters of our emissions are from vehicles that are largely regulated by state and federal laws. But it does lay out a means of putting a serious dent in the overall amount.

The CAAP offers a promising future and accepts that we have to protect ourselves as the climate worsens. Among the many benefits we can look forward to are avoiding volatile gas prices while driving cleaner, quieter cars; faster and more controllable cooking while eliminating toxic indoor air; and air conditioning and heating without having to make two investments while paying less.

To better adapt, we’ll have a greener landscape, filtered air for rental homes, and community shelter hubs powered by microgrids to ride out more frequent extreme weather.

We have already seen that adding solar panels raises the value of a house by as much as $4,000 per installed kilowatt (so a 5 kilowatt system adds $20,000). We can expect similar increases in home values with these new technologies due to the future savings, safety and convenience. 

Several state and federal laws and rules foretell what is coming. By 2045 California aims to be at zero net GHG emissions. That will require retiring all of the residential and commercial gas distribution lines. PG&E has already started a program to phase out its lines. A change in state rules will remove from the market several large natural gas appliances such as furnaces by 2030.

In addition, PG&E will no longer offer subsidies to developers to install gas lines to new homes starting next year. The U.S. Environmental Protection Agency appears poised to push further the use of electric appliances in areas with poor air quality such as the Sacramento Valley. (Renewable gas and hydrogen will be too expensive and there won’t be enough to go around.)

Without sales to new customers or for replaced furnaces, the cost of maintaining the gas system will rise substantially so switching to electricity for cooking and water heating will save even more money. The CAAP anticipates this transition by having residents begin switching earlier. 

In addition, the recently enacted federal Inflation Reduction Act puts between $400 and $800 billion into funding these types of changes. The California Energy Commission’s budget for this year went from $1 billion to $10 billion to finance these transitions. The CAAP lays out a process for tapping these funding sources for Davis and its residents.

That said, some have objected to the CAAP as being too draconian and infringing on personal choices. The fact is that we are now in the midst of a climate emergency — the City Council endorsed this concern with a declaration in 2019. We’re already behind schedule to head off the worst of the threatening impacts. 

We won’t be able to rely solely on voluntary actions to achieve the reductions we need. That the CAAP has to include these actions proves that people have not been acting on their own despite a decade of cajoling since the last CAAP. While we’ve been successful at encouraging voluntary compliance with easy tasks like recycling, we’ve used mandatory permitting requirements to gain compliance with various building standards including energy efficiency measures. (These are usually enforced at point-of-sale of a house.)

We have a choice of mandatory ordinances, incentives through taxes or fees, and subsidies from grants and funds — voluntary just won’t deliver what’s needed. We might be able to financially help those least able to afford changing stoves, heaters or cars, but those funds will be limited. The ability to raise taxes or fees is restricted due to various provisions in the state’s constitution. So we are left with mandatory measures, applied at the most opportune moments. 

Switching to electricity for cooking and water heating may involve some costs, some or most of which will be offset by lower energy costs (especially as gas rates go up). If you have an air conditioner, you’re likely already set up for a heat pump to replace your furnace — it’s a simple swap. Even so, you can avoid some costs by using a 120-volt induction cooktop instead of 240 volts, and installing a circuit-sharing plug or breaker for large loads to avoid panel upgrades.

The CAAP will be fleshed out and evolve for at least the next decade. Change is coming and will be inevitable given the dire situation. But this change gives us opportunities to clean our environment and make our city more livable.  

Do small modular reactors (SMR) hold real promise?

The economic analyses of the projected costs for small modular reactors (SMRs) appear to rely on two important assumptions: 1) that the plants will run at the capacity factors of current nuclear plants (i.e., 70%-90%+), and 2) that enough will be built quickly enough to gain from “learning by doing” at scale, as has occurred with solar, wind and battery technologies. The problem with these assumptions is that they require SMRs to crowd out other renewables with little impact on gas-fired generation.

Achieving low costs in nuclear power requires high capacity factors, that is, total electricity output relative to potential output. The Breakthrough Institute study, for example, assumes a capacity factor greater than 80% for SMRs. The problem is that the typical system load factor, that is, the average load divided by the peak load, ranges from 50% to 60%. A generation capacity factor of 80% means that the plant is producing 20% more electricity than the system needs. It also means that other generation sources such as solar and wind will be pushed aside by this amount on the grid. Because SMRs cannot ramp up and down to the same degree as load swings, not only daily but also seasonally, the system will still need load-following fossil-fuel plants or storage. It is just the flip side of filling in for the intermittency of renewables.

To truly operate within the generation system in a manner that directly displaces fossil fuels, an SMR will have to operate at a 60% capacity factor or less. Accommodating renewables will lower that capacity factor further. Decreasing the capacity factor from 80% to 60% will increase the cost of an SMR by a third. This would increase the projected cost in the Breakthrough Institute report for 2050 from $41 per megawatt-hour to $55 per megawatt-hour. Renewables with storage are already beating this cost in 2022 and we don’t need to wait 30 years.
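
Because most SMR costs are fixed, the levelized cost scales roughly with the inverse of the capacity factor. A sketch of that adjustment, treating the Breakthrough Institute’s projected $41 per megawatt-hour as entirely fixed-cost driven (a simplifying assumption):

```python
# Levelized cost scaling with capacity factor. Simplifying assumption: the
# $41/MWh projection is treated as fully fixed-cost driven, so cost scales
# inversely with the capacity factor.

projected_cost_at_80 = 41.0   # $/MWh, Breakthrough Institute 2050 projection
cf_assumed = 0.80
cf_realistic = 0.60           # capacity factor consistent with system load factors

adjusted_cost = projected_cost_at_80 * cf_assumed / cf_realistic
print(f"Cost at {cf_realistic:.0%} capacity factor: ${adjusted_cost:.0f}/MWh")  # ~$55/MWh
```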

And the Breakthrough Institute study relies on questionable assumptions about learning by doing in the industry. First, it assumes that conventional nuclear will experience a 5% learning benefit (i.e., costs will drop 5% for each doubling of capacity). In fact, the industry shows a negative learning rate–costs per kilowatt have been rising as more capacity is built. It is not clear how the SMR industry will reverse this trait. Second, the learning-by-doing effect in this industry is likely to be on a per-plant rather than per-megawatt or per-turbine basis, as has been the case with solar panels and wind turbines. The very small unit size of solar panels and turbines allows for off-site factory production with highly repetitive assembly, whereas SMRs will require substantial on-site fabrication that is site specific. SMR learning rates are more likely to follow those for building construction than those for other new energy technologies.

Finally, the report does not discuss the risk of catastrophic accidents. The probability of a significant accident is about 1 per 3,700 reactor operating years. Widespread deployment of SMRs will vastly increase the annual risk because that probability is independent of plant size. Building 1,000 SMRs could increase the risk to such a level that these accidents could be happening once every four years.
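
The frequency arithmetic behind that statement is straightforward:

```python
# Expected accident frequency under the stated historical rate.
accident_rate_per_reactor_year = 1 / 3_700   # significant accidents per reactor-year
smr_fleet_size = 1_000

expected_accidents_per_year = smr_fleet_size * accident_rate_per_reactor_year
years_between_accidents = 1 / expected_accidents_per_year
print(f"Roughly one significant accident every {years_between_accidents:.1f} years")  # ~3.7
```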

The Fukushima nuclear plant catastrophe is estimated to have cost $300 billion to $700 billion. The next one could cost in excess of $1 trillion. This risk adds a cost of $11 to $27 per megawatt-hour.
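
One way to reproduce risk adders of that magnitude is to spread the expected accident cost per reactor-year over a reactor’s annual output. The plant size and capacity factor below are my own illustrative assumptions, not figures stated in the post:

```python
# Risk cost per MWh = (accident probability per reactor-year x accident cost)
# divided by annual generation. Plant size and capacity factor are assumptions
# made here for illustration only.

accident_rate = 1 / 3_700            # per reactor-year
accident_costs = (300e9, 700e9)      # Fukushima cost range, $

plant_mw = 1_000                     # assumed reactor size
capacity_factor = 0.85               # assumed utilization
annual_mwh = plant_mw * 8_760 * capacity_factor

for cost in accident_costs:
    risk_cost_per_mwh = accident_rate * cost / annual_mwh
    print(f"Accident cost ${cost / 1e9:.0f}B -> risk adder ~${risk_cost_per_mwh:.0f}/MWh")
```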

Adding these risk costs on top of the capacity-factor-adjusted cost, the range rises to $65 to $82 per megawatt-hour.

A reply: two different ways California can keep the lights on amid climate change

Mike O’Boyle from Energy Innovation wrote an article in the San Francisco Chronicle listing four ways other than building more natural gas plants to maintain reliability in the state. He summarizes a set of solutions for when the electricity grid can get 85% of its supply from renewable sources, presumably in the next decade. He lists four options specifically:

  • Offshore wind
  • Geothermal
  • Demand response and management
  • Out-of-state imports

The first three make sense, although the amount of geothermal resources is fairly limited relative to the state’s needs. The problem is the fourth one.

California already imports about a fifth of its electric energy. If we want other states to also electrify their homes and cars, we need to allow them to use their own in-state resources. Further, the cost of importing power through transmission lines is much higher than conventional analyses have assumed. California is going to have to meet as much of its demand internally as possible.

Instead, we should be pursuing two other options:

  • Dispersed microgrids with provisions for conveying output among several or many customers who can share the system without utility interaction. Distributed solar has already reduced the state’s demand by 12% to 20% since 2006. This will require that the state modify its laws regulating transactions among customers and act to protect the investments of those customers against utility interests.
  • Replacing natural gas in existing power plants with renewable biogas. A UC Riverside study shows a potential of 68 billion cubic feet, which is about 15% of current gas demand for electricity production. Instead of using this for home cooking, it can meet the limited peak-day demands of the electricity grid.

Both of these solutions can be implemented much more quickly than an expanded transmission grid and building new resources in other states. They just take political will.

What “Electrify Everything” has wrong about “reduce, reuse, recycle”

Saul Griffith has written a book that highlights the role of electrification in achieving greenhouse gas emission reductions, and I agree with his basic premise. But he misses important aspects of two points. First, the need to reduce, reuse and recycle goes well beyond just energy consumption. And second, we have the ability to meet most if not all of our energy needs with the lowest-impact renewable sources.

Reduce, reuse and recycle is not just about energy–it’s also about reducing consumption of natural resources such as minerals and biomass, as well as petroleum and methane used for plastics, and the pollution caused by that consumption. In many situations, energy savings are only a byproduct. Even so, the cheapest way to meet an energy need is almost always to first reduce its use; that’s what energy efficiency is about. So we don’t want to just tell consumers to continue along their merry way and simply switch to electricity. A quarter to a third of global GHG emissions are from resource consumption, not energy use.

In meeting our energy needs, we can largely rely on solar and wind supplemented with biofuels. Griffith asserts that the U.S. would need 2% of its land mass to supply the needed electricity, but his accounting makes three important errors. First, placing renewables doesn’t eliminate other uses of that land, particularly for wind. Acreage devoted to wind can also be used for various types of farming and even open space. In comparison, fossil-fuel and nuclear plants completely displace any other land use. Turbine technology is evolving to limit avian mortality (and even then, it’s tall buildings and household cats that cause most bird deaths). Second, most of the solar supply can be met on rooftops and over parking lots. These locations are cost effective compared to grid-scale sources once we account for transmission costs. And third, our energy storage is literally driving down the road–in our new electric vehicles. A 100% EV fleet in California will have enough storage to meet 30 times the current peak load. A car owner will be able to devote less than 5% of their battery capacity to meet their home energy needs. All of this means that the real footprint can be much less than 1%.
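
A back-of-the-envelope sketch of the scale implied by those last figures. The fleet size, battery capacity, peak load, and household evening use below are my own illustrative assumptions, not numbers from the book or the post:

```python
# Rough sketch of EV fleet storage relative to the state peak. Fleet size,
# battery capacity, peak load, and household evening use are assumptions
# made here for illustration only.

ev_fleet = 25_000_000          # assumed light-duty vehicles in a 100% EV California fleet
battery_kwh = 60               # assumed average usable battery capacity per vehicle
state_peak_gw = 50             # assumed current statewide peak load

fleet_storage_gwh = ev_fleet * battery_kwh / 1e6
print(f"Fleet storage: {fleet_storage_gwh:,.0f} GWh "
      f"(~{fleet_storage_gwh / state_peak_gw:.0f}x one hour of peak load)")

# Share of one battery needed to cover a home's evening needs.
home_evening_kwh = 3           # assumed household use during evening peak hours
print(f"Share of one battery: {home_evening_kwh / battery_kwh:.0%}")
```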

Nuclear power has never lived up to its promise and is expensive compared to other low-emission options. While the direct costs of current-technology nuclear power are more than 12 cents a kilowatt-hour after adding transmission, grid-scale renewables are less than half of that, and distributed energy resources are at least comparable, with almost no land-use footprint and the ability to provide better reliability and resilience. In addition, the potential for catastrophic events at nuclear plants adds another 1 to 3 cents per kilowatt-hour. Small modular reactors (SMRs) have been promoted as a game changer, but we have been waiting for two decades. Nuclear or green hydrogen may emerge as economically viable options, but we shouldn’t base our plans on that.

Guidelines For Better Net Metering; Protecting All Electricity Customers And The Climate

Authors Ahmad Faruqui, Richard McCann and Fereidoon Sioshansi[1] respond to Professor Severin Borenstein’s much-debated proposal to reform California’s net energy metering, which was first published as a blog and later in a Los Angeles Times op-ed.