Is the NASDAQ water futures market transparent enough?

Futures markets are settled either physically, with actual delivery of the contracted product, or in cash, based on the difference between the futures contract price and the actual purchase price. The NASDAQ Veles California Water Index futures market is a cash-settled market. In this case, the “actual” price is constructed by a consulting firm based on a survey of water transactions. Unfortunately, this method may not be fully reflective of true market prices and, as we found in the natural gas markets 20 years ago, such survey-based prices can be easily manipulated.
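
To make the settlement mechanics concrete, here is a minimal sketch of how a cash-settled position pays off against the index. The contract multiplier below is a placeholder for illustration, not the actual NQH2O contract specification.

```python
# Illustrative cash settlement of a water index futures position.
# The acre-feet-per-contract multiplier is an assumption for illustration,
# not the actual NQH2O contract specification.

def cash_settlement(futures_price, settlement_index, contracts, acre_feet_per_contract=10):
    """Return the cash paid to (positive) or by (negative) a long position."""
    return (settlement_index - futures_price) * acre_feet_per_contract * contracts

# A buyer who locked in $700/acre-foot settles against an index of $750/acre-foot.
print(cash_settlement(700.0, 750.0, contracts=5))  # 5 * 10 AF * $50 = $2,500
```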

Most commodity futures markets, such as crude oil or pork bellies, have a specific delivery point, such as Brent North Sea Crude or West Texas Intermediate at Cushing, Oklahoma, or Chicago for some livestock products. There is also an agreed-upon set of standards for the commodities, such as quality and delivery conditions. The problem with the California Water Index is that these various attributes are opaque or even unknown.

Two decades ago I compiled the most extensive water transfer database to date in the state. I understand the difficulty of collecting this information and properly classifying it. The bottom line is that there is not a simple way to clearly identify what is the “water transfer price” at any given time.

Water supplied for agricultural and urban uses in California has many different attributes. First is where the water is delivered and how it is conveyed. While water pumped from the Delta gets the most attention, surface water comes from many other sources in the Sacramento and San Joaquin Valleys, as well as from the Colorado River. The cost to move this water varies greatly by location, ranging from gravity-fed deliveries to a 4,000-foot lift over the Tehachapis.

Second is the reliability and timing of availability. California has the most complex set of water rights in the U.S., and most watersheds are oversubscribed. Water delivered under a senior right during the summer is more valuable than water delivered under a junior right in the winter.

Third is the quality of the water. Urban districts will compete for higher quality sources, and certain agricultural users can use higher salinity sources than others.

A fourth dimension is that water transfer agreements are signed for different periods and delivery conditions, as well as other terms that directly affect prices.

All of these factors lead to a spread in prices that is not well represented by a single price “index”. This becomes even more problematic when a single entity such as the Metropolitan Water District enters the market and purchases one type of water, which skews the “average.” Bart Thompson at Stanford has asked whether this index will sufficiently reflect local variations.

Finally, many of these transactions are private deals between public agencies that do not reveal key attributes of these transfers, particularly price, because there is no open-market reporting requirement. A subsequent study of the market by the Public Policy Institute of California required explicit cooperation from these agencies and months of research. Whether a “real time” index is feasible in this setting is a key question.

The index managers have not been transparent about how the index is constructed. The delivery points are not identified, nor are the sources. It is not stated whether transfers are segmented by water right and term, or whether certain short-term transfers, such as the State Water Project Turnback Pool, are included. Without this information, it is difficult to assess the veracity of the reported index, and equally difficult to forecast its direction.

The housing market has many of these same attributes, which is one reason why you can’t buy a house from a central auction house or from a dealer. There are just too many different dimensions to be considered. There is a housing futures market, but housing has one key difference from the water transfer market: the price and terms of each sale are publicly reported to a government agency (usually a county assessor). Companies such as CoreLogic collect and publish this data, which is then distributed by Zillow and Redfin.

In 2000, natural gas prices into California were summarized in a price index reported by Natural Gas Intelligence. The index was based on a phone survey that did not require verification of actual terms. As the electricity crisis broke that summer, gas traders found that they could push gas prices for sales to electricity generators higher simply by misreporting those prices or by making multiple sequential deals that ratcheted up the price. The Federal Energy Regulatory Commission and the Commodity Futures Trading Commission were forced to step in and establish standards for price reporting.
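
As a stylized illustration of the mechanics (not the actual index methodology or real trades), here is how a volume-weighted survey index can be moved by a handful of inflated reports:

```python
# Hypothetical illustration of how a survey-based, volume-weighted price index
# can be pushed up by a few misreported trades. All numbers are made up.

def vwap(trades):
    """Volume-weighted average price of (price, volume) pairs."""
    total_volume = sum(v for _, v in trades)
    return sum(p * v for p, v in trades) / total_volume

honest_trades = [(5.00, 100), (5.10, 80), (4.95, 120)]        # $/MMBtu, volume
misreported   = honest_trades + [(7.50, 40), (8.00, 40)]      # a few inflated reports

print(round(vwap(honest_trades), 2))   # ~5.01
print(round(vwap(misreported), 2))     # ~5.58 -- a few reports move the "index" by over 10%
```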

The NASDAQ Veles index has many of the same attributes as the gas market had then, but perhaps with even fewer regulatory protections. It is not clear how a federal agency could compel public agencies, including the U.S. Bureau of Reclamation, to report and document prices. Oversight of transactions by water districts is widely dispersed and usually assigned to the local governing board.

Trying to introduce a useful mechanism to this market sounds like an attractive option, but the barriers that have impeded other market innovations may be too great.

ERCOT has the peak period scarcity price too high

The freeze and resulting rolling outages in Texas in February highlighted the unique structure of the power market there. Customers and businesses were left with huge bills that have little to do with actual generation expenses. This is a consequence of Texas’s attempt to fit an arcane interpretation of an economic principle under which generators should be able to recover their investments from sales in just a few hours of the year. The problem is that the basic accounting for those cash flows does not match the true value of the power in those hours.

The Electric Reliability Council of Texas (ERCOT) runs an unusual wholesale electricity market that supposedly relies solely on hourly energy prices to provide the incentive for new generation investment. In practice, however, ERCOT uses the same type of administratively set subsidies found in other markets to create enough potential revenue to cover investment costs. Further, a closer examination reveals that this price adder is set too high relative to the actual consumer value of peak load power. All of this leads to the conclusion that relying solely on short-run hourly prices as a proxy for the market value that accrues to new entrants is a misplaced metric.

Taken as a whole, the ERCOT market first relies on side payments to cover commitment costs (which creates barriers to entry, but that’s a separate issue) and second transfers consumer value through the Operating Reserve Demand Curve (ORDC), which uses a fixed value of lost load (VOLL) in an arbitrary manner to create “opportunity costs” (more on that definition at a later time) so that the market can have sufficient scarcity rents. This second price adder is at the core of ERCOT’s incentive system: energy prices alone are insufficient to support new generation investment. Yet ERCOT has ignored basic economics and set this value too high, given both the alternatives available to consumers and basic regional budget constraints.

I started with an estimate of the number of hours in which prices need the ORDC to be at the full VOLL of $9,000/MWH to recover the annual revenue requirement of a combustion turbine (CT) investment, based on the parameters we collected for the California Energy Commission. It turns out to be about 20 to 30 hours per year. Even if the cost in Texas is 30% less, this is still more than 15 hours annually, every single year or at least on average. (That has not been happening in Texas to date.) Note that for other independent system operators (ISOs), such as the California ISO (CAISO), the price cap is $1,000 to $2,000/MWH.
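
A back-of-envelope version of that screening calculation looks like the sketch below; the CT cost parameters are illustrative placeholders, not the CEC figures referenced above.

```python
# Back-of-envelope: how many hours at the $9,000/MWh price cap does a new
# combustion turbine need to recover its annual fixed costs? The CT cost
# parameters are illustrative assumptions, not the CEC figures cited in the post.

annual_fixed_cost = 190.0   # $/kW-year (annualized capital + fixed O&M), assumed
price_cap = 9000.0          # $/MWh ERCOT system-wide offer cap
variable_cost = 40.0        # $/MWh fuel + variable O&M, assumed

net_margin = (price_cap - variable_cost) / 1000.0   # $/kWh earned in a cap hour
hours_needed = annual_fixed_cost / net_margin

print(round(hours_needed, 1))  # ~21 hours/year at the cap, every year on average
```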

I then calculated the cost to a customer of instead using a home generator to meet load during those hours, assuming a generator life of 10 to 20 years. That cost should set a cap on the VOLL for residential customers, since it is their opportunity cost. The average unit costs about $200/kW and an expensive one about $500/kW. That works out to $3 to $5 per kWh, or $3,000 to $5,000/MWH. (If storage becomes more prevalent, this cost will drop significantly.) And that’s for customers who care about periodic outages; most just ride out a distribution system outage of a few hours with no backup. (Of course, if I experienced 20 hours of outages a year, I would get a generator too.) This calculation also ignores the added value of using the generator during other distribution system outages created by events like the hurricanes that hit Texas every few years. That drives the effective cost down even further, making the $9,000/MWH ORDC adder appear even more distorted.
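
Here is the same arithmetic for the generator cap, with the inputs labeled as assumptions (the post’s range of $200 to $500/kW and 10 to 20 years brackets these numbers):

```python
# Rough cap on residential VOLL implied by the cost of a home backup generator,
# spread over the hours it would actually substitute for grid power.
# All inputs are assumptions chosen from within the ranges in the post.

generator_cost = 500.0      # $/kW installed (an expensive unit; a basic one ~ $200/kW)
life_years = 10             # assumed service life
outage_hours_per_year = 15  # assumed hours/year the generator is actually needed

cost_per_kwh = generator_cost / (life_years * outage_hours_per_year)
print(round(cost_per_kwh * 1000))  # ~$3,333/MWh -- well below the $9,000/MWh adder
```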

The second calculation I did was to look at the cost of an extended outage, using the outages during Hurricane Harvey in 2017 as a benchmark event. Based on ERCOT and U.S. Energy Information Administration reports, it looks like 1.67 million customers were without power for 4.5 days. Using the Texas gross state product (GSP) of $1.9 trillion as reported by the St. Louis Federal Reserve Bank, I calculated the economic value lost over those 4.5 days, assuming a 100% loss, at $1.5 billion. If we assume that the electricity outage was entirely responsible for that loss, the lost economic value is just under $5,000/MWH. This represents the budget constraint on the willingness to pay to avoid an outage. In other words, the Texas economy can’t afford to pay $9,000/MWH.
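
The division step can be reproduced roughly as follows; the average load per affected customer is my assumption for estimating unserved energy, while the other figures come from the paragraph above.

```python
# Translating the Hurricane Harvey outage into an implied $/MWh budget constraint.
# The lost-value and outage figures come from the post; the average load per
# affected customer is an assumption used to estimate unserved energy.

lost_value = 1.5e9          # $ economic value lost over the outage (from the post)
customers_out = 1.67e6      # customers without power (from the post)
outage_hours = 4.5 * 24     # 4.5 days
avg_load_kw = 1.7           # assumed average load per affected customer, kW

unserved_mwh = customers_out * avg_load_kw * outage_hours / 1000.0
print(round(lost_value / unserved_mwh))  # ~$4,900/MWh -- "just under $5,000/MWH"
```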

The recent set of rolling blackouts in Texas provides another opportunity to update this budget constraint calculation in a different circumstance. This can be done by determining the reduction in electricity sales and the decrease in state gross product in the period.

Using two independent methods, I come up with an upper bound of $5,000/MWH, and likely much less. One commentator pointed out that ERCOT would not be able to achieve a sufficient planning reserve level at this price, but that statement is based on the premises that short-run hourly prices reflect full market values and will deliver the “optimal” resource mix. Neither is true.

This type of hourly pricing overemphasizes peak load reliability value and undervalues other attributes such as sustainability and resilience. These prices do not reflect the full incremental cost of adding new resources that deliver additional benefits during non-peak periods such as green energy, nor the true opportunity cost that is exercised when a generator is interconnected rather than during later operations. Texas has overbuilt its fossil-fueled generation thanks to this paradigm. It needs an external market based on long-run incremental costs to achieve the necessary environmental goals.

What is driving California’s high electricity prices?

This report by Next10 and the University of California Energy Institute was prepared for the CPUC’s en banc hearing on February 24. The report compares average electricity rates against those of other states, and against an estimate of “marginal costs.” (The latter estimate is too low, but appears to rely mostly on the E3 Avoided Cost Calculator.) It shows those rates to be multiples of the marginal costs. (PG&E’s General Rate Case workpapers calculate that its rates are about double the marginal costs estimated in that proceeding.) The study attempts to list the reasons why the authors think these rates are too high, but it misses the real drivers of these rate increases. It also uses an incorrect method for calculating the market value of acquisitions and deferred investments, using the current market value instead of the value at the time the decisions were made.

We can explore the reasons why PG&E’s rates are so high; much of this is applicable to the other two utilities as well. Starting with generation costs, PG&E’s portfolio mismanagement cannot be explained away with a simple assertion that the utility bought when prices were higher. In fact, PG&E failed in several ways.

First, PG&E knew about the risk of customer exit as early as 2010, as revealed during the PCIA rulemaking hearings in 2018. Yet PG&E continued to procure as though it would be serving its entire service area instead of planning for the rise of CCAs. PG&E was also told as early as 2010 (in my GRC testimony) that it was consistently forecasting load too high, but it didn’t bother to correct the error. In fact, service area load is basically at the same level it was a decade ago.

Second, PG&E could have procured in stages rather than in the two large rounds of requests for offers (RFOs) that it finished by 2013. By 2011 PG&E should have realized that solar costs were dropping quickly (if it had read the CEC Cost of Generation Report that I managed) and rolled out the RFOs in a manner that took advantage of that improvement. Further, it could have signed PPAs for the minimum period under state law of 10 years rather than the industry-standard 30 years. PG&E was managing its portfolio in the standard-practice manner, which was foolish in the face of what was occurring.

Third, until 2018 PG&E failed to offer any part of its portfolio for sale to CCAs as they departed. Instead, PG&E could have unloaded its expensive portfolio in stages starting in 2010. The ease of the recent RPS sales illustrates that PG&E’s claims about creditworthiness and other problems had no foundation.

I calculated what the cost of PG&E’s mismanagement has been here. While SCE and SDG&E have not faced the same degree of exit to CCAs, the same basic problems exist in their portfolios.

Another factor for PG&E is that ratepayers have paid twice for Diablo Canyon. I explain here how PG&E fully recovered its initial investment costs by 1998 but, as part of restructuring, got to roll most of those costs back into rates. Fortunately, these units retire by 2025 and rates will go down substantially as a result.

On distribution costs, both PG&E and SCE have requested over $2 billion for “new growth” in each of their GRCs since 2009, despite my testimony showing that the growth was not going to materialize, and it did not materialize. If the growth were arising from new developments, the developers and new customers should have been paying for those additions through the line extension rules that assign that cost responsibility. The utilities’ distribution planning process is opaque. When asked for the workpapers underlying the planning process, both PG&E and SCE responded that the entirety was contained in the Word tables in their testimonies. The growth projections had not been reconciled with the system load forecasts until this latest GRC, so the totals of the individual planning units exceeded the projected total system growth (which was itself too high when compared to both other internal growth projections and realized growth). The result is a gross overinvestment in distribution infrastructure, with substantial overcapacity in many places.

For transmission, the true incremental cost has not been fully reported which means that other cost-effective solutions, including smaller and closer renewables, have been ignored. Transmission rates have more than doubled over the last decade as a result.

The Next10 report does not appear to reflect the full value of public purpose program spending on energy efficiency, in large part because it uses a short-run estimate of marginal costs. The report similarly underestimates the value of behind-the-meter solar rooftops. The correct method for both is to use the market value of the deferred resources (generation, transmission and distribution) at the time those resources were added. For example, a solar rooftop installed in 2013 displaced utility-scale renewables that cost more than $100 per megawatt-hour. It should not be compared to the current market value of less than $60 per megawatt-hour, because that investment was not made on a speculative basis; it was effectively a contract based on embedded utility costs.
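
A simple worked example, using the $100 and $60 per megawatt-hour figures above and an assumed system size and output:

```python
# Valuing a 2013 rooftop installation at the avoided cost in force when the
# decision was made versus today's market price. The $/MWh figures come from
# the post; the system size and annual output are assumptions for illustration.

system_kw = 5.0
annual_mwh = system_kw * 1.5      # ~1,500 kWh per kW-year, an assumed output
value_2013 = annual_mwh * 100.0   # valued at the >$100/MWh displaced in 2013
value_today = annual_mwh * 60.0   # valued at today's <$60/MWh market price

print(value_2013, value_today)    # $750/yr vs $450/yr -- the 2013 figure is the right benchmark
```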

Drawing too many conclusions about electric vehicles from an obsolete data set

The Energy Institute at Haas at the University of California published a study allegedly showing that electric vehicles are driven only about one-third as much as the average standard car in California. I posted a response on the blog.

Catherine Wolfram writes, “But, we do not see any detectable changes in our results from 2014 to 2017, and some of the same factors were at play over this time period. This makes us think that newer data might not be dramatically different, but we don’t know.”

A study like this likely delivers a biased estimate of future EV use. Its timing reminds me of trying to analyze cell phone use in the mid-2000s: household landlines are now largely obsolete, and we use phones even more than we did then. The period used for the analysis (2014 to 2017) was one of dramatic change, more akin to the evolution of solar panels just before and after 2010, before panels were ubiquitous. We can see this evolution here, for example: comparing the Nissan Leaf, the range increased 50% between the 2018 and 2021 models.

The primary reason this data set shows such low mileage is that the vast majority of the households in the survey almost certainly also have a standard ICE vehicle that they use for their extended trips. There were few or no remote fast-charge stations during that period, and even Teslas had limited range in comparison. In addition, it’s almost certain that EV ownership was concentrated in urban households that have comparatively low VMT. (Otherwise, why do studies show that these same neighborhoods have low GHG emissions on average?) Only about one-third of VMT is associated with commuting, another third with errands and tasks, and a third with travel. There were few if any SUV EVs that would be more likely to be used for errands, and EVs were smaller vehicles until recently.

As for co-purchased solar panels, earlier studies found that 40% or more of EV owners have solar panels, and solar rooftop penetration has grown faster than EV adoption since those studies were done.

I’m also not sure that the paper has fully captured workplace and parking-structure charging. The logistical challenges of gaining LCFS credits could be substantial enough that employers and municipalities do not bother. This assumption requires a closer analysis of which entities are actually claiming these credits.

A necessary refinement is to compare this data to the typical VMT for these types of households, and to compare mileage across model types. Smaller commuter models average lower annual VMT, according to the California Energy Commission’s vehicle VMT data set derived from the DMV registration file and the Air Resources Board’s EMFAC model. The Energy Institute analysis arrives at the same findings that EV studies in the mid-1990s found with less robust technology. That should be a flag that something is amiss in the results.

How to increase renewables? Change the PCIA

California is pushing for an increase in renewable generation to power its electrification of buildings and the transportation sector. Yet the state maintains a policy that will impede reaching that goal–the power cost indifference adjustment (PCIA) rate discourages the rapidly growing community choice aggregators (CCAs) from investing directly in new renewable generation.

As I wrote recently, California’s PCIA rate charged as an exit fee on departed customers is distorting the electricity markets in a way that increases the risk of another energy crisis similar to the debacle in 2000 to 2001. An analysis of the California Independent System Operator markets shows that market manipulations similar to those that created that crisis likely led to the rolling blackouts last August. Unfortunately, the state’s energy agencies have chosen to look elsewhere for causes.

An even bigger problem for reaching clean energy goals is created by the current structure of the PCIA. The PCIA varies inversely with market prices: as market prices rise, the PCIA charged to CCAs and direct access (DA) customers falls. For these customers, the overall retail rate is largely hedged against variation and risk through this inverse relationship.

The portfolios of the incumbent utilities, i.e., Pacific Gas and Electric, Southern California Edison and San Diego Gas and Electric, are dominated by long-term contracts with renewables and capital-intensive utility-owned generation. For example, PG&E is paying a risk premium of nearly 2 cents per kilowatt-hour for its investment in these resources. These portfolios are largely impervious to market price swings now, but at a significant cost. That hedge is passed along through the PCIA to CCAs and DA customers, which discourages those customers from making their own long-term investments. (I wrote earlier about how this mechanism discouraged investment in new capacity for reliability purposes to provide resource adequacy.)
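
A stylized numerical sketch of that inverse relationship (all portfolio figures are hypothetical) shows why the all-in rate for a departed customer barely moves as market prices swing:

```python
# Stylized illustration of the PCIA's built-in hedge: as the market price rises,
# the indifference adjustment falls, so a departed customer's all-in rate barely
# moves. Portfolio cost and load figures are hypothetical.

def pcia_per_kwh(portfolio_cost, portfolio_mwh, market_price):
    """Above-market cost of the legacy portfolio, spread over its output ($/kWh)."""
    above_market = portfolio_cost - market_price * portfolio_mwh
    return max(above_market, 0.0) / (portfolio_mwh * 1000.0)

portfolio_cost = 90e6   # $ total annual cost of legacy contracts (hypothetical)
portfolio_mwh = 1e6     # MWh under contract (hypothetical)

for market_price in (30.0, 50.0, 70.0):    # $/MWh
    pcia = pcia_per_kwh(portfolio_cost, portfolio_mwh, market_price)
    cca_energy = market_price / 1000.0     # what the CCA pays for energy, $/kWh
    print(market_price, round(pcia, 3), round(pcia + cca_energy, 3))
# The all-in figure (PCIA + market energy) stays ~ $0.09/kWh at every market price.
```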

The legacy utilities are not in a position to acquire new renewables; they are forecasting falling loads and shrinking customer bases as CCAs grow. So the state cannot look to those utilities to meet California’s ambitious goals; it must entrust CCAs with that task. The CCAs are already game, with many of them offering much more aggressive “green power” options to their customers than PG&E, SCE or SDG&E.

But CCAs place themselves at greater financial risk under the current rules if they sign more long-term contracts. If market prices fall, they must bear the risk of overpaying for both the legacy utility’s portfolio and their own.

The best solution is to offer CCAs the opportunity to make a fixed or lump sum exit fee payment based on the market value of the legacy utility’s portfolio at the moment of departure. This would untie the PCIA from variations in the future market prices and CCAs would then be constructing a portfolio that hedges their own risks rather than relying on the implicit hedge embedded in the legacy utility’s portfolio. The legacy utilities also would have to manage their bundled customers’ portfolio without relying on the cross subsidy from departed customers to mitigate that risk.
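
One way to sketch such a lump-sum payment, with entirely hypothetical inputs, is as the present value of the departing load’s share of the portfolio’s above-market costs at the moment of departure:

```python
# Sketch of a one-time exit fee fixed at departure: the departing load's share
# of the portfolio's above-market cost, discounted over the remaining contract
# term. All inputs, including the discount rate, are hypothetical.

def lump_sum_exit_fee(above_market_per_year, remaining_years, share_of_load, discount_rate=0.05):
    """Present value of the departing customers' share of above-market costs."""
    pv = sum(above_market_per_year / (1 + discount_rate) ** t
             for t in range(1, remaining_years + 1))
    return pv * share_of_load

# $40M/year of above-market cost, 10 years left, a CCA taking 20% of the load:
print(round(lump_sum_exit_fee(40e6, 10, 0.20) / 1e6, 1))  # ~$61.8M, fixed at departure
```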

The PCIA is heading California toward another energy crisis

The California ISO Department of Market Monitoring (DMM), in its comments to the CPUC on proposals to address the resource adequacy shortages behind last August’s rolling blackouts, notes that the number of fixed-price contracts is decreasing. In DMM’s opinion, this leaves California’s market exposed to the potential for greater market manipulation. The diminishing tolling agreements and longer-term contracts that DMM observes are the result of the structure of the power cost indifference adjustment (PCIA), or “exit fee,” for departed community choice aggregation (CCA) and direct access (DA) customers. The IOUs are left shedding contracts as their loads fall.

The PCIA is pegged to short-run market prices (even more so with the true-up feature added in 2019). For CCAs, the PCIA mechanism works as a price hedge against short-term market values and suppresses the incentive for long-term contracts. This discourages CCAs from signing long-term agreements with renewables.

The PCIA acts as an almost perfect hedge on the retail price for departed-load customers because an increase in CAISO and capacity market prices leads to a commensurate decrease in the PCIA, so the overall retail rate remains the same regardless of where the market moves. The IOUs are all so long on their resources that market price variation has a relatively small impact on their overall rates.

This situation is almost identical to the competition transition charge (CTC) implemented during restructuring starting in 1998. Then, too, energy service providers (ESPs) had little incentive to hedge their portfolios because the CTC was tied directly to the CAISO/PX prices, so the CTC moved inversely with market prices. Only when CAISO prices exceeded the average cost of the IOUs’ portfolios did the high prices become a problem for ESPs and their customers.

As in 1998, the solution is a fixed, upfront exit fee paid by departing customers that is not tied to variations in future market prices. (Commissioner Jesse Knight’s proposal along these lines was rejected by the other commissioners.) Load-serving entities (LSEs) would then be left to hedge their own portfolios on their own terms. That will lead to LSEs signing more long-term agreements of various kinds.

The alternative of forcing CCAs and ESPs to sign fixed-price contracts under the current PCIA structure makes them bear the risk burden of both departed and bundled customers, while the IOUs are able to pass the risks of their long-term agreements through the PCIA.

California would be well served by the DMM pointing out this inherent structural problem. We should learn from our previous errors.

Advanced power system modeling need not mean more complex modeling

A recent article by E3 and Form Energy in Utility Dive calls for more granular temporal modeling of the electric power system to better capture the constraints of a fully renewable portfolio and the requirements for supporting technologies such as storage. The authors have identified the correct problem: most current models use a “typical week” of loads that is an average of historic conditions, along with probabilistic representations of unit availability. This approach fails to capture the “tail” conditions where renewables and currently available storage are likely to be insufficient.

But the answer is not a full-blown hour-by-hour model of the entire year with permutations over the many possibilities. These system production simulation models already take too long to run a single scenario due to the complexity of this giant “transmission machine.” Adding the required uncertainty would cause these models to run “in real time,” as some modelers describe it.

Instead, a separate analysis should first identify the conditions under which renewables plus current-technology storage are unlikely to meet demand. These include droughts that limit hydropower, extreme weather, and extended weather patterns that limit renewable production. These conditions can then be input into the current models to assess how the system responds.
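
As a minimal sketch of that screening step, assuming an hourly data set with load and renewable output columns (the column names, storage figure and window are my assumptions, not from the article):

```python
# Minimal sketch of the screening step: scan a multi-year hourly record for
# stretches where renewables plus storage fall short of load, then hand only
# those periods to the detailed production simulation. Column names, the
# storage quantity and the window length are assumptions about the data set.

import pandas as pd

def find_shortfall_periods(df, storage_mwh, window_hours=72):
    """Flag hours where the rolling renewable shortfall exceeds available storage."""
    deficit = (df["load_mw"] - df["renewables_mw"]).clip(lower=0.0)   # hourly MW ~ MWh
    rolling_deficit = deficit.rolling(window_hours, min_periods=1).sum()
    return df[rolling_deficit > storage_mwh]

# Usage (assuming an hourly file with 'load_mw' and 'renewables_mw' columns):
# hours = pd.read_csv("hourly_system_data.csv", parse_dates=["timestamp"])
# stressed = find_shortfall_periods(hours, storage_mwh=20_000)
# ...feed the stressed periods into the production simulation model.
```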

The two important fixes, which have always been problems in these models, are to energy-limited resources and unit commitment algorithms. Both are complex problems, and these models have not done well at scheduling seasonal hydropower pondage storage or at deciding which units to commit to meet high demand several days ahead. (These problems are also why relying solely on hourly bulk power pricing doesn’t give an accurate measure of the true market value of a resource.) But focusing on these two problems is much easier than trying to incorporate the full range of uncertainty across all 8,760 hours for at least a decade into the future.

We should not confuse precision with accuracy. The current models can be quite precise on specific metrics, such as unit efficiency at different load points, but they can be inaccurate because they don’t capture the effect of load and fuel price variations. We should not be trying to achieve spurious precision through more granular modeling; we should be focusing on accuracy in the narrow situations that matter.

“What are public benefits of conveyance?” presented to the California Water Commission

Maven’s Notebook posted a summary of presentations to the California Water Commission by Richard McCann of M.Cubed, Steve Hatchett of Era Economics, and David Sunding of the Brattle Group. Many of my slides are included.

The Commission is developing a framework that could be used to identify how shares of conveyance costs might be funded by the State of California. The Commission previously awarded almost $3 billion in bond financing for a dozen projects under the Proposition 1 Water Storage Investment Program (WSIP). That process used a prescribed method, including a Technical Guide, that determined the public benefits eligible for financing by the state. M.Cubed supported the application by Irvine Ranch Water District and Rio Bravo-Rosedale Water Storage District for the Kern Fan water bank.

Vegetation maintenance: the new “CFL” for wildfire management

PG&E has been aggressively cutting down trees as part of its attempt to mitigate wildfire risk, but those efforts may be creating their own risks. PG&E has previously been accused of focusing on numeric targets rather than effective vegetation management. This situation is reminiscent of how the utilities pursued energy efficiency prior to 2013, with a seemingly single-minded focus on compact fluorescent lights (CFLs). That focus did not end well, leading to both environmental degradation and unearned incentives for utilities.

CFLs represented about 20% of residential energy efficiency program spending in 2009. CFLs were easy for the utilities: they simply delivered steeply discounted, or even free, CFLs to stores and got to count each bulb as “energy savings.” By 2013, the CPUC ordered the utilities to ramp down spending on CFLs as a new cost-effective technology (LEDs) emerged and the problem of disposing of the mercury in CFLs became apparent. More importantly, it turned out that many CFLs were just sitting in closets, creating far fewer savings than estimated. (It didn’t help that CFLs turned out to have a much shorter life than initially estimated as well.) Even so, the utilities were able to claim incentives from the California Public Utilities Commission. Ultimately, it became apparent that CFLs were largely a mistake in the state’s energy efficiency portfolio.

Vegetation management seems to be the same “easy number counting” solution that the utilities, particularly PG&E, have adopted. The adverse consequences will be significant, and it won’t solve the problem in the long run. Its one advantage is that it allows the utilities to maintain their status quo position at the center of the utility network.

Other alternatives include system hardening, such as undergrounding, or building microgrids in rural communities that allow utilities to deenergize the grid while maintaining local power. The latter option appears to be the most cost-effective solution, but it is also the most threatening to the incumbent utility’s current position because it gives customers more independence.

Davis, like many communities, needs a long-term vision

The Davis Vanguard published an article about the need to set out a vision for where the City of Davis wants to go if we want to have a coherent set of residential and commercial development decisions:

How do we continue to provide high quality of life for the residents of Davis, as the city on the one hand faces fiscal shortfalls and on the other hand continues to price the middle class and middle tier out of this community? A big problem that we have not addressed is the lack of any long term community vision. 

The article set out a series of questions that focused on assumptions and solutions. But we should not start the conversation by choosing a growth rate and then picking a set of projects that fit that projection.

We need to start with asking a set of questions that derive from the thesis of the article:

  • What is the composition that we want of this community? What type of diversity? How do we accommodate students? What are the ranges of statewide population growth that we need to plan for?
  • To achieve that community composition, what is the range of target housing prices? Given the projected UCD enrollment targets (which are basically out of our control), how much additional housing is needed under different scenarios of additional on-campus housing?
  • What is the jobs mix that supports that community composition under different scenarios? What’s the job mix that minimizes commuting and associated GHG emissions?
  • What’s the mix of businesses, jobs and housing that moves toward fiscal stability for the City in these scenarios?
  • Then, in the end, we arrive at a set of preferred growth rates that are appropriate for the scenarios that we’ve constructed. We can then develop our general plan to accommodate these preferred scenarios.

My wife and I put forward one vision for Davis: to focus on sustainable food development as an economic engine. I’m sure there are other viable ideas. We need a forum that dives into these and formulates our economic plan, rather than just bumbling along as we seem to be doing now. This is only likely to get worse with the fundamental changes coming after the pandemic.

I’ll go further and say that one of the roots of this problem is the increasing opaqueness of City decision making. “Playing it safe” is the byword for City planning, just when that is what is most likely to hurt us. That’s why we proposed a fix to the fundamental way decisions are made by the City.

There’s a long list of poor decisions created by this opaqueness, showing how it has cost the City tens of millions of dollars. The article points out symptoms of a much deeper problem that is impeding us from developing a long-term vision.

It may seem like so much “inside baseball” to focus on the nuts and bolts of process, but it’s that process that is at the root of the crisis, as boring as that may seem.