Capacity Accreditation: An Introduction
Part 1 of our Multi-Part Blog Series on Capacity Accreditation and ELCC
Written by: Eric Pinsker-Smith, Sr. Analyst; Erin Smith, Principal Analyst; John Keene, Sr. Director; and Raghu Palavadi Naga, Director
Publish Date: April 6, 2022
Estimated Reading Time: 7 Minutes
As states in the northeastern United States aim to decarbonize their electrical grids in the coming decades, grid operators face the challenge of efficiently integrating variable, renewable resources into the electricity mix. One key question is how to value those resources in capacity markets. Capacity accreditation is the practice of measuring and valuing a given resource’s contribution to maintaining resource adequacy, and it is a critical part of the market design required to integrate renewable resources while ensuring system reliability and minimizing ratepayer impacts. As electrical grids evolve to incorporate renewable resources, the methods of capacity accreditation they employ must also evolve.
While the energy from renewable resources is essential to the clean energy transition, grid operators have been challenged to consider their reliability contributions and reflect them appropriately in capacity markets. Failure to do so would lead to a misalignment between capacity market prices and the reliability needs of the system; as a result, (a) grid reliability could be undermined, and (b) the consumer cost of the energy transition could increase. Generally, current capacity accreditation techniques rely on a variety of simplifications that may no longer be justified. For instance, the current method for accrediting renewables in the NYISO relies on a resource’s historical performance over a specified peak period and does not account for any correlation between the outputs of different resources. As renewable energy penetration increases, ignoring the correlation between a resource’s output and the output of other operating resources could result in overestimating the capacity values of both renewable and conventional resources. The reliability value of an individual renewable resource tends to decline as the penetration of that resource type increases. Similarly, the reliability value of gas-only generators in a winter-peaking system, particularly when a subset of those generators depends on the same pipeline, is likely to be lower than their current accredited levels. For these reasons, several RTOs and ISOs around the country are exploring various approaches to capacity accreditation.
Although there are several approaches to capacity accreditation, one in particular, the effective load carrying capability (ELCC), has received increasing attention of late. ELCC is typically expressed as a percentage of a resource’s nameplate capacity and is a measure of a resource’s contribution to grid reliability during periods of heightened risk of load shedding. Despite its recent popularization, ELCC is not a new concept; it has been around for decades and has historically been used for measuring resource adequacy in an integrated resource planning context. ELCC, as it pertains to capacity accreditation, would determine the amount of capacity a resource can sell in the capacity market. For example, if a 50 MW wind generator has an ELCC of 25%, it is deemed to contribute 12.5 MW toward the grid’s capacity requirement and can sell 12.5 MW in the capacity market.
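As a minimal sketch of that arithmetic (in Python, using the example’s figures for a hypothetical wind unit), the capacity a resource can offer into the market is simply its nameplate capacity scaled by its ELCC:

```python
# Minimal illustration with hypothetical numbers from the example above.
nameplate_mw = 50.0   # wind generator nameplate capacity (MW)
elcc_fraction = 0.25  # ELCC expressed as a fraction of nameplate

# Capacity the resource is deemed able to sell in the capacity market.
capacity_value_mw = nameplate_mw * elcc_fraction
print(f"Sellable capacity: {capacity_value_mw:.1f} MW")  # 12.5 MW
```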
The ELCC of a resource is determined by the timing of i) when a resource can generate electricity and ii) when the grid is most likely to need additional generating capacity. The variable and weather-dependent nature of many renewable generators can thus lead to those generators having different ELCC values across different regions and seasons. Additionally, because the timing of electricity shortfalls on a grid depends on the types of generators it relies upon, the quantity of a given resource type affects that resource’s own ELCC. For example, a grid that has an abundance of solar resources is likely to need evening and nighttime capacity more than it needs daytime capacity. Thus, adding more solar resources to the grid could actually lower the ELCC of all solar generators, while likely increasing the ELCC of resources that can generate at night (such as wind or storage resources).
The process of estimating a resource’s ELCC can be computationally intensive: at a high level, it involves (a) estimating a system’s initial loss-of-load expectation (LOLE), (b) adding an incremental unit of capacity of a certain resource type to the grid, and then (c) removing “perfect capacity” (capacity that is dispatchable and always available for whatever duration a reliability event requires) until the system’s LOLE returns to the level observed in (a). The quantity of perfect capacity removed in (c) is the ELCC-based capacity value of the resource that was incrementally added in (b).
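The sketch below illustrates that three-step procedure under simplifying assumptions; `simulate_lole` is a hypothetical placeholder for the probabilistic reliability model a grid operator would actually run, and the one-MW stepping is purely illustrative:

```python
def estimate_elcc(simulate_lole, resource_nameplate_mw, step_mw=1.0):
    """Stylized ELCC search (illustrative only). `simulate_lole` is a
    hypothetical reliability model returning the system's LOLE after
    removing `removed_perfect_mw` of perfect capacity and adding
    `resource_mw` of the resource being studied."""
    # (a) Baseline LOLE of the existing system, unchanged.
    baseline_lole = simulate_lole(removed_perfect_mw=0.0, resource_mw=0.0)

    # (b) Add the incremental resource, then (c) strip out perfect capacity
    # (always available, fully dispatchable) until LOLE rises back to baseline.
    removed_mw = 0.0
    while simulate_lole(removed_perfect_mw=removed_mw + step_mw,
                        resource_mw=resource_nameplate_mw) <= baseline_lole:
        removed_mw += step_mw

    # The perfect capacity removed is the resource's ELCC in MW; dividing by
    # nameplate gives the percentage typically quoted.
    return removed_mw / resource_nameplate_mw
```

In practice, each call to the reliability model involves simulating many load, weather, and outage scenarios, which is what makes the procedure computationally demanding rather than the simple search shown here.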
Most competitive electricity markets assign some form of capacity credit to each resource type; this is necessary to allow grid operators to see what resources are available to balance the grid, and to enable resources to be compensated for their reliability contributions. The prevailing methods for capacity accreditation, however, don’t typically measure interactions between non-firm resources. Thus, the fact that ELCC values reflect symbioses between generator types (and the lack thereof) makes ELCC a great tool for capacity accreditation in a system with high renewable penetration.
In this vein, there are two common ways of calculating the ELCC of a resource for the purpose of capacity accreditation. Average ELCC reflects the contribution of all resources of a specific type to system reliability. Under the average approach, a resource’s capacity value is calculated by dividing the aggregate ELCC of every unit of that resource type by the total MW of installed nameplate capacity of the resource type. Marginal ELCC, by contrast, reflects the incremental reliability value of the next MW of installed capacity of a given resource type. Under the marginal approach, the capacity value of a resource is calculated by dividing the individual unit’s ELCC by the MW of installed nameplate capacity of that unit.
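To make the distinction concrete, here is a small Python illustration with entirely hypothetical numbers for a solar fleet; it assumes the aggregate and incremental ELCC values have already been estimated by a reliability model:

```python
# Illustrative contrast (hypothetical numbers): average vs. marginal ELCC
# capacity credit for a solar fleet and an incremental solar unit.

fleet_nameplate_mw = 1000.0      # installed solar nameplate on the system
fleet_elcc_mw = 450.0            # aggregate ELCC of the entire solar fleet
incremental_nameplate_mw = 100.0 # size of the next solar unit to be added
incremental_elcc_mw = 20.0       # ELCC of that incremental unit

# Average approach: every solar MW is credited at the fleet-wide ratio.
average_capacity_credit = fleet_elcc_mw / fleet_nameplate_mw                  # 45%

# Marginal approach: the new unit is credited only its incremental value.
marginal_capacity_credit = incremental_elcc_mw / incremental_nameplate_mw     # 20%

print(f"Average ELCC credit:  {average_capacity_credit:.0%}")
print(f"Marginal ELCC credit: {marginal_capacity_credit:.0%}")
```

Because the marginal value falls as penetration rises, the two approaches can diverge substantially for a resource type that is already abundant on the system.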
The marginal ELCC approach to capacity accreditation is likely to compensate resources based on their incremental reliability value as penetration and the resource mix evolve. While this is a desirable feature of market design, this approach could also result in heightened capacity revenue volatility (relative to the average approach) over the project life, thus increasing the financial risk to investors. One key limitation of the average ELCC approach is that it must grapple with how to develop resource classes and allocate capacity credit. For instance, solar and battery storage can each enhance the other’s ELCC: solar energy can compress the evening peak into fewer hours (making it easier to meet with limited-duration storage), while energy storage can extend the delivery of solar energy into hours when the sun isn’t shining. Hence, the ELCC of the entire solar portfolio is higher when storage is present on the grid, and vice versa. Allocating the additional capacity value resulting from these beneficial interactive effects to different resource types necessarily involves subjective decision-making that can bias results.
Understanding these drivers and distinctions is critical, as the final methodology could have considerable impact on the capacity value of a given resource, and consequently the revenues it would realize from the capacity market. SEA will explore these potential revenue ramifications of different ELCC approaches in a forthcoming blog.
Regional grid operators are already working to reform their capacity accreditation approaches. NYISO, for instance, recently submitted its capacity accreditation filing with FERC in which it proposed using marginal ELCC values. It is also considering an alternative metric, the marginal reliability improvement (MRI) methodology (as opposed to ELCC), to value the capacity of different resource classes. NYISO indicated that relying on MRI would be less computationally complex and time-consuming than ELCC. Meanwhile, ISO-NE is engaging with stakeholders on suitable capacity accreditation approaches, including ELCC. ISO-NE currently treats each MW of qualified capacity as perfectly substitutable but recognizes that, in reality, the marginal contribution to resource adequacy of one resource may not be the same as that of another. ISO-NE is planning to present a capacity accreditation proposal to its stakeholders in late 2022 and anticipates filing with FERC by the end of 2023, in time for implementation in Forward Capacity Auction #19 (FCA 19) for the 2028-2029 Capacity Commitment Period (CCP).
Capacity accreditation is not going away anytime soon, and conversations about the merits of different capacity accreditation techniques are relevant now more than ever. Stay tuned for a forthcoming blog where we will discuss the various ELCC strategies that northeastern grid operators are deploying, updates from FERC (if any), and the effects of ELCC on investment signals and consumer costs.