To fully understand capacity factor, it is important to first understand the concepts of capacity and energy and the units associated with them.
So how do capacity and energy relate to capacity factor? Each generating unit has a rated capacity, also known as its maximum power rating. This quantity is the maximum power, in megawatts (MW), that the unit is designed to deliver to the grid. However, many units can be operated at levels well below their rated capacity. For example, an operator may have a unit rated at 300 MW but need only 200 MW at a given point in time. In that case, the unit is operated at 200 MW, even though it could produce 300 MW.
The amount of electricity put onto the grid over time is called energy, and it is determined by the unit's actual operating level multiplied by the amount of time the unit is run. This quantity is stated in megawatt-hours (MWh). For instance, if the 300 MW unit is run at 200 MW for two hours, it will have an output of 200 MW x 2 hours, or 400 MWh.
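To make the arithmetic concrete, here is a minimal Python sketch of the energy calculation, using the example above; the function and variable names are illustrative only.

```python
def energy_mwh(operating_level_mw: float, hours: float) -> float:
    """Energy delivered to the grid: actual operating level (MW) times run time (hours)."""
    return operating_level_mw * hours

# The 300 MW unit from the example, run at 200 MW for two hours:
print(energy_mwh(200, 2))  # 400 MWh
```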
The ratio of a unit’s actual output to its maximum possible output at its rated capacity is called the capacity factor. In the example of the 300 MW unit whose output was 400 MWh over two hours, the maximum possible output would have been 300 MW x 2 hours, or 600 MWh. The capacity factor is therefore 400 MWh divided by 600 MWh, or about 67%, for those two hours. Capacity factor is used to measure how fully a unit’s capacity is utilized.
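Building on the same example, here is a minimal sketch of the capacity factor calculation; again, the names are illustrative rather than from any standard library.

```python
def capacity_factor(actual_output_mwh: float, rated_capacity_mw: float, hours: float) -> float:
    """Ratio of actual output to the maximum possible output at rated capacity."""
    max_possible_mwh = rated_capacity_mw * hours  # e.g. 300 MW x 2 hours = 600 MWh
    return actual_output_mwh / max_possible_mwh

# 400 MWh of actual output from a 300 MW unit over two hours:
print(f"{capacity_factor(400, 300, 2):.0%}")  # 67%
```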
Capacity factors vary significantly by unit type. Based on Energy Information Administration (EIA) data for 2019, here are U.S. capacity factors by fuel type:
Fuel Type/Capacity Factor