Whenever somebody favors nuclear power, the big advantage they mention is the reliability of nuclear compared to renewables such as wind and photovoltaics. "Sun’s not always shining," they’ll say. Or, "sometimes the wind just doesn’t blow. That’s when you need nuclear, for that constant base-load."
Well, aside from the fact that nuclear costs at least twice as much per delivered kWh as most renewable and cogeneration options, the intermittency problem that nuclear proponents always ascribe to renewables is a myth.
It’s true that the sun doesn’t always shine (especially at night), and the wind doesn’t always blow, but intermittency is being overcome in the U.S. and Europe on a large, commercial scale every day (see below). When the sun’s not shining, the wind is often blowing. Peak loads usually occur during the day, when the sun is shining, so photovoltaic output matches demand nicely. To be sure, renewables incur “firming and integration costs,” but so does any source (in case you're not sure, firming refers to filling in for a lost source of energy, like a becalmed windmill, and is also referred to as regulating reserve capacity). The point to remember is that real-world, large-scale applications have proven that renewables are not only cleaner but cheaper than nuclear.
To any doubters, I suggest they read “The Nuclear Illusion” by Amory Lovins and Imran Sheikh of the Rocky Mountain Institute. It’s a convincing, credible, amply footnoted argument against nuclear power from an economic, least-cost perspective. Yes, economic. Not a bleeding heart, tree-hugging, liberal perspective. Economic.
Another, much briefer, but helpful article, "Estimating the impacts of wind power on power systems—summary of IEA Wind collaboration," points out that with wind meeting 20% of electricity demand, firming and integration costs are about 10% of the wholesale value of wind-generated electricity, or in dollar terms, in a 2004 Minnesota study, "a total integration cost of $4.60/MWh was found, where $0.23/MWh was due to increased regulation." That's $0.0046/kWh added to the average delivered cost of wind-generated electricity of under $0.07/kWh -- well under the "bus-bar" cost to consumers of nuclear power, which averages around $0.14/kWh with subsidies excluded.
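To make the unit conversion explicit, here's a quick sanity check of those figures. The dollar amounts are the ones quoted above; the side-by-side comparison is mine:

```python
# All dollar figures come from the studies quoted above; the comparison is mine.
integration_cost_per_mwh = 4.60   # $/MWh, total integration cost (2004 Minnesota study)
regulation_part_per_mwh = 0.23    # $/MWh, portion due to increased regulation

# 1 MWh = 1,000 kWh, so divide by 1,000 to convert $/MWh to $/kWh.
integration_cost_per_kwh = integration_cost_per_mwh / 1000   # = 0.0046

wind_delivered_cost = 0.07   # $/kWh, upper bound on average delivered wind cost
nuclear_busbar_cost = 0.14   # $/kWh, cited nuclear average, subsidies excluded

total_wind = wind_delivered_cost + integration_cost_per_kwh
print(f"Wind incl. firming: ${total_wind:.4f}/kWh vs nuclear ${nuclear_busbar_cost:.2f}/kWh")
```

Even with the firming cost added, wind stays at roughly half the cited nuclear figure.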
OK, you say, but that's with wind providing only 20% of demand. What about providing the other 80%? Well, not long ago naysayers claimed we would never even hit 20%, and in some areas we have. Some studies show that we could reduce our consumption by 60%. (Sounds mad, I know, but our power consumption is shamefully inefficient.) We can (and will) install more photovoltaics, geothermal, and biomass generators. On-site micropower and cogeneration will play larger roles -- economics will demand it. Lots of new solutions will arise if we let a truly free market guide us, keep misguided subsidies out of the equation, and factor in all of the costs -- including environmental and security -- when we consider our energy options.
I’ll quote a bit here from "The Nuclear Illusion" on reliability and intermittency of renewables (for readability, I took out the footnotes, but you can find them in the original linked source above). So, without further ado:
How does the competitors’ (Renewables & Cogeneration) reliability compare with nuclear power’s?
The nuclear industry’s central stated reason for omitting renewables, such as windpower (which accounts for nearly half the recent growth in decentralized renewables’ global capacity), from its list of admissible competitors with nuclear power is that windpower isn’t “24/7” or “reliable.”
Unlike some important sources of distributed renewable power—such as small hydro, geothermal, biofueled, and even much solar-thermal-electric generation—that can be dispatched whenever desired, windpower (and smaller but even faster-growing photovoltaics) do produce varying output depending on the weather. Yet this variability, often assumed to pose a fatal obstacle, becomes far less important in a renewable energy supply system using diverse technologies, because weather that’s bad for one source is good for another: stormy weather is generally good for windpower and hydro but bad for solar, while fine weather does the opposite. Diversifying locations helps too, because weather varies over areas that are often smaller than power grids. Technical reliability of single generating units is not the issue: modern wind turbines are ~98–99% available, far better than any thermal plant. The issue is rather the aggregate effect of some renewables’ variability. As we’ll now see, that effect is small.
The United Kingdom has 2.6% the land area, 7.7% the 2005 grid capacity, and 9.9% the 2005 electricity usage of the United States. A 34-year, >15-million-site-hour analysis of UK wind data found excellent properties for reliable windpower and even better ones for its contributions to diversified renewable power supply. A review of more than 180 European analyses through 2005 confirmed that windpower’s variability, even at penetrations of at least 20% for Europe, ~14% for Germany, or 30% for West Denmark, is manageable at modest cost if renewables are properly dispersed, diversified, forecasted, and integrated with the existing grid and with demand response. Not one of more than 200 international studies has found significant costs or technical barriers to reliably integrating large variable renewable supplies into the grid.
U.S. utilities increasingly agree: Lawrence Berkeley National Laboratory (LBL-58450) notes that 2014 resource plans include 20% wind for SDG&E and 15% for Nevada Power—neither near a limiting value. Nine recent U.S. studies found that integrating windpower providing up to 31% of regional peak demand on Western utilities’ grids would incur firming and integration costs of 0.04–0.5¢/kWh, or ~1–15% of U.S. windpower’s 3.7¢/kWh 1999–2006 average price — far too little to disturb windpower’s two- to threefold cost advantage over new nuclear. Some renewables’ variability does require attention and proper engineering, but it’s neither a serious issue nor unique to renewables: the grid is already designed for the sudden and unexpected loss of big blocks of capacity from transmission or central-plant outages. Whenever renewable penetration levels of supposed concern have been approached in practice, they’ve faded over the hazy theoretical horizon. For example, as the West Danish system operator gained experience with windpower, he became confidently able to manage nearly five times more windpower than he had thought possible 7–8 years earlier. This horizon also continues to recede as distributed intelligence gradually permeates the grid and as more diversified combinations of resources are simulated. Recent University of Kassel field experiments have confirmed that just integrated wind, photovoltaics, and biogas generation could reliably provide all German electricity.
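(A quick aside from me, not the report: the "~1–15%" range above follows directly from the quoted numbers, as a small calculation shows. The figures are the ones cited; the report rounds the upper end to 15%.)

```python
# Figures quoted above, all in cents/kWh: integration cost range for Western
# utilities, and U.S. windpower's 1999-2006 average price.
low_cost, high_cost = 0.04, 0.5
wind_price = 3.7

low_pct = 100 * low_cost / wind_price    # lower bound of the range
high_pct = 100 * high_cost / wind_price  # upper bound of the range
print(f"integration cost is {low_pct:.1f}%-{high_pct:.1f}% of the average wind price")
```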
Power grids inherently cope with highly variable supply and demand. Demand varies from moment to moment as customers turn loads on and off; sudden variations, e.g. during the ads in popular televised UK sporting events, can ramp demand so rapidly (due largely to large water pumps when millions of toilets flush simultaneously, but euphemistically blamed on electric kettles) that utilities are hard-pressed to maintain stable supplies. Demand often varies widely from day to night and from summer to winter. Utility planners understand all this and design for it. Yet there is no technical difference between variations in demand and in supply; they are entirely fungible, and indeed onsite generation can be usefully considered a negative load.
Calm winds or cloudy skies last up to a few days in decent sites, but can be offset by complementary renewables at the same sites or by any renewables at more distant sites. (The distance needed for very uncorrelated output depends on regional geography and weather patterns, but is typically many hundreds of km.) Yet whether a given solar roof, wind turbine, or wind farm is working at a given moment is about as irrelevant to the system operator as whether a particular big office building’s chillers are on or off.
Moreover, all sources of electricity are unreliable—to differing degrees, for differing reasons, with differing frequencies, durations, failure sizes, and predictabilities. Major grid failures occur during regional blackouts, ice storms, and other disruptions. Individual power plants also break down: the average U.S. fossil-fuel-fired plant is unexpectedly out of service ~8% of the time. Power systems are designed to cope with all this too. Yet size does matter. Even if all sizes of generators were equally reliable, a single one-million-kilowatt unit would not be as reliable as the sum of a thousand 1-MW units or a million 1-kW units. Rather, a portfolio of many smaller units is inherently more reliable than one large unit—both because it’s unlikely that many units will fail simultaneously, and because 98–99% of U.S. power failures originate in the grid, which distributed generation largely or wholly bypasses. Research is increasingly showing that if we properly diversify renewable energy supplies in type and location, forecast the weather (as hydropower and windpower operators now do), and integrate renewables with existing demand- and supply-side resources on the grid, then renewables’ electrical supplies will be more reliable than current arrangements. That is, such a renewable-based power system, even if solar and wind form a large fraction of supply, will generally need less storage and backup capacity than we’ve already installed and paid for to cope with the intermittency of large thermal stations—which fail unpredictably, for long periods, in billion-watt chunks.
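(Another aside from me: the many-small-units-beat-one-big-unit claim can be made concrete with a toy probability model. I assume independent forced outages at the ~8% rate quoted above for fossil plants — a simplification, since real outages can be correlated — and ask how often total available capacity falls below 90% of nameplate.)

```python
from math import comb

def p_capacity_below(n_units, unit_mw, outage_rate, threshold_mw):
    """Probability that total available capacity falls below threshold_mw,
    assuming each unit is independently forced out with probability outage_rate."""
    p_up = 1 - outage_rate
    return sum(comb(n_units, k) * p_up**k * (1 - p_up)**(n_units - k)
               for k in range(n_units + 1) if k * unit_mw < threshold_mw)

OUTAGE = 0.08  # ~8% forced-outage rate, the U.S. fossil-plant figure quoted above

# One 1,000 MW unit: capacity drops below 900 MW exactly when the unit is down.
p_single = p_capacity_below(1, 1000, OUTAGE, 900)

# A thousand 1 MW units: capacity drops below 900 MW only if >100 are down at once.
p_many = p_capacity_below(1000, 1, OUTAGE, 900)

print(f"single big unit: {p_single:.3f}; portfolio of 1000 small units: {p_many:.2e}")
```

The single unit misses the 900 MW threshold 8% of the time; the portfolio misses it roughly an order of magnitude less often, and the gap widens fast as the threshold drops.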
Though micropower’s unreliability is an unfounded myth, nuclear power’s unreliability is all too real. Nuclear plants are capital-intensive and run best at constant power levels, so operators go to great pains to avoid technical failures. These nonetheless occur occasionally, due to physical causes that tend to increase with age through corrosion, fatigue, and other wear and tear. Some nuclear power failures are major and persistent: of the 132 U.S. nuclear units that were built and licensed to operate (52% of the original 253 orders), 21% were permanently shut down because of intractable reliability or cost issues (or in one case a meltdown), while a further 27% have suffered one or more forced outages of at least a year. When the remaining units work well, their output is indeed commendably steady and dependable, lately averaging ~90% capacity factor in the United States. However, even these relatively successful nuclear plants also present four unique reliability issues:
- Routine refueling, usually coordinated with scheduled major maintenance, shuts down the typical U.S. nuclear plant for 37 days every 17 months.
- In both Europe and the United States, prolonged heat waves have shut down or derated multiple nuclear plants when their sources of cooling water got too hot.
- A major accident or terrorist attack at any nuclear plant could cause most or all others in the same country or even in the world to be shut down, much as all 17 of Tokyo Electric Power Company’s (TEPCO’s) nuclear units were shut down for checks in 2002–04 for many months, and some units for several years, after falsified safety data came to light. Natural disaster can also intervene: a 7-unit TEPCO nuclear complex, the largest in the world—outproduced only by the Itaipu and Three Gorges Dams, and supplying 6–7% of Japan’s power—was indefinitely shut down by 2006 damage from an earthquake stronger than its supposedly impossible design basis, and remains down in spring 2008. Its output is being replaced by recommissioned and hastily finished oil-, gas-, and coal-fired plants; the operator’s extra cost in FY2007 alone was ~$5.6 billion.
- Unlike scheduled outages, many nuclear units can also fail simultaneously and without warning in regional blackouts, which necessarily and instantly shut down nuclear plants for safety. But nuclear physics then makes restart slow and delicate: certain neutron-absorbing fission products must decay before there are enough surplus neutrons for stable operation. Thus at the start of the 14 August 2003 northeast North American blackout, nine U.S. nuclear units totaling 7,851 MW were running perfectly at 100% output, but after emergency shutdown, they took two weeks to restart fully. They achieved 0% output on the first day after the midafternoon blackout, 0.3% the second day, 5.7% the third, 38.4% the fourth, 55.2% the fifth, and 66.8% the sixth. The average capacity loss was 97.5% for three days, 62.5% for five days, 59.4% for seven days, and 53.2% for twelve days — hardly a reliable resource no matter how exemplary its normal operation. Canada’s restart was even rougher, with Toronto teetering for days on the brink of complete grid failure despite desperate appeals to turn everything off. This nuclear-physics characteristic of nuclear plants makes them “anti-peakers”—guaranteed unavailable when they’re most needed. The grid is designed to cope, and does cope, with such massive and prolonged central-station outages, albeit with difficulty and at considerable cost for reserve margin, spinning reserve (spare capacity—generally coal-fired—kept running and synchronized for instant use), and replacement energy. The investments needed to manage central-thermal-plant intermittence (nuclear or fossil-fueled) have already been made and paid for. It is therefore hard to understand why the occasional and predictable becalming of wind farms or clouding of solar cells over a much smaller time and space, offset by higher output from statistically complementary renewable resources of other kinds or in other locations, is a problem.
All generators—not just variable renewables—need reserves, backups, or storage to achieve a given level of reliability. It’s wrong to count these as a cost for variable renewables but not for intermittent thermal plants. Every source’s economics should duly reflect the amount of support it requires for the desired reliability of retail service. The economic comparisons offered above for windpower (Fig. 1) make generous provision for these storage and backup costs. In contrast, some other comparisons (even, astonishingly, one by the UK’s Royal Academy of Engineering) assume that any variable renewable resource needs 100% backup. That’s clearly wrong. Reliability is a statistical attribute of a power system, not an absolute attribute of a single unit, so on a statistical basis, wind and solar power do merit substantial “capacity credits” whose size depends on regional conditions. Grid operators care about the overall delivered-service reliability of a portfolio of technologically and geographically diversified units, integrated into a grid with diverse power resources and demand-response options, all appropriately forecasted (and optionally with storage, like the pumped-hydro storage units sometimes associated with nuclear units but seldom attributed to them as a cost, or the overnight heat storage built into some modern solar-thermal-electric plants). Thus a forecasted temporary shortage of, say, windpower is of concern to the grid operator only if it occurs at a time of maximum load and if no other resource is available. Already today, in wind-rich regions of North Germany, Spain, and Denmark, variable renewable power production exceeds regional demand, and annually provides 20–39% of all electricity, with neither integration problems nor significant integration costs.
As the European Wind Energy Association’s integration report stated in 2005, “[L]arger-scale integration of wind [power] does face barriers; not because of its variability but because of a series of market barriers in electricity markets that are neither free [n]or fair, coupled with a classic case of new technologies threatening old paradigm thinking and practice.”