Data Centers and the Grid: An Electrical Engineer's Assessment of the Most Consequential Load Growth in a Generation

The numbers are, by any historical standard, extraordinary. U.S. data center demand for grid power is projected to rise from roughly 50 gigawatts in 2024 to more than 134 gigawatts by 2030. Worldwide, data center electricity consumption is forecast by Gartner to nearly double from 448 terawatt-hours in 2025 to 980 terawatt-hours by 2030. Goldman Sachs has estimated that approximately $720 billion will need to be invested in grid upgrades through the end of the decade to support this growth.
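The growth rates implied by these forecasts can be checked with simple arithmetic. The sketch below computes the compound annual growth rate for each cited projection; the start and end figures come from the forecasts above, while the calculation itself is standard arithmetic.

```python
# Compound annual growth rate (CAGR) implied by the cited forecasts.
# The input figures are taken from the forecasts quoted above; the
# CAGR formula itself is standard, not a sourced number.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over `years` periods."""
    return (end_value / start_value) ** (1 / years) - 1

# U.S. data center grid demand: ~50 GW (2024) -> ~134 GW (2030)
us_demand_growth = cagr(50, 134, 2030 - 2024)

# Global data center electricity use: 448 TWh (2025) -> 980 TWh (2030)
global_use_growth = cagr(448, 980, 2030 - 2025)

print(f"U.S. demand CAGR: {us_demand_growth:.1%}")   # roughly 18% per year
print(f"Global use CAGR:  {global_use_growth:.1%}")  # roughly 17% per year
```

Both forecasts imply sustained annual growth near 18 percent, a rate with few precedents in modern U.S. load history.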

These are not fringe projections from technology optimists. They represent the midpoint of a range of credible forecasts from organizations including the Department of Energy, the International Energy Agency, and major investment banks that have every financial incentive to get the numbers right. The underlying driver is not speculative. It is the observed and accelerating deployment of AI infrastructure, which requires enormous computational power and the electricity to run it.

As engineers who work on power infrastructure every day, we want to offer a clear-eyed assessment of what this means technically, what the constraints are, and what the realistic path forward looks like for facility operators, developers, and utilities navigating this environment.

The Engineering Reality of AI-Driven Load

Traditional enterprise data centers were designed around relatively modest and predictable power densities, typically in the range of 5 to 10 kilowatts per rack. The AI workloads driving today's hyperscale expansion operate at fundamentally different densities. GPU clusters for AI training and inference can require 50 to 100 kilowatts per rack or more, and leading-edge deployments are pushing toward 200 kilowatts per rack in some configurations.
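A quick arithmetic sketch shows what this density shift means for facility layout. The 100 MW load and the specific per-rack figures below are illustrative round numbers chosen from within the ranges above, not sourced specifics.

```python
# Illustrative rack-count arithmetic for a fixed IT load at different
# per-rack power densities. The 100 MW IT load and the per-rack figures
# are round numbers chosen for illustration.

IT_LOAD_KW = 100_000  # 100 MW of IT load

enterprise_racks = IT_LOAD_KW / 8    # ~8 kW/rack, traditional enterprise
ai_racks = IT_LOAD_KW / 100          # ~100 kW/rack, dense GPU clusters

print(f"Racks at 8 kW/rack:   {enterprise_racks:,.0f}")  # 12,500
print(f"Racks at 100 kW/rack: {ai_racks:,.0f}")          # 1,000
```

The same IT load fits in roughly one-twelfth the rack count, which is precisely why the distribution, protection, and cooling problems concentrate so sharply.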

This is not simply a quantity problem. It is a density problem that cascades through every major system in the facility. Power distribution architecture must be redesigned to deliver higher currents at higher densities while maintaining the redundancy standards required for mission-critical operations. Cooling systems, which can account for 40 percent of a data center's total energy consumption in less efficient designs, must evolve from traditional computer room air conditioning to liquid cooling approaches that can extract heat directly from the compute hardware. Electrical infrastructure from the utility service entrance through medium-voltage switchgear, transformers, UPS systems, and busway must be sized and configured for loads that would have seemed implausible in a data center context a decade ago.

AI-optimized servers are projected to represent 44 percent of total data center power usage by 2030, up from 21 percent today. Their electricity consumption is forecast to rise nearly fivefold in five years.

For the electrical engineer, the practical implications begin at the substation. A hyperscale AI data center campus can require 500 megawatts to a gigawatt of dedicated capacity. There are very few locations in the United States where that capacity exists in the transmission grid today. Securing it requires early engagement with transmission operators, often including negotiated agreements for grid upgrades that the data center developer may need to fund entirely or partially.

Interconnection Strategy and the Site Selection Calculus

Power availability has become the dominant site selection criterion for data center developers. Northern Virginia, which hosts approximately 12 gigawatts of data center demand in 2025, has experienced significant grid congestion that is reshaping where new capacity can locate within the region. Texas, the second-largest data center market, is experiencing similar dynamics: a market long considered highly attractive for its deregulated power environment now confronts the physical limits of its transmission network in its highest-growth areas.

For developers who cannot wait for utility-provided power within their commercial timelines, a new set of strategies has emerged. Bring-your-own-power approaches range from deploying mobile gas turbines to bridge the gap while permanent grid service is established, to developing long-term behind-the-meter generation partnerships with power producers. Data center developers representing approximately 30 percent of planned U.S. capacity have announced plans to power their operations with behind-the-meter resources, with 90 percent of those announcements coming in 2025 alone.

Nuclear energy is also experiencing a genuine renaissance in the data center context. Hyperscalers signed multiple power purchase agreements for nuclear capacity in 2025, including agreements for both operating facilities and facilities that had previously retired. The appeal is straightforward: nuclear provides carbon-free, firm, baseload power that is not dependent on weather conditions and does not require the land area of utility-scale solar or wind. Small modular reactors (SMRs), though not yet commercially deployed at scale, are being evaluated by data center developers as a potential long-term behind-the-meter option.

The Distribution and Substation Challenge

Much of the public discussion about data centers and energy focuses on generation adequacy and transmission constraints. In our experience, the distribution system often presents the most immediate bottleneck for getting large loads connected within a useful timeframe.

Utility distribution substations that serve data center-heavy corridors are experiencing transformer capacity constraints, protection system limits, and voltage regulation challenges that require significant capital investment and multi-year upgrade timelines. For a data center developer, securing a letter of intent from a utility for grid service is only the beginning. Understanding the specific substation constraints, the queue position for upgrades, and the realistic probability of energization within the commercial schedule requires detailed technical engagement with the serving utility, often supported by independent engineering analysis.

We regularly perform substation capacity assessments for clients that reveal significant gaps between what a utility's initial response suggests is available and what the system can actually deliver within a given timeframe. These gaps are not the result of bad faith. They reflect the genuine complexity of distribution system planning in an environment of unprecedented load growth. But identifying them early in the development process can save clients from committing to sites or timelines that the grid simply cannot support.

Efficiency as an Engineering Imperative

One dimension of the data center power challenge that does not always receive adequate attention in the infrastructure discussion is efficiency. While demand growth is undeniable, the rate of consumption growth is not fixed. Hardware efficiency improvements have historically been substantial. Modern AI processors are orders of magnitude more efficient than equivalent hardware from fifteen years ago, and this trend continues even as the absolute demand grows.

Cooling system efficiency is a particularly high-value target. Facilities that have moved from traditional air-based cooling to liquid cooling approaches, whether direct-to-chip or rear-door heat exchange systems, have reduced cooling's share of total facility power consumption substantially. Power Usage Effectiveness, the industry-standard metric for data center energy efficiency, has improved from an average of 2.5 in 2007 to approximately 1.55 in 2022 for modern facilities, and hyperscale operators are achieving values below 1.2 in optimized designs.

For engineers working on data center projects, the efficiency design decisions made at the infrastructure level have compounding effects over the facility lifecycle. A 10 percent reduction in PUE at a 100-megawatt facility represents 10 megawatts of load reduction, which translates directly into reduced infrastructure investment, lower operating costs, and reduced grid impact. Getting these design decisions right requires a level of engineering rigor that matches the scale of what is being built.
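The PUE arithmetic above can be made concrete. The sketch below uses the standard definition, PUE = total facility power divided by IT equipment power, with an illustrative 80 MW IT load chosen so the facility draws 100 MW total at a starting PUE of 1.25.

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# Illustrative figures: an 80 MW IT load at PUE 1.25 gives a 100 MW facility.

def total_power_mw(it_power_mw: float, pue: float) -> float:
    """Total facility draw for a given IT load and PUE."""
    return it_power_mw * pue

it_load = 80.0                  # MW of IT load (illustrative)
pue_before = 1.25               # 80 MW * 1.25 = 100 MW total
pue_after = pue_before * 0.9    # a 10 percent PUE reduction

before = total_power_mw(it_load, pue_before)  # 100.0 MW
after = total_power_mw(it_load, pue_after)    # 90.0 MW

print(f"Load reduction: {before - after:.1f} MW")  # 10.0 MW
```

The same IT output is delivered with 10 MW less grid demand, which is capacity that never has to be generated, transmitted, or paid for.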

The Grid Stakeholder Role

Perhaps the most significant shift in the relationship between data centers and the electrical grid is the emergence of data center operators as active grid stakeholders rather than passive consumers. The National Electrical Manufacturers Association has articulated a clear value proposition for utilities: data centers that are capable of islanding from the grid through on-site generation and storage can serve as demand response resources during peak periods, reducing stress on the system that their aggregate load would otherwise create.

This represents a genuine alignment of interests. Utilities need load flexibility to manage a grid that is simultaneously absorbing large amounts of variable renewable generation and serving unprecedented peak demands. Data centers need reliable power and, increasingly, the ability to operate independently when grid conditions deteriorate. A well-designed behind-the-meter system that can island, participate in demand response programs, and provide grid support services during normal operations is not just a resilience investment for the facility operator. It is an infrastructure asset for the broader system.
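The decision logic behind such a system can be sketched at a high level. The function below is a simplified, hypothetical decision rule: the mode names and the single capacity-versus-critical-load test are illustrative assumptions, not drawn from any standard or from a real controller, which would also weigh energy prices, battery state of charge, ride-through ratings, and utility program rules.

```python
# Simplified, hypothetical mode selection for a behind-the-meter system
# capable of islanding from the grid. Mode names and decision criteria
# are illustrative only.

def select_mode(grid_available: bool, demand_response_event: bool,
                onsite_capacity_mw: float, critical_load_mw: float) -> str:
    if not grid_available:
        # Grid outage: island if on-site generation and storage can carry
        # the critical load; otherwise island and shed non-critical load.
        if onsite_capacity_mw >= critical_load_mw:
            return "island"
        return "island-with-load-shed"
    if demand_response_event and onsite_capacity_mw >= critical_load_mw:
        # Peak period: drop grid draw by islanding, easing system stress.
        return "island"
    return "grid-connected"

print(select_mode(True, False, 60.0, 50.0))   # grid-connected
print(select_mode(True, True, 60.0, 50.0))    # island
print(select_mode(False, False, 40.0, 50.0))  # island-with-load-shed
```

Even this toy rule illustrates the alignment of interests: the same on-site capacity that protects the facility during an outage becomes a grid resource during a peak event.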

NEMA has been explicit in encouraging standardized electrical infrastructure frameworks for data centers to facilitate this grid partnership model. Moving from proprietary designs to more standardized approaches improves both the facility's ability to work with its serving utility and the broader system's ability to integrate large, flexible loads reliably.

What We Recommend to Clients

For data center developers and operators engaging with power infrastructure questions, our recommendations center on three principles. First, start early. Power availability analysis, interconnection strategy, and substation capacity assessments should begin at the site selection stage, not after a site has been committed. The cost of discovering a four-year interconnection timeline after a development agreement has been signed is far greater than the cost of thorough engineering diligence upfront.

Second, plan for the full range of scenarios. The most resilient power strategies combine utility grid service with on-site generation and storage in a configuration that can be optimized for cost during normal operations and can sustain critical loads during grid disruptions. Designing for only one scenario, whether assuming grid reliability or assuming grid unavailability, creates unnecessary risk.

Third, engage with utilities as partners rather than vendors. The utilities serving the highest-growth data center markets are under extraordinary pressure. Operators who bring technically rigorous interconnection applications, who engage constructively in discussions about load flexibility and demand response, and who design their facilities to be grid assets rather than grid burdens will find more receptive regulatory and utility environments than those who approach the relationship transactionally.

The data center industry is building the infrastructure that will define the next era of the American economy. Getting the power engineering right is not a secondary consideration. It is foundational.

Best Energy Consulting provides power availability studies, large-load interconnection strategy, substation engineering, microgrid design, and electrical infrastructure consulting for data center developers, hyperscalers, and colocation operators. Contact us at www.bestenergyconsulting.com.