Tracking the physical infrastructure buildout powering AI. Interconnection queues, hyperscaler capex, equipment bottlenecks, and commodity demand.
Quarterly capital expenditure ($ billions)
Total queue (GW) vs. data-center-related requests
DC-related capex (Q4 2025) and growth rates
Forward guidance range (shaded) vs reported capex
Source: Earnings call transcripts, SEC filings. Guidance ranges from CFO commentary. Null actuals = quarter not yet reported.
Higher ratio = more aggressive infrastructure buildout relative to revenue
Oracle's high ratio reflects its aggressive OCI expansion relative to its smaller revenue base. Apple's low ratio reflects diversified manufacturing capex.
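The capex/revenue intensity ratio above is a simple quotient of quarterly figures. A minimal sketch of the calculation, using hypothetical round numbers rather than any company's reported financials:

```python
# Illustrative sketch of the capex/revenue intensity ratio (not the
# dashboard's actual pipeline). Input values are hypothetical.

def capex_intensity(capex_b: float, revenue_b: float) -> float:
    """Quarterly capex as a percent of quarterly revenue."""
    return 100.0 * capex_b / revenue_b

def pct_change(current: float, prior: float) -> float:
    """QoQ or YoY growth, as a percent of the prior period."""
    return 100.0 * (current - prior) / prior

# e.g. $24B of capex against $80B of quarterly revenue:
print(f"{capex_intensity(24.0, 80.0):.1f}%")  # 30.0%
# e.g. capex growing from $16B a year ago to $24B now:
print(f"{pct_change(24.0, 16.0):+.1f}%")      # +50.0%
```

The same two functions reproduce every ratio and growth column in the table below from raw quarterly figures.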
Q4 2025
| Company | Capex ($B) | QoQ | YoY | DC % | Capex/Rev | Cumul. '24-'25 |
|---|---|---|---|---|---|---|
| Amazon (AMZN) | $38.0B | +11.1% | +68.1% | 72% | 17.8% | $198.5B |
| Alphabet (GOOGL) | $27.9B | +16.2% | +95.1% | 75% | 24.5% | $139.3B |
| Microsoft (MSFT) | $24.2B | +7.1% | +53.2% | 80% | 29.8% | $140.5B |
| Meta (META) | $22.1B | +13.9% | +110.5% | 85% | 36.9% | $105.4B |
| Oracle (ORCL) | $12.0B | +44.6% | +200.0% | 85% | 74.5% | $44.4B |
| Apple (AAPL) | $2.4B | -33.3% | -20.0% | 42% | 1.7% | $24.0B |
| ByteDance | — | — | — | — | — | $36.2B |
| xAI | — | — | — | — | — | $35.5B |
| OpenAI | — | — | — | — | — | $18.5B |
| Anthropic | — | — | — | — | — | $9.6B |
| Tesla | — | — | — | — | — | $22.4B |
| Total | $126.6B | | | | | $774.4B |
Actual + projected GW (AI vs traditional workloads)
Sources: EIA, Goldman Sachs, EPRI. Data through 2025 is actual. 2026+ are consensus projections.
Regional breakdown: North America, Europe, Asia (GW)
Large language model training requires multi-GW campuses
Enterprise cloud migration accelerating across all regions
5G and IoT driving distributed compute at network edge
DC semiconductor demand, shipments & leading-edge fab capacity
| Fab | Company | Node | Wafers/mo | DC Alloc. | Est. DC Wafers | Status |
|---|---|---|---|---|---|---|
| Fab 18 (Tainan) | TSMC | 3nm/4nm | 110,000 | 45% | 49,500 | operational |
| Fab 21 (Arizona P1) | TSMC | 4nm | 20,000 | 60% | 12,000 | ramping |
| Fab 21 (Arizona P2) | TSMC | 3nm | TBD | 70% | — | planned |
| Taylor, TX | Samsung | 2nm GAA | TBD | 30% | — | planned |
| Pyeongtaek S5 | Samsung | 3nm GAA | 30,000 | 25% | 7,500 | ramping |
| Ohio Fab 1 | Intel | Intel 18A | TBD | 40% | — | planned |
| Kumamoto (JASM) | TSMC | 12nm/28nm | 55,000 | 10% | 5,500 | ramping |
TSMC CoWoS advanced packaging is the primary bottleneck for NVIDIA GPU production. Each GB200 requires 2x the CoWoS area of an H100.
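The note above implies a direct throughput tradeoff: at fixed CoWoS wafer capacity, doubling the interposer area per package roughly halves package output. A back-of-envelope sketch, with all numbers illustrative assumptions rather than TSMC figures:

```python
# Back-of-envelope CoWoS throughput sketch. If a GB200 consumes ~2x the
# CoWoS interposer area of an H100, fixed packaging capacity yields
# roughly half as many packages. All constants below are illustrative
# assumptions, not disclosed TSMC data.

def packages_per_month(cowos_wafers: int, packages_per_wafer: float) -> float:
    return cowos_wafers * packages_per_wafer

H100_PER_WAFER = 29.0                  # assumed packages per CoWoS wafer
GB200_PER_WAFER = H100_PER_WAFER / 2   # ~2x area -> ~half the packages

wafers = 10_000  # assumed monthly CoWoS wafer capacity
print(packages_per_month(wafers, H100_PER_WAFER))   # 290000.0
print(packages_per_month(wafers, GB200_PER_WAFER))  # 145000.0
```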
| # | Market | ISO | IT Load | Pipeline | Vacancy | Key Players |
|---|---|---|---|---|---|---|
| 1 | Northern Virginia VA | PJM | 4.2 GW | +5.8 GW | 0.8% | AWS, Microsoft, Google |
| 2 | Dallas-Fort Worth TX | ERCOT | 2.0 GW | +6.5 GW | 2.5% | AWS, Google, Meta |
| 3 | Phoenix / Mesa AZ | WAPA/APS | 1.5 GW | +3.2 GW | 1.7% | Microsoft, Google, Meta |
| 4 | Central Ohio OH | PJM | 0.9 GW | +2.4 GW | 4.2% | AWS, Google, Meta |
| 5 | Atlanta GA | SOCO | 0.8 GW | +1.9 GW | 3.1% | Microsoft, Google, Switch |
| 6 | Chicago IL | PJM | 0.7 GW | +1.4 GW | 5.5% | Equinix, Digital Realty, QTS |
| 7 | Silicon Valley CA | CAISO | 0.6 GW | +0.5 GW | 1.8% | Equinix, CoreSite, Vantage |
| 8 | Portland / Hillsboro OR | BPA/PGE | 0.5 GW | +1.2 GW | 2.2% | Google, Meta, QTS |
| 9 | Denver / Aurora CO | WAPA | 0.4 GW | +0.8 GW | 4.8% | Lumen, Flexential, CoreSite |
| 10 | Reno / Las Vegas NV | NV Energy | 0.3 GW | +1.5 GW | 3.5% | Switch, Apple, Google |
Power economics by market: wholesale LMP, retail, PPA, all-in cost, lease rates
| Market | ISO | LMP $/MWh | Retail $/MWh | PPA $/MWh | All-In $/kW | Lease $/kW | Vacancy | Power % Opex |
|---|---|---|---|---|---|---|---|---|
| Reno / Las Vegas NV | NV Energy | $34 | $58 | $40 | $68 | $82 | 3.5% | 33% |
| Portland / Hillsboro OR | BPA/PGE | $28 | $55 | $38 | $72 | $88 | 2.2% | 32% |
| Denver / Aurora CO | WAPA | $30 | $62 | $42 | $78 | $95 | 4.8% | 34% |
| Central Ohio OH | PJM | $35 | $68 | $48 | $88 | $100 | 4.2% | 36% |
| Atlanta GA | SOCO | $36 | $70 | $50 | $92 | $105 | 3.1% | 37% |
| Phoenix / Mesa AZ | WAPA/APS | $38 | $72 | $52 | $95 | $110 | 2.5% | 35% |
| Dallas-Fort Worth TX | ERCOT | $32 | $65 | $45 | $105 | $120 | 3.8% | 38% |
| Chicago IL | PJM | $40 | $82 | $55 | $110 | $130 | 5.5% | 40% |
| Northern Virginia VA | PJM | $48 | $78 | $62 | $135 | $150 | 1.2% | 42% |
| Silicon Valley CA | CAISO | $58 | $125 | $75 | $165 | $185 | 1.8% | 48% |
All-in power cost = wholesale energy + transmission + distribution + demand charges (net of PPA discount). Lease rate = wholesale colocation (1+ MW).
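The all-in cost decomposition above is a straight sum of components net of any PPA discount. A minimal sketch, with illustrative component values in $/kW/month (not figures from the table):

```python
# Sketch of the all-in power cost decomposition described above:
# wholesale energy + transmission + distribution + demand charges,
# net of any PPA discount. Component values are illustrative, $/kW/month.

def all_in_cost(energy: float, transmission: float, distribution: float,
                demand: float, ppa_discount: float = 0.0) -> float:
    return energy + transmission + distribution + demand - ppa_discount

# e.g. $45 energy, $12 transmission, $15 distribution, $20 demand
# charges, and a $4/kW PPA discount:
print(all_in_cost(45, 12, 15, 20, ppa_discount=4))  # 88
```

Operator gross margin per kW is then simply lease rate minus all-in power cost, which is what the margin chart below plots.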
Composition of all-in power cost by market ($/kW/month)
Power cost vs lease rate (bubble size = vacancy rate)
Best economics = lower-left (low power cost, low lease rate). Tightest markets = small bubble (low vacancy).
2020-2025 by market ($/kW/month)
Green area above red = positive margin. Power costs rising faster than lease rates in most markets, compressing operator margins.
Selected hyperscale and AI compute facilities
| Developer | Location | Capacity | Investment | Status |
|---|---|---|---|---|
| xAI | Memphis, TN (Phase 3) | 1.5 GW | $10.0B | under construction |
| CoreWeave | NJ / TX / various | 1.2 GW | $8.0B | under construction |
| Amazon (AWS) | Mississippi | 1.0 GW | $10.0B | announced |
| Meta | Temple, TX | 800 MW | $5.0B | planning |
| OpenAI / Stargate | New Mexico | 800 MW | $15.0B | planning |
| OpenAI / Stargate | Ohio | 800 MW | — | planning |
| OpenAI / Stargate | Wisconsin | 800 MW | — | planning |
| Microsoft | Chesterfield County, VA | 600 MW | $3.0B | under construction |
| OpenAI / Stargate | Texas (Expansion) | 600 MW | — | planning |
| Microsoft | Mount Pleasant, WI | 500 MW | $3.3B | under construction |
| Lancium | Abilene, TX | 500 MW | — | planning |
| xAI | Memphis, TN | 500 MW | $18.0B | operational |
| Amazon (Anthropic) | Indiana | 500 MW | $11.0B | operational |
| Amazon (AWS) | Loudoun County, VA | 450 MW | — | under construction |
| — | Kansas City, MO | 400 MW | $2.0B | under construction |
| — | Columbus, OH | 400 MW | $1.8B | planning |
| Applied Digital | Ellendale, ND | 400 MW | $2.0B | planning |
| Anthropic / Fluidstack | Lake Mariner, NY | 360 MW | $6.0B | planning |
| QTS (Blackstone) | Manassas, VA | 300 MW | $2.0B | under construction |
| Anthropic / Fluidstack | Texas | 250 MW | $5.0B | under construction |
| Crusoe Energy | Abilene, TX | 200 MW | — | under construction |
| OpenAI / Stargate | Abilene, TX (Flagship) | 200 MW | $20.0B | under construction |
NVIDIA GB200 NVL72 racks at 120+ kW require liquid cooling. The industry is in a rapid transition from air to liquid cooling.
Evaporative cooling towers consume 1-2M gal/MW/year and cannot support next-gen GPU racks; they are being phased out for high-density deployments.
Efficiency improvement over traditional air cooling. Still requires significant air volume for high-density racks.
Hybrid approach — liquid at rack level, no raised floor changes. Popular as retrofit for existing facilities.
Cold plates on CPUs/GPUs. Standard for GB200/GB300 NVL72 racks (120-150+ kW). Vertiv MegaMod HDX (Jan 2026), CoolIT AHx240, Schneider, ZutaCore.
Servers submerged in dielectric fluid. GRC, LiquidCool Solutions. Eliminates fans entirely. Best PUE < 1.03.
Fluid boils on chip surface. Highest density support. LiquidCool, Iceotope. Still early-stage for hyperscale.
No water consumed. Higher capital cost. Becoming mandated in water-stressed regions (Phoenix, parts of Texas).
PUE (Power Usage Effectiveness) and water consumption. PUE of 1.0 = perfect efficiency.
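PUE is the ratio of total facility power to IT equipment power, so overhead (cooling, power conversion, lighting) is everything above 1.0. A minimal sketch with illustrative loads:

```python
# PUE sketch: total facility power divided by IT equipment power.
# A PUE of 1.0 would mean every watt goes to IT load; real facilities
# are higher. Input loads below are illustrative.

def pue(total_facility_kw: float, it_kw: float) -> float:
    return total_facility_kw / it_kw

print(pue(1300.0, 1000.0))  # 1.3  (air-cooled facility with 300 kW overhead)
print(pue(1030.0, 1000.0))  # 1.03 (best-case immersion cooling, per above)
```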
Potomac watershed. Loudoun County implementing water reuse requirements.
Extreme heat drives higher cooling needs. Groundwater restrictions tightening. Push toward dry cooling.
Trinity River watershed. Drought periods create intermittent constraints.
Great Lakes proximity. Cooler climate reduces cooling load.
Lake Michigan water supply. Cold winters enable free cooling 5-6 months/year.
Chattahoochee watershed. 2007 drought raised long-term concerns.
California drought regulations. Many operators switching to adiabatic/dry cooling.
Mild Pacific NW climate. Hydroelectric power abundance. Natural free cooling 8+ months/year.
Mile-high altitude aids cooling. Colorado River compact constraints on long-term water rights.
Desert climate drives high cooling load. Lake Mead levels critical. Switch uses 100% renewable + dry cooling.
Substation buildout — single largest bottleneck
On-site transformer yards for DC campuses
Power distribution within DC facilities
N+1 redundancy requirement for Tier III/IV DCs
Battery backup for ride-through during generator start
Air-cooled data centers, decreasing as liquid cooling grows
Required for NVIDIA GB200+ racks (100-120 kW/rack)
Grid interconnection from substation to DC
Evaporative heat rejection for chilled water loops
Per-rack power monitoring and distribution
Physical server mounting and cable management
Power distribution from UPS to row-level PDUs
FM-200 / Novec 1230 fire protection for IT rooms
Utility-to-generator switchover for uptime SLAs
Temperature/humidity control for enterprise and colocation DCs
Intra-campus and metro fiber connectivity
Electrical wiring, busbars, transformers, cables. Largest single-facility consumer of copper.
Liquid cooling systems require 30% more copper per rack than air-cooled.
Power cables, heat sinks, structural framing. Substitute for copper in some applications.
Copper substitution driving aluminum adoption in cable trays and busbars.
Structural steel for buildings, transformer cores, racking systems.
Hyperscale campus buildout accelerating. Multi-building campuses require more structural steel.
Gas turbine backup generation. Increasingly replacing diesel for on-site power.
Behind-the-meter gas plants for baseload (xAI Memphis, Crusoe) driving demand.
High-conductivity connections, solar panel interconnects for on-site generation.
Stable per-facility demand but total demand growing with facility count.
Battery backup systems (UPS + grid-scale). Growing with longer ride-through requirements.
4-hour grid-scale BESS replacing diesel gensets. UPS battery density increasing.
Evaporative cooling. PUE target of 1.1-1.3. Becoming a permitting constraint in arid regions.
Shift to direct-to-chip liquid cooling and adiabatic systems reducing water consumption.
Foundations, containment structures, raised floors, and site work.
Multi-story DC designs in land-constrained markets increasing concrete volume.
Backup generator testing and emergency operations. Declining as gas/battery replaces diesel.
Natural gas turbines and BESS displacing diesel for backup and peaker use.
Battery cathodes (NMC chemistry), stainless steel in cooling systems and structural components.
LFP batteries gaining share over NMC in stationary storage, offsetting volume growth.
Galvanized steel structural members, cable trays, and outdoor equipment enclosures.
Proportional to steel consumption. Corrosion protection requirements unchanged.
Permanent magnets in fans, cooling pumps, and precision motors. HDD magnets declining.
SSD replacing HDD eliminates voice coil magnets. Partial offset from cooling pump motors.
R-134a, R-410A, and next-gen low-GWP alternatives for chiller and CRAC systems.
Transitioning to low-GWP alternatives (R-1234yf, R-454B). Volume stable as liquid cooling grows.
20-year PPA. TMI-1 restart expected 2028. Largest single corporate nuclear deal.
Originally 960 MW behind-the-meter (FERC denied Nov 2024). Restructured Jun 2025 as 1,920 MW front-of-meter PPA through 2042 (~$18B).
First commercial SMR PPA. 6 reactors, delivery starting 2030.
Investment in X-energy + off-take agreement for Xe-100 reactors.
Larry Ellison announced 3 SMRs for data center campus. Design partner TBD.
Ohio DC campus. First announced SMR-powered data center project.
Multiple co-location deals at existing nuclear plants. Aggregate estimate.
20-year PPA. Deliveries begin Q4 2027, full capacity by 2032. Texas.
20-year PPAs. Ohio and Pennsylvania plants including 433 MW of uprates.
Pike County, OH campus. Phased deployment beginning ~2030 into PJM.
Up to 8 Natrium reactors. 2.8 GW baseload + 1.2 GW storage, peak 4 GW. Two initial units (690 MW) ~2032.
TVA signed 50 MW PPA with Kairos for Google data centers in Tennessee/Alabama.
| Name | Owner | Chip | Chips | H100e | Power (MW) | Country | Status |
|---|---|---|---|---|---|---|---|
| xAI Colossus | xAI | NVIDIA H100 | 100,000 | 100,000 | 150 | United States | Operational |
| Meta RSC-2 | Meta | NVIDIA H100 | 24,576 | 24,576 | 40 | United States | Operational |
| CoreWeave Chicago | CoreWeave | NVIDIA H100 | 16,384 | 16,384 | 28 | United States | Operational |
| Microsoft Eagle | Microsoft | NVIDIA H100 | 14,400 | 14,400 | 24 | United States | Operational |
| Google TPU v5p Pod | Google | TPU v5p | 8,960 | 12,500 | 20 | United States | Operational |
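The H100e column normalizes heterogeneous accelerators into H100 equivalents via a per-chip conversion factor. A sketch of that normalization, where the TPU v5p factor is an assumption inferred from the row above (12,500 H100e / 8,960 chips ≈ 1.4), not an official benchmark:

```python
# Sketch of the H100-equivalent (H100e) normalization used in the table.
# Conversion factors are assumptions inferred from the rows above,
# not official benchmark ratios.

H100E_FACTOR = {
    "NVIDIA H100": 1.0,
    "TPU v5p": 12_500 / 8_960,  # ≈ 1.395, implied by the Google row
}

def to_h100e(chip: str, count: int) -> float:
    """Convert a chip count into H100-equivalent units."""
    return count * H100E_FACTOR[chip]

print(round(to_h100e("NVIDIA H100", 100_000)))  # 100000
print(round(to_h100e("TPU v5p", 8_960)))        # 12500
```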
Pipeline scoreboard, vacancy rates, power availability, REIT metrics.
DC stock heatmap, capex/revenue, nuclear PPAs, valuation matrix.
Ground training & inference costs in real commodity prices.
Interactive map of ~120 major data centers by owner, market, and status.
Water & electricity footprint, ENERGY STAR PUE scores, efficiency trends.
Model how data center load affects grid stability and power prices.
Projected electricity demand from AI workloads through 2030.
Transformer lead times, equipment supply chains constraining DC buildout.
Live AWS, Azure, GCP incident monitoring and status history.