Scrutica
Facility-level FLOP capacity estimated against active and historical regulatory thresholds. Power-path estimation is calibrated to the SemiAnalysis 100K H100 cluster decomposition (PUE 1.15, GPU share of IT load 0.49, dense FP16 specs); residuals are shown alongside the EU AI Office's ±30% measurement tolerance band from the July 2025 GPAI Guidelines. A reverse calculator solves for minimum GPUs or training days against any selected regime.
The model-counting question — "how many notable models cross threshold X?" — is Epoch AI's territory. This page asks the facility-level question their dataset can't answer: which sites have the physical compute to put a model across the EU AI Act 10²⁵ line, the lower China CAC ~10²⁴ line, or the now-rescinded US EO 14110 10²⁶ line, and under whose jurisdiction do they sit? Of the 15 reference models, 14 cross the EU 10²⁵ threshold and 3 would have crossed the rescinded US 10²⁶ threshold. FLOP is estimated along three paths, with cross-path divergence flagged.
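Under these defaults, both the power path and the hardware path reduce to simple arithmetic. A minimal sketch (assuming H100 SXM5 at 700 W TDP and 989 TFLOP/s dense FP16; the remaining constants are the page defaults from the methodology notes; function names are illustrative):

```python
PUE = 1.15               # facility power / IT power
GPU_SHARE = 0.49         # GPU fraction of IT load
GPU_TDP_W = 700.0        # assumed H100 SXM5 board power
PEAK_FP16 = 989e12       # dense FP16 FLOP/s per H100, no sparsity (assumed)
MFU = 0.40               # default model FLOP utilization
INTERCONNECT = 0.85      # default interconnect efficiency
SECONDS_PER_DAY = 86_400

def daily_flop_from_power(facility_mw: float) -> float:
    """Power path: facility power (MW) -> estimated daily training FLOP."""
    gpu_watts = facility_mw * 1e6 / PUE * GPU_SHARE   # strip PUE, keep GPU share
    n_gpus = gpu_watts / GPU_TDP_W                    # implied accelerator count
    return n_gpus * PEAK_FP16 * MFU * INTERCONNECT * SECONDS_PER_DAY

def daily_flop_from_gpus(n_gpus: int) -> float:
    """Hardware path: GPU count -> estimated daily training FLOP."""
    return n_gpus * PEAK_FP16 * MFU * INTERCONNECT * SECONDS_PER_DAY
```

With these constants, a 1000 MW facility lands at roughly 1.8 × 10²⁵ FLOP/day and 500,000 H100s at roughly 1.5 × 10²⁵ FLOP/day, in line with the table below.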
FACILITIES ≤ 30 DAYS (10²⁵ FLOP): ~358
FACILITIES ≤ 90 DAYS: ~491
FACILITIES ≤ 1 YEAR: ~624
MODELS ABOVE EU 10²⁵: ~14
MODELS ABOVE US 10²⁶: ~3
EU AI Act — Systemic Risk GPAI
Cumulative training compute (total FLOP), not inference throughput. Article 51(2): "A general-purpose AI model shall be presumed to have high impact capabilities ... when the cumulative amount of compute used for its training measured in floating point operations is greater than 10^25." The July 202...
Regulation (EU) 2024/1689, Article 51(2) + Annex XIII · Official Journal of the European Union · Tier 1
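The threshold comparison across the regimes named on this page can be sketched as follows (thresholds and statuses as stated above; the China CAC figure is approximate; the dict layout and function name are illustrative):

```python
# Regime name -> (training-compute threshold in FLOP, status).
# EU AI Act Art. 51(2): 1e25 (active); China CAC: ~1e24 (active);
# US EO 14110: 1e26 (rescinded).
REGIMES = {
    "EU AI Act Art. 51(2)": (1e25, "active"),
    "China CAC": (1e24, "active"),
    "US EO 14110": (1e26, "rescinded"),
}

def regimes_crossed(cumulative_flop: float, include_rescinded: bool = True):
    """Return the regimes whose cumulative-training-FLOP threshold is exceeded."""
    return [name for name, (threshold, status) in REGIMES.items()
            if cumulative_flop > threshold
            and (include_rescinded or status == "active")]
```

A run at 5 × 10²⁵ cumulative FLOP crosses the EU and China lines but not the historical US line; only above 10²⁶ would all three have applied.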
Threshold-Capable Facilities by Country (within 1 year, 10²⁵ FLOP)
Regime Exposure flags facilities in a jurisdiction where EU AI Act — Systemic Risk GPAI applies (SUBJECT) or may apply (MAY APPLY)
| Country | Daily FLOP | Days to 10²⁵ FLOP | Estimation Path | Regime Exposure |
|---|---|---|---|---|
| United Arab Emirates | ~8.85 × 10²⁵ | <1 | Power: 5000 MW | Not subject |
| United Arab Emirates | ~8.85 × 10²⁵ | <1 | Power: 5000 MW | Not subject |
| United States | ~4.00 × 10²⁵ | <1 | Power: 2260 MW | Not subject |
| United States | ~3.89 × 10²⁵ | <1 | Power: 2200 MW | Not subject |
| United States | ~3.89 × 10²⁵ | <1 | Power: 2200 MW | Not subject |
| United States | ~3.54 × 10²⁵ | <1 | Power: 2000 MW | Not subject |
| United States | ~3.18 × 10²⁵ | <1 | Power: 1800 MW | Not subject |
| Saudi Arabia | ~2.65 × 10²⁵ | <1 | Power: 1500 MW | Not subject |
| United States | ~2.48 × 10²⁵ | <1 | Power: 1400 MW | Not subject |
| United States | ~2.48 × 10²⁵ | <1 | Power: 1400 MW | MAY APPLY |
| United States | ~2.12 × 10²⁵ | <1 | Power: 1200 MW | Not subject |
| United States | ~2.03 × 10²⁵ | <1 | Hardware: 700,000 GPUs (H100 assumed) | Not subject |
| United States | ~1.93 × 10²⁵ | <1 | Power: 1092 MW | Not subject |
| United States | ~1.77 × 10²⁵ | <1 | Power: 1000 MW | Not subject |
| United States | ~1.77 × 10²⁵ | <1 | Power: 1000 MW | Not subject |
| United States | ~1.77 × 10²⁵ | <1 | Power: 1000 MW | Not subject |
| United States | ~1.77 × 10²⁵ | <1 | Power: 1000 MW | Not subject |
| United Arab Emirates | ~1.77 × 10²⁵ | <1 | Power: 1000 MW | Not subject |
| United States | ~1.60 × 10²⁵ | <1 | Hardware: 550,000 GPUs (H100 assumed) | Not subject |
| United States | ~1.59 × 10²⁵ | <1 | Power: 900 MW | Not subject |
| United States | ~1.59 × 10²⁵ | <1 | Power: 900 MW | Not subject |
| United States | ~1.54 × 10²⁵ | <1 | Hardware: 530,000 GPUs (H100 assumed) | Not subject |
| United States | ~1.51 × 10²⁵ | <1 | Power: 854 MW | Not subject |
| United States | ~1.45 × 10²⁵ | <1 | Hardware: 500,000 GPUs (H100 assumed) | Not subject |
| France | ~1.45 × 10²⁵ | <1 | Hardware: 500,000 GPUs (H100 assumed) | SUBJECT |
| United States | ~1.42 × 10²⁵ | <1 | Power: 800 MW | Not subject |
| United States | ~1.42 × 10²⁵ | <1 | Power: 800 MW | Not subject |
| United States | ~1.38 × 10²⁵ | <1 | Power: 782 MW | Not subject |
| United States | ~1.33 × 10²⁵ | <1 | Power: 750 MW | MAY APPLY |
| | ~1.33 × 10²⁵ | <1 | Power: 750 MW | Not subject |
| India | ~1.31 × 10²⁵ | <1 | Hardware: 450,000 GPUs (H100 assumed) | Not subject |
| United States | ~1.28 × 10²⁵ | <1 | Power: 725 MW | Not subject |
| | ~1.24 × 10²⁵ | <1 | Power: 700 MW | MAY APPLY |
| Australia | ~1.19 × 10²⁵ | <1 | Power: 675 MW | Not subject |
| United States | ~1.16 × 10²⁵ | <1 | Hardware: 400,000 GPUs (H100 assumed) | Not subject |
| United States | ~1.06 × 10²⁵ | <1 | Power: 600 MW | MAY APPLY |
| Brazil | ~1.06 × 10²⁵ | <1 | Power: 600 MW | Not subject |
| United States | ~1.06 × 10²⁵ | <1 | Power: 600 MW | Not subject |
| United States | ~1.06 × 10²⁵ | <1 | Power: 600 MW | Not subject |
| Chile | ~1.06 × 10²⁵ | <1 | Power: 600 MW | Not subject |
| | ~1.06 × 10²⁵ | <1 | Power: 600 MW | Not subject |
| United States | ~1.04 × 10²⁵ | <1 | Power: 590 MW | Not subject |
| United States | ~1.02 × 10²⁵ | <1 | Power: 576 MW | Not subject |
| United States | ~9.82 × 10²⁴ | ~1 | Power: 555 MW | Not subject |
| United States | ~9.40 × 10²⁴ | ~1 | Power: 531 MW | Not subject |
| United States | ~8.85 × 10²⁴ | ~1 | Power: 500 MW | Not subject |
| United States | ~8.85 × 10²⁴ | ~1 | Power: 500 MW | Not subject |
| Chile | ~8.85 × 10²⁴ | ~1 | Power: 500 MW | Not subject |
| United States | ~8.85 × 10²⁴ | ~1 | Power: 500 MW | Not subject |
| United States | ~8.85 × 10²⁴ | ~1 | Power: 500 MW | Not subject |
Estimates assume continuous operation at full training capacity using dense FP16 TFLOP/s (no sparsity). Default MFU: 40%. Default interconnect efficiency: 85%. Hardware-based estimates assume H100 SXM5 where GPU model is unspecified. Power-based estimates use PUE 1.15 and 49% GPU share of IT load (calibrated against the SemiAnalysis 100K H100 cluster decomposition; see Calibration tab). Residuals from this calibration sit inside the EU AI Office's ±30% measurement tolerance for cumulative training FLOP under the July 2025 GPAI Guidelines. Methodology version: 1.3.1.
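The reverse calculation mentioned in the intro is the same arithmetic run backwards: given a threshold and a time budget, solve for the minimum GPU count, or given a daily FLOP rate, solve for days to threshold. A minimal sketch under the same defaults (989 TFLOP/s dense FP16 assumed for H100 SXM5, 40% MFU, 85% interconnect efficiency; function names are illustrative):

```python
import math

def days_to_threshold(daily_flop: float, threshold: float = 1e25) -> float:
    """Days of continuous full-capacity training to accumulate `threshold` FLOP."""
    return threshold / daily_flop

def min_gpus_for_threshold(threshold: float, days: float,
                           peak_fp16: float = 989e12,   # dense FP16/GPU (assumed)
                           mfu: float = 0.40,
                           interconnect: float = 0.85) -> int:
    """Minimum GPU count that crosses `threshold` cumulative FLOP in `days`."""
    per_gpu_daily = peak_fp16 * mfu * interconnect * 86_400
    return math.ceil(threshold / (per_gpu_daily * days))
```

Under these defaults a ~1.77 × 10²⁵ FLOP/day facility crosses the EU 10²⁵ line in under a day, while crossing it within a 30-day run takes on the order of ten thousand H100-class GPUs.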