Normalized pricing in Scrutica Compute Units ($/petaFLOP-day): hourly published rates converted via Epoch AI hardware specs (dense BF16 TFLOP/s, no 2:4 structured sparsity). Provider coverage: AWS, Azure, GCP, CoreWeave, Lambda. On-demand, spot, and reserved pricing. On-premise TCO comparison at 3-year and 5-year horizons. Estimate facility compute capacity → FLOP Engine.
What does it cost to train AI in different places, on different hardware, through different providers? This index normalizes pricing across cloud providers so you can compare like with like. The same unit of compute can cost 3–10x more depending on where and how you buy it; that pricing spread determines which organizations can realistically afford frontier training. Prices are normalized to peak theoretical TFLOP/s; effective training throughput is typically 30–50% of peak (the FLOP Engine applies this adjustment).
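A minimal sketch of the normalization, assuming the conversion works as described above (hourly rate → $/petaFLOP-day via dense BF16 peak TFLOP/s). The function name and the example price are hypothetical, and the H100 spec figure is illustrative rather than quoted from Epoch AI:

```python
def price_per_petaflop_day(hourly_usd: float, peak_tflops: float,
                           utilization: float = 1.0) -> float:
    """Convert a published hourly rate into $/petaFLOP-day.

    peak_tflops: dense BF16 peak in TFLOP/s, no 2:4 structured sparsity.
    utilization: fraction of peak actually achieved; 1.0 gives the raw
                 normalized price, ~0.3-0.5 reflects typical training throughput.
    """
    cost_per_day = hourly_usd * 24                        # $ per device-day
    pflop_days = (peak_tflops / 1000.0) * utilization     # petaFLOP-days delivered per day
    return cost_per_day / pflop_days

# Illustrative only: a hypothetical $2.50/hr H100 at ~989 dense BF16 TFLOP/s
print(price_per_petaflop_day(2.50, 989))        # ~= $60.7 per peak petaFLOP-day
print(price_per_petaflop_day(2.50, 989, 0.4))   # ~= $151.7 effective, at 40% of peak
```

The gap between the two outputs is the peak-vs-effective adjustment the FLOP Engine applies: at 40% of peak, the effective price is 2.5x the raw normalized figure.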