Scrutica
76 terms across 8 domains, spanning semiconductor engineering and AI governance policy. Each entry includes a precise technical definition, cross-references, domain categorization, and an explanation of why the term matters for policymakers, legislators, and governance researchers.
Extreme ultraviolet lithography uses 13.5 nm wavelength light to pattern features at process nodes of 7 nm and below on silicon wafers. Requires a tin-droplet plasma source, multilayer mirrors, and a vacuum-sealed optical path.
Related: DUV lithography · Photomask · Process node
Governance relevance
ASML is the sole manufacturer of EUV machines. Any disruption to ASML or its supply chain halts production of the most advanced AI chips worldwide.
ASML holds a complete monopoly on EUV lithography equipment.
Deep ultraviolet lithography uses 193 nm (ArF) or 248 nm (KrF) wavelength light. Multi-patterning techniques extend DUV to ~7 nm nodes at the cost of additional process steps and yield loss.
Related: EUV lithography · Multi-patterning · Process node
Governance relevance
DUV machines are manufactured by ASML, Nikon, and Canon. China can access DUV (not EUV) equipment, which supports production at mature nodes but not cutting-edge AI chips.
A named manufacturing generation (e.g., N4, N5, N7) indicating transistor density and power characteristics. Modern node names are marketing labels; actual gate lengths differ from the stated nanometer value.
Related: EUV lithography · FinFET · GAA
Governance relevance
Export controls target chips manufactured at advanced nodes (roughly 14 nm and below). The node designation determines whether a chip is freely exportable or restricted.
TSMC N4 achieves ~1.7x logic density over N7.
A thin disc of crystalline silicon (typically 300 mm diameter) on which hundreds of chip dies are patterned simultaneously. Wafer starts per month is the standard measure of fab capacity.
Related: Die · Yield · Foundry
Governance relevance
Global advanced-node wafer capacity is concentrated at TSMC (~92% market share for sub-7nm). Wafer allocation decisions by a single company determine which AI chips get manufactured.
The fraction of functional dies per wafer. Advanced-node yields typically start at 30-50% and improve to 80%+ over 12-18 months of production maturity.
Related: Wafer · Die · Process node
Governance relevance
Low yields at new process nodes constrain chip supply regardless of wafer capacity. Yield data is closely guarded; its absence is itself a signal of production difficulty.
Chip-on-Wafer-on-Substrate: TSMC's 2.5D advanced packaging technology that places multiple chiplets and HBM stacks on a silicon interposer. Critical bottleneck for AI accelerators.
Related: HBM · Interposer · Advanced packaging
Governance relevance
CoWoS packaging capacity constrained AI chip supply through 2024-2025. TSMC controls nearly all advanced packaging for AI accelerators; alternative capacity (Samsung I-Cube) remains limited.
Every H100 and A100 GPU uses CoWoS packaging.
High Bandwidth Memory: vertically stacked DRAM dies connected by through-silicon vias (TSVs). HBM3e provides up to ~1.2 TB/s of bandwidth per stack; an H200 aggregates 4.8 TB/s across six stacks. Required for modern AI accelerators.
Related: CoWoS · Memory bandwidth · SK Hynix
Governance relevance
SK Hynix (~56%), Samsung (~25%), and Micron (~19%) produce all HBM (Q3 2025, Astute Group; interpolated from a Q2 2025 baseline of ~62% for SK Hynix, adjusted for Samsung's HBM3E qualification). HBM supply is a binding constraint on AI accelerator production, independent of logic chip availability.
The H100 SXM5 ships with 80 GB of HBM3; the H100 PCIe uses HBM2e; the H200 SXM carries 141 GB of HBM3e.
A semiconductor manufacturer that fabricates chips designed by other companies (fabless firms). TSMC, Samsung Foundry, and GlobalFoundries are the major foundries.
Related: Fabless · IDM · Wafer
Governance relevance
The foundry model concentrates manufacturing in a small number of companies and geographies. Taiwan hosts >60% of global advanced-node foundry capacity.
A chip company that designs but does not manufacture its own chips. NVIDIA, AMD, Qualcomm, and Apple are fabless; they rely on foundries for production.
Related: Foundry · IDM
Governance relevance
Fabless companies depend entirely on foundry access. Export controls on foundry services can restrict a fabless company's ability to produce chips even if the design itself is unrestricted.
Integrated Device Manufacturer: a company that both designs and fabricates its own chips. Intel, Samsung, and Texas Instruments are IDMs.
Related: Foundry · Fabless
Governance relevance
IDMs have more supply chain resilience than fabless companies but face higher capital requirements. The US CHIPS Act subsidies primarily target IDM-style domestic manufacturing.
Outsourced Semiconductor Assembly and Test: companies that package and test fabricated chips. ASE, Amkor, and JCET are major OSATs.
Related: CoWoS · Advanced packaging
Governance relevance
OSATs are concentrated in Taiwan, China, and Malaysia. Advanced packaging (required for AI chips) is more concentrated than standard OSAT services.
Fin Field-Effect Transistor: a 3D transistor architecture where the gate wraps around a raised "fin" of silicon. Used at 14nm through 3nm nodes.
Related: GAA · Process node
Governance relevance
The performance gains from FinFET architecture underpin every modern AI accelerator. The transition to GAA transistors at 2nm creates a new manufacturing chokepoint.
Gate-All-Around transistor: the successor to FinFET where the gate completely surrounds the channel. Samsung and TSMC are transitioning to GAA at 2nm and below.
Related: FinFET · Process node
Governance relevance
GAA manufacturing requires new equipment and process expertise. Countries and companies without access to GAA technology face a capability gap in AI chip production that grows with each successive node.
A quartz plate with patterned chrome features that defines the circuit layout projected onto the wafer during lithography. A single advanced chip design requires 80-100 mask layers.
Related: EUV lithography · DUV lithography
Governance relevance
Photomask sets for advanced nodes cost $10-30M and take months to produce. This cost barrier limits who can design cutting-edge chips.
Post-fabrication techniques (CoWoS, InFO, EMIB, Foveros) that integrate multiple chiplets and memory stacks into a single package. Enables the large-die configurations required for AI accelerators.
Related: CoWoS · HBM · Chiplet
Governance relevance
Advanced packaging is the newest bottleneck in AI chip supply. Capacity is more concentrated than wafer fabrication, with TSMC controlling the majority of AI-relevant packaging.
Floating-point operation: a single arithmetic operation (add, multiply) on a floating-point number. FLOP/s (per second) measures raw compute throughput; total FLOP measures cumulative compute used for training.
Related: TFLOP/s · Training compute · FP16
Governance relevance
FLOP is the primary metric for AI governance thresholds. The EU AI Act requires reporting for models trained above 10^25 FLOP; US Executive Order 14110 (since rescinded) set its notification threshold at 10^26 FLOP.
GPT-4 training compute is estimated (Tier 3) at approximately 2 x 10^25 FLOP.
Tera (10^12) floating-point operations per second. The standard unit for GPU compute throughput. H100 SXM: 989.5 TFLOP/s FP16 dense, 1,979 TFLOP/s with 2:4 structured sparsity.
Related: FLOP · FP16 · BF16
Governance relevance
TFLOP/s ratings determine how quickly a facility can accumulate training compute. Higher TFLOP/s per GPU means fewer GPUs are needed to reach regulatory FLOP thresholds.
Half-precision 16-bit floating-point format. Most AI training uses FP16 or BF16 arithmetic; FLOP/s benchmarks for AI typically report FP16 throughput.
Related: BF16 · FP8 · TFLOP/s
Governance relevance
The precision format determines the effective compute per operation. Export controls reference total operations per second, which varies by precision mode.
Brain floating-point 16-bit format: same total bits as FP16 but with 8 exponent bits (matching FP32) and 7 mantissa bits. Preferred for training stability.
Related: FP16 · FP8
Governance relevance
BF16 and FP16 deliver comparable throughput on modern GPUs. Governance thresholds typically count operations regardless of precision format.
Thermal Design Power: the maximum sustained power draw of a chip in watts. H100 SXM: 700W; H100 PCIe: 350W; H200 SXM: 700W; B200: 1,000W.
Related: PUE · GPU cluster
Governance relevance
TDP determines data center power requirements and cooling needs. A facility's total power capacity divided by per-GPU TDP provides an upper bound on GPU count, which Scrutica uses as one estimation path for facility-level compute.
Power estimation path: 425 MW facility / 700 W per H100 = ~607K GPUs maximum.
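A minimal sketch of this power-path bound, assuming only facility power, per-GPU TDP, and PUE are known; the function name and the 1.2 PUE default are illustrative, not Scrutica's actual implementation.

```python
def max_gpu_count(facility_mw: float, gpu_tdp_w: float, pue: float = 1.2) -> int:
    """Upper bound on GPU count implied by facility power capacity.

    Divides out PUE overhead first, then allocates the remaining IT
    power across per-GPU TDP. Ignores CPU/storage/network draw, so
    the true count is lower still.
    """
    it_power_w = facility_mw * 1e6 / pue
    return int(it_power_w / gpu_tdp_w)

# 425 MW at PUE 1.0 reproduces the ~607K H100 ceiling quoted above;
# a more realistic PUE of 1.2 lowers the bound to ~506K.
print(max_gpu_count(425, 700, pue=1.0))  # 607142
print(max_gpu_count(425, 700, pue=1.2))  # 505952
```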
NVIDIA's proprietary high-bandwidth GPU-to-GPU interconnect. NVLink 4.0 (Hopper) provides 900 GB/s bidirectional bandwidth; NVLink 5.0 (Blackwell) provides 1,800 GB/s.
Related: InfiniBand · GPU cluster · Interconnect fabric
Governance relevance
NVLink is a sole-source technology from NVIDIA. Clusters using NVLink achieve higher training efficiency than those using standard PCIe or InfiniBand for GPU communication.
High-bandwidth, low-latency networking fabric used to connect GPU nodes in AI training clusters. NVIDIA (via Mellanox acquisition) dominates the AI-grade InfiniBand market.
Related: NVLink · Interconnect fabric · GPU cluster
Governance relevance
InfiniBand is a controlled technology under US export rules. Restricting InfiniBand access limits a country's ability to build large-scale AI training clusters, even if GPUs are available.
Specialized matrix-multiply units in NVIDIA GPUs that perform mixed-precision fused multiply-add operations. Responsible for the majority of AI training throughput.
Related: TFLOP/s · FP16 · BF16
Governance relevance
Tensor core count and generation determine a GPU's AI-relevant compute capacity, which is the basis for export control performance thresholds.
The data transfer rate between GPU compute units and memory, measured in TB/s. H100 SXM: 3.35 TB/s; H200 SXM: 4.8 TB/s (with HBM3e).
Related: HBM · TFLOP/s
Governance relevance
Memory bandwidth is a binding constraint for many AI workloads. Export control thresholds consider both compute throughput and memory bandwidth as dual criteria.
NVIDIA's server-grade GPU form factor with direct NVLink connectivity. SXM variants (H100 SXM, B200 SXM) offer higher power limits and bandwidth than PCIe equivalents.
Related: PCIe · NVLink · TDP
Governance relevance
SXM GPUs are the building blocks of large AI training clusters. The SXM vs. PCIe distinction matters for export controls because SXM enables cluster-scale training that PCIe cannot.
PCI Express: the standard expansion bus interface. PCIe GPUs (e.g., H100 PCIe) have lower TDP (350 W vs. 700 W) and at most paired NVLink bridges rather than a full NVLink fabric, limiting large-scale training capability.
Related: SXM · NVLink
Governance relevance
Some export control frameworks distinguish between SXM and PCIe variants. PCIe GPUs are less useful for frontier AI training, so export restrictions may treat them differently.
The total floating-point operations used to train a model, measured in FLOP. Distinct from inference compute (running a trained model) and fine-tuning compute.
Related: FLOP · Inference · Scaling law
Governance relevance
Training compute is the primary measurable input for AI capability governance. Higher training compute correlates with more capable (and potentially more dangerous) models.
The EU AI Act uses 10^25 total training FLOP as the GPAI reporting threshold.
Running a trained model to produce outputs (predictions, text, images). Inference compute per query is orders of magnitude smaller than total training compute, but aggregate inference demand can exceed training demand.
Related: Training compute · FLOP
Governance relevance
Inference compute is not currently subject to governance thresholds, but aggregate inference demand drives data center capacity expansion and energy consumption.
Empirical power-law relationship between training compute, dataset size, model parameters, and model performance. Kaplan et al. (2020) and Hoffmann et al. (2022, "Chinchilla") established key scaling relationships.
Related: Training compute · Chinchilla-optimal · Parameter count
Governance relevance
Scaling laws predict that spending more compute on training yields more capable models. This predictability enables governance frameworks that use compute thresholds as capability proxies.
A training configuration that balances model size and dataset size according to the Hoffmann et al. (2022) scaling law. Chinchilla-optimal models use ~20 tokens per parameter.
Related: Scaling law · Training compute · Parameter count
Governance relevance
Chinchilla-optimal training requires less compute than over-parameterized training for the same capability. This complicates governance: a FLOP threshold can be reached with a smaller, better-trained model.
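A short illustration of this trade-off, using the standard C ≈ 6·N·D training-compute approximation (a common rule of thumb, not stated in this glossary) together with the ~20 tokens-per-parameter ratio above; the function name is hypothetical.

```python
def chinchilla_training_flop(params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate training compute for a Chinchilla-optimal run.

    Uses the standard C ~= 6 * N * D approximation (6 FLOP per
    parameter per training token); the 20 tokens-per-parameter
    ratio follows Hoffmann et al. (2022).
    """
    tokens = params * tokens_per_param
    return 6.0 * params * tokens

# A 70B-parameter model trained Chinchilla-optimally:
# 6 * 7e10 * 1.4e12 ~= 5.9e23 FLOP -- well under the EU's 10^25 line.
print(f"{chinchilla_training_flop(70e9):.2e}")  # 5.88e+23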
The number of trainable weights in a neural network. GPT-4 is widely reported (Tier 3, analyst estimates; OpenAI has not confirmed) at ~1.8 trillion parameters (MoE). Larger parameter counts require more memory and compute.
Related: Training compute · Scaling law · FLOP
Governance relevance
Parameter count was an early proxy for model capability but is less informative than training compute. Current governance frameworks use FLOP thresholds rather than parameter thresholds.
Model FLOP Utilization: the fraction of theoretical peak GPU throughput actually achieved during training. Typical values: 30-50% for large clusters. Higher MFU means more efficient hardware utilization.
Related: Training compute · GPU cluster · Interconnect fabric
Governance relevance
MFU determines how much real compute a facility extracts from its hardware. Two identical GPU clusters can differ 2x in effective training compute based on MFU alone.
Scrutica's hardware estimation path uses MFU = 40% as the default assumption.
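A rough sketch of how MFU converts peak hardware ratings into governance-relevant accumulation time, combining the H100 FP16 figure and the 40% MFU default quoted above; the function and its defaults are illustrative.

```python
def days_to_flop_threshold(num_gpus: int,
                           peak_tflops: float = 989.5,  # H100 SXM, FP16 dense
                           mfu: float = 0.40,           # default assumption above
                           threshold_flop: float = 1e25) -> float:
    """Days a cluster needs to accumulate a training-compute threshold."""
    sustained_flops = num_gpus * peak_tflops * 1e12 * mfu  # delivered FLOP/s
    return threshold_flop / sustained_flops / 86_400

# 10,000 H100s at 40% MFU reach the EU AI Act's 10^25 FLOP line in ~29 days;
# at 20% MFU the same cluster needs twice as long.
print(round(days_to_flop_threshold(10_000), 1))  # 29.2
```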
Continued training of a pre-trained model on a smaller, task-specific dataset. Uses 100-10,000x less compute than the original pre-training run.
Related: Training compute · Inference
Governance relevance
Fine-tuning can significantly alter a model's capabilities (including safety-relevant ones) at a fraction of the original training cost. Current governance frameworks do not consistently cover fine-tuning compute.
Model Evaluation and Threat Research benchmark measuring how long an AI agent can autonomously work on real-world tasks. Scored on a 0-5 scale corresponding to task durations from seconds to weeks.
Related: Training compute · FLOP
Governance relevance
METR scores provide a standardized measure of autonomous capability. Higher METR scores at lower compute levels would indicate capability is becoming more accessible, a key governance concern.
Frontier models in 2025 achieve METR time horizons of 2-3 (minutes to hours).
Power Usage Effectiveness: ratio of total facility power to IT equipment power. PUE 1.0 = all power goes to compute; PUE 1.2 = 20% overhead for cooling, lighting, networking. Industry average: ~1.3.
Related: TDP · GPU cluster · Liquid cooling
Governance relevance
PUE determines how much of a facility's total power actually reaches GPUs. A lower PUE means more effective compute per megawatt, which feeds directly into facility-level FLOP estimates.
Scrutica's power estimation path uses PUE to convert total facility power to GPU power.
Megawatt: 1 million watts. The standard measure of data center power capacity. A 100 MW facility can host roughly 100,000-140,000 H100 GPUs at full load (accounting for PUE).
Related: PUE · TDP · Utilization rate
Governance relevance
Power capacity is often the binding constraint on AI compute. A 500 MW data center campus represents a significant national infrastructure investment and a measurable concentration of AI capability.
The xAI Colossus facility in Memphis draws approximately 150 MW.
Direct-to-chip or immersion cooling that removes heat more efficiently than air cooling. Required for GPUs above ~400W TDP (Blackwell B200 at 1,000W mandates liquid cooling).
Related: PUE · TDP
Governance relevance
Liquid cooling infrastructure is a prerequisite for next-generation AI clusters. Facilities without it cannot host the latest accelerators; the result is a capacity gap between upgraded and legacy sites.
A set of GPU servers connected by high-bandwidth networking (NVLink + InfiniBand) that can collectively train a single model. Modern frontier clusters contain 10,000-100,000+ GPUs.
Related: NVLink · InfiniBand · MFU
Governance relevance
GPU cluster scale determines what size models a facility can train. Clusters above certain sizes can produce models that exceed governance thresholds.
The fraction of installed GPU capacity actively performing compute, averaged over time. Typical values: 50-80% for training workloads. Affected by scheduling, maintenance, and workload mix.
Related: GPU cluster · MFU
Governance relevance
Utilization determines how much of a facility's theoretical capacity is realized in practice. Low utilization means installed capacity overstates actual compute output.
The network infrastructure connecting GPU nodes within a cluster. Combines NVLink (intra-node), InfiniBand or Ethernet (inter-node), and spine-leaf topology for scalable bandwidth.
Related: NVLink · InfiniBand · GPU cluster
Governance relevance
Interconnect quality determines maximum trainable model size. Without adequate interconnect, additional GPUs cannot participate in a single training run, so raw GPU count overstates effective cluster scale.
GPU cloud providers (CoreWeave, Lambda, Nebius, Applied Digital, IREN) that rent compute capacity without the broad cloud services offered by hyperscalers (AWS, GCP, Azure).
Related: GPU cluster · Sole-source dependency
Governance relevance
Neoclouds have opened AI-specific compute capacity outside hyperscaler ecosystems, but their near-total dependency on NVIDIA GPUs replicates the concentration dynamic at a different layer.
Cloud providers operating at massive scale: Amazon (AWS), Microsoft (Azure), Google (GCP), Meta, Oracle. Characterized by >100 MW data center campuses, custom silicon programs, and global infrastructure.
Related: Neocloud · GPU cluster · MW
Governance relevance
Hyperscalers control the majority of commercially accessible AI compute. Their infrastructure investment decisions (where to build, what chips to buy) shape national compute capacity.
A supply chain relationship where only one company produces a critical input. ASML for EUV machines, TSMC for advanced-node foundry services, and Zeiss for EUV optics are sole-source dependencies.
Related: Chokepoint · Supply chain edge
Governance relevance
Sole-source dependencies are the most concentrated chokepoints in the compute supply chain, and therefore the points where export controls and policy interventions have maximum effect. No alternative supplier exists at any price.
The concentration page identifies supply chain steps with sole-source exposure.
A directed relationship between two entities in the supply chain graph. Edges carry attributes: relationship type (supplier/customer/partner), value ($M), and source (FactSet, SEC filing).
Related: Supply chain centrality · FactSet Revere
Governance relevance
The supply chain graph surfaces hidden dependencies and cascade paths. Two companies that appear unrelated may share a critical upstream supplier, creating correlated exposure invisible from a bilateral view.
Herfindahl-Hirschman Index: the sum of squared market shares. Ranges from near 0 (perfect competition) to 10,000 (monopoly). Calculated as HHI = sum(s_i^2) where s_i is each firm's percentage market share.
Related: Sole-source dependency · Chokepoint
Governance relevance
The US Department of Justice considers HHI above 2,500 as highly concentrated. Several AI compute supply chain segments (EUV lithography, advanced packaging, HBM) have HHI values above 5,000.
EUV lithography HHI = 10,000 (ASML monopoly).
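A minimal sketch of the calculation from the definition above; the three-firm split in the second example is hypothetical.

```python
def hhi(market_shares_pct: list[float]) -> float:
    """Herfindahl-Hirschman Index: sum of squared percentage shares."""
    return sum(s * s for s in market_shares_pct)

print(hhi([100.0]))             # 10000.0 -- a single-supplier monopoly (e.g., EUV)
print(hhi([56.0, 25.0, 19.0]))  # 4122.0  -- hypothetical three-firm split
```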
A supply chain node where concentration or sole-source dependency creates a governance-relevant bottleneck. Measured by HHI, node centrality, and substitutability assessment.
Related: Sole-source dependency · HHI · Cascade
Governance relevance
Chokepoints are where export controls have maximum leverage and where disruptions have maximum impact. They serve as the natural targets for risk assessment and strategic policy intervention.
The propagation of a disruption through supply chain relationships. Modeled via breadth-first search across the relationship graph with decay factors based on supply share and criticality. Criticality can be editorial (1-10 replaceability scale) or blended with FactSet 3-month price correlation where available. Blend formula: adjustedCriticality = w × 5(1+r) + (1-w) × editorial.
Related: Chokepoint · Supply chain edge · BFS
Governance relevance
Cascade analysis surfaces the second- and third-order effects of concentration in the supply chain. Removing or constraining a single node may reshape access for dozens of downstream entities; the propagation depth indicates how much policy leverage a given chokepoint provides.
The cascade simulator models dependency propagation across 18,500+ supply chain edges.
FactSet's proprietary supply chain relationship database. Contains bilateral supplier-customer links with relationship value ($M), relationship rank, and 3-month rolling stock price correlation.
Related: Supply chain edge · Price correlation
Governance relevance
FactSet Revere is the institutional-grade data source used by financial analysts to map supply chain dependencies. Scrutica uses it as the primary source for supply chain edges.
The 3-month rolling Pearson correlation between two companies' stock prices, from FactSet Revere. Available on ~78% of FactSet edges. Used to modulate cascade criticality scores when market correlation weighting is enabled. Maps r onto the [0, 10] range via 5 × (1 + r) and blends with editorial scores at a configurable weight (default 0.5). WRDS edges and unlisted companies fall back to editorial criticality.
Related: Cascade · FactSet Revere · Supply chain edge
Governance relevance
Incorporated into the cascade simulator as an empirical edge weight. High positive correlation confirms that the market sees tight coupling; negative correlation suggests the supply relationship is not the primary driver of either company's stock price. The blend avoids over-relying on either signal.
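A sketch of the blend formula as described above, assuming scores and weights are plain floats; the function name is illustrative.

```python
from typing import Optional

def adjusted_criticality(editorial: float,
                         r: Optional[float],
                         w: float = 0.5) -> float:
    """Blend editorial criticality (1-10) with market correlation.

    5 * (1 + r) maps the Pearson correlation r in [-1, 1] onto the
    0-10 range, then mixes with the editorial score at weight w.
    Edges without a correlation (WRDS edges, unlisted companies)
    fall back to the editorial score.
    """
    if r is None:
        return editorial
    return w * 5.0 * (1.0 + r) + (1.0 - w) * editorial

print(adjusted_criticality(8.0, 0.6))   # 8.0: 0.5*5*(1.6) + 0.5*8
print(adjusted_criticality(8.0, None))  # 8.0: editorial fallback
```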
Expert-assessed score (0-1) indicating how easily a disrupted supplier can be replaced. 0 = no substitute exists (ASML for EUV); 1 = drop-in replacements available. Authority tier 3 (analyst assessment).
Related: Cascade · Decay rate · Chokepoint
Governance relevance
Substitutability determines cascade severity. Low substitutability at a chokepoint means disruption propagates with minimal attenuation.
Export Administration Regulations: the US legal framework governing export of dual-use items. Administered by BIS (Bureau of Industry and Security). Key classifications: 3A090 (advanced computing ICs), 4A090 (computers containing 3A090 chips).
Related: Entity List · License exception · BIS
Governance relevance
The EAR is the primary tool the US uses to restrict access to advanced AI chips. It determines which chips require licenses for which destinations, and which are prohibited entirely.
A BIS-maintained list of foreign entities subject to specific export restrictions. Inclusion requires a license for most items subject to the EAR, with a presumption of denial for advanced computing.
Related: EAR · BIS · License exception
Governance relevance
Entity List designation is the strongest export control tool short of a full embargo. It can cut a company off from the US technology supply chain entirely.
Scrutica cross-references BIS Entity List entries against the supply chain graph.
Bureau of Industry and Security: the US Department of Commerce agency that administers export controls. Issues licenses, maintains the Entity List, and promulgates rules under the EAR.
Related: EAR · Entity List
Governance relevance
BIS is the institutional center of US technology export control policy. Its rule-making actions directly determine which countries and companies can access advanced AI chips.
An authorization under the EAR that permits export without an individual license when specified conditions are met. Examples: License Exception TSR (technology/software under restriction), ACE (authorized cybersecurity exports).
Related: EAR · BIS
Governance relevance
License exceptions create controlled pathways for technology transfer. Their scope and conditions are frequently modified; changes to license exceptions can reclassify thousands of transactions overnight.
Transfer of controlled technology to a foreign national within the United States. Governed by the same EAR classifications as physical exports.
Related: EAR · BIS
Governance relevance
Deemed export rules affect AI research collaboration. A US university lab with foreign researchers may need export licenses for work involving controlled computing technology.
A multilateral export control regime with 42 participating states. Sets baseline control lists for conventional arms and dual-use technologies, including semiconductor equipment and advanced computing.
Related: EAR · End-use controls
Governance relevance
Wassenaar provides the multilateral framework that US, EU, and Japanese export controls build on. Countries outside Wassenaar may serve as transshipment points for controlled technology.
The coordination gaps page identifies countries in the supply chain but outside Wassenaar.
Export restrictions based on the intended application rather than the item's technical specifications. Applied when there is knowledge or reason to believe the item will be used for a prohibited purpose.
Related: EAR · Entity List
Governance relevance
End-use controls are harder to enforce than technical threshold controls because they require knowledge of the buyer's intentions. They are the primary tool for restricting mature-node chip exports.
EAR provision that exempts foreign-made items containing less than a specified percentage (typically 25%) of controlled US-origin content. Threshold varies by destination and item.
Related: EAR · Entity List
Governance relevance
The de minimis rule determines how far US export controls extend into foreign supply chains. A lower de minimis threshold (e.g., 10% for Entity List destinations) extends US regulatory reach.
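A schematic sketch only: real de minimis determinations follow the valuation rules of 15 CFR 734.4, which are not modeled here, and the function name and inputs are assumptions.

```python
def subject_to_ear(controlled_us_content_value: float,
                   item_total_value: float,
                   threshold_pct: float = 25.0) -> bool:
    """Schematic de minimis test for a foreign-made item.

    If controlled US-origin content exceeds the applicable threshold
    (25% for most destinations, 10% for some), the foreign-made item
    becomes subject to the EAR. Real calculations involve valuation
    and classification rules not captured in this ratio.
    """
    pct = 100.0 * controlled_us_content_value / item_total_value
    return pct > threshold_pct

print(subject_to_ear(30.0, 100.0))                     # True (30% > 25%)
print(subject_to_ear(15.0, 100.0, threshold_pct=10.0)) # True under a 10% rule
```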
Tera operations per second: the performance metric used in US export control rules to determine whether a chip is "advanced." The October 2022 rule controlled chips meeting both a processing-performance threshold of 4,800 TOPS and an aggregate interconnect bandwidth of 600 GB/s; the October 2023 update replaced this with Total Processing Performance (TPP) thresholds.
Related: EAR · TFLOP/s
Governance relevance
TOPS thresholds are the technical criteria that divide freely exportable chips from restricted ones. Chip designers may deliberately design below threshold to avoid export controls.
General-purpose AI: the category defined by the EU AI Act for models trained on broad data that can perform a wide range of tasks. Models trained above 10^25 FLOP are classified as GPAI with systemic risk.
Related: FLOP · Training compute · EU AI Act
Governance relevance
GPAI classification triggers transparency and reporting obligations under the EU AI Act. The 10^25 FLOP threshold determines which models face the most stringent requirements.
Under the EU AI Act, the risk classification for GPAI models with "high-impact capabilities" (currently proxied by training compute above 10^25 FLOP). Triggers additional obligations: red-teaming, incident monitoring, cybersecurity.
Related: GPAI · FLOP · EU AI Act
Governance relevance
Systemic risk classification is the strictest tier of EU AI regulation. Whether a model qualifies depends on its training compute, which depends on the facility's compute capacity.
Regulation (EU) 2024/1689: the European Union's comprehensive AI regulation. Establishes risk-based classification, GPAI transparency requirements, and compute thresholds for systemic risk designation.
Related: GPAI · Systemic risk · FLOP
Governance relevance
The EU AI Act is the first major jurisdiction to use training compute as a regulatory threshold. Its 10^25 FLOP line directly connects facility-level compute capacity to regulatory compliance.
Scrutica's FLOP Compliance Calculator checks estimates against EU AI Act thresholds.
The use of compute as a governance lever for AI policy. Based on the premise that compute is measurable, excludable, and concentrated — properties that make it a tractable intervention point.
Related: FLOP · Training compute · GPAI
Governance relevance
Compute governance is Scrutica's core domain. The platform provides the data infrastructure that compute governance requires: who has how much compute, where, sourced from whom, under what regulatory constraints.
Technology with both civilian and military applications. All advanced AI chips are dual-use; the same GPU that trains a language model can train a military targeting system.
Related: EAR · Wassenaar Arrangement
Governance relevance
The dual-use nature of AI chips is the legal foundation for export controls. Wassenaar, EAR, and EU dual-use regulations all regulate AI-relevant technology on dual-use grounds.
In the AI governance context: the spread of advanced AI capabilities to additional actors. Distinct from weapons proliferation but often analyzed with similar frameworks (supply-side controls, demand-side monitoring).
Related: Dual-use · Compute governance
Governance relevance
Compute proliferation tracking (who is acquiring what compute capability, and where) is one of Scrutica's core use cases. Export controls are the primary anti-proliferation tool.
National programs to develop domestic AI compute capacity, independent of foreign hyperscalers. Includes government-funded GPU clusters, national cloud initiatives, and indigenous chip development.
Related: Compute governance · GPU cluster
Governance relevance
Two questions define each sovereign program: execution (what fraction of announced spending reaches operational infrastructure) and supply chain exposure (what fraction of the program's hardware falls under allied jurisdictional authority via export controls, the Foreign Direct Product Rule (FDPR), or Entity List designations).
Scrutica tracks 30+ sovereign AI programs with separate government and private investment figures.
Scrutica metric: the ratio of deployed/disbursed spending to announced/committed spending for sovereign AI programs. Reality ratio 0.1 means 10% of announcements have reached operational deployment.
Related: Sovereign AI
Governance relevance
The reality ratio measures execution: what fraction of a sovereign AI commitment has reached operational deployment. A country with $10B announced and 5% reality ratio has $500M of actual compute. The complementary metric is allied governance reach, which measures supply chain exposure to allied export control jurisdiction.
The sovereign AI dashboard tracks reality ratios for every country with a national AI program.
A range of values within which the true value is estimated to fall with a stated probability (typically 90%). Scrutica reports estimation bounds rather than point estimates where multiple estimation paths are available.
Related: Estimation bounds · Authority tier
Governance relevance
Confidence intervals communicate honest uncertainty. A wide interval (e.g., 50-200 PFLOP/s) tells a policymaker the estimate is uncertain; a narrow one (e.g., 80-100 PFLOP/s) supports stronger conclusions.
The documented origin and transformation history of a data point. Every Scrutica value carries data_source, source_url, and is_estimated fields.
Related: Authority tier · is_estimated
Governance relevance
Provenance enables citation and verification. A policy researcher can trace any number on the platform back to its original source and assess whether to trust it.
A boolean flag on every Scrutica data point. True if the value is derived or inferred; false only if it comes from a primary source with direct measurement.
Related: Authority tier · Provenance
Governance relevance
The is_estimated flag distinguishes verified facts from modeled estimates. Policymakers should cite estimated values with appropriate caveats.
The range produced by cross-validating multiple estimation paths. When hardware, power, and cost paths agree within 15%, confidence is high; when they diverge, the bounds are wide.
Related: Confidence interval · Authority tier
Governance relevance
Estimation bounds are Scrutica's mechanism for honest uncertainty. They prevent the false precision that undermines trust in data-driven governance tools.
The FLOP engine shows cross-validated estimation bounds for each facility.
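A minimal sketch of the cross-validation logic; measuring "agreement" as spread relative to the midpoint is an assumption, since the entry does not specify the metric, and the path values below are illustrative.

```python
def cross_validate(estimates: dict[str, float], tolerance: float = 0.15):
    """Cross-validate independent estimation paths.

    Returns the (low, high) bounds across paths and whether they
    agree within the tolerance (15% per the description above),
    measured as spread relative to the midpoint.
    """
    low, high = min(estimates.values()), max(estimates.values())
    midpoint = (low + high) / 2.0
    agree = (high - low) / midpoint <= tolerance
    return (low, high), agree

paths = {"hardware": 95e15, "power": 88e15, "cost": 102e15}  # FLOP/s, illustrative
bounds, high_confidence = cross_validate(paths)
print(bounds, high_confidence)  # (8.8e+16, 1.02e+17) True
```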
Breadth-first search: a graph traversal algorithm used in the cascade simulator to model disruption spreading hop-by-hop through the supply chain. Each hop applies a decay factor based on edge substitutability.
Related: Cascade · Supply chain edge
Governance relevance
BFS propagation models how a disruption at one point in the supply chain ripples outward. The number of hops before the impact becomes negligible indicates how resilient the supply chain is to that disruption.
The per-hop multiplier applied to disruption impact during cascade propagation. Calculated as 1 minus the substitutability score, clamped at a maximum of 0.8, so every hop attenuates impact by at least 20%. A lower decay factor means faster attenuation and a more resilient supply chain.
Related: BFS propagation · Cascade
Governance relevance
Decay rates encode how easily a disrupted supplier can be replaced. A high decay rate (hard to substitute) means the disruption cascades further before dissipating.
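A toy implementation of the propagation described in these entries, combining BFS with the per-hop decay factor min(1 − substitutability, 0.8); the graph structure, cutoff, and revisit handling are assumptions, not Scrutica's simulator.

```python
from collections import deque

def cascade_impact(graph: dict[str, list[tuple[str, float]]],
                   origin: str,
                   cutoff: float = 0.05) -> dict[str, float]:
    """BFS disruption propagation with per-edge decay factors.

    graph maps each node to (downstream_node, substitutability) edges.
    Each hop multiplies the carried impact by min(1 - substitutability,
    0.8): fully substitutable suppliers pass nothing through, and
    irreplaceable ones pass at most 80%. Propagation stops once impact
    falls below the cutoff.
    """
    impact = {origin: 1.0}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for neighbor, substitutability in graph.get(node, []):
            passed = impact[node] * min(1.0 - substitutability, 0.8)
            if passed > impact.get(neighbor, 0.0) and passed >= cutoff:
                impact[neighbor] = passed
                queue.append(neighbor)
    return impact

# Hypothetical three-tier chain: lithography vendor -> foundry -> fabless designer.
supply = {"ToolVendor": [("Foundry", 0.0)], "Foundry": [("Designer", 0.3)]}
print(cascade_impact(supply, "ToolVendor"))
# {'ToolVendor': 1.0, 'Foundry': 0.8, 'Designer': 0.56}
```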
Decomposing a company's total capital expenditure into components: land, buildings, power infrastructure, networking, GPU/accelerator hardware, and other IT equipment.
Related: Estimation bounds · FLOP
Governance relevance
Capex disaggregation enables the cost-based estimation path for facility-level compute. When a company reports $5B in data center capex, disaggregation estimates how many GPUs that purchases.
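A sketch of the cost path under loudly assumed inputs: the accelerator share of capex and the per-GPU price below are illustrative placeholders, not Scrutica's figures.

```python
def gpus_from_capex(capex_usd: float,
                    gpu_share: float = 0.50,       # assumed accelerator share of capex
                    unit_cost_usd: float = 30_000  # assumed per-GPU cost
                    ) -> int:
    """Cost-path estimate: reported capex -> implied GPU count.

    Both defaults vary widely by deployment, which is why the cost
    path yields wide estimation bounds.
    """
    return int(capex_usd * gpu_share / unit_cost_usd)

# $5B of data center capex at a 50% accelerator share and ~$30K/GPU
# implies roughly 83K GPUs.
print(gpus_from_capex(5e9))  # 83333
```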
One petaFLOP (10^15 FLOP/s) sustained for one day (86,400 seconds) = 8.64 x 10^19 FLOP. Used as the unit for the Compute Cost Index.
Related: FLOP · TFLOP/s
Governance relevance
Petaflop-days normalize compute across different hardware and providers so that cost comparisons are meaningful. The $/petaFLOP-day metric shows how much it costs to train AI in different jurisdictions.
The cost index reports compute pricing in $/petaFLOP-day across cloud providers and regions.
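A sketch of the normalization, combining the petaFLOP-day constant above with the H100 peak and 40% MFU figures from earlier entries; the $2.50/GPU-hour rental price is hypothetical.

```python
PFLOP_DAY_FLOP = 1e15 * 86_400  # 8.64e19 FLOP, per the definition above

def cost_per_petaflop_day(price_per_gpu_hour: float,
                          peak_pflops: float = 0.9895,  # H100 SXM, FP16 dense
                          mfu: float = 0.40) -> float:
    """Normalize a GPU rental price to $/petaFLOP-day."""
    sustained_flops = peak_pflops * 1e15 * mfu          # FLOP/s actually delivered
    seconds_needed = PFLOP_DAY_FLOP / sustained_flops   # time to deliver one petaFLOP-day
    return price_per_gpu_hour * seconds_needed / 3600.0

# At a hypothetical $2.50/GPU-hour, an H100 at 40% MFU delivers one
# petaFLOP-day for roughly $152.
print(round(cost_per_petaflop_day(2.50), 2))  # 151.59
```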