Read the passage and mark the letter A, B, C or D on your answer sheet to indicate the best answer to each of the following questions from 1 to 10.
Frontier AI – highly capable, general-purpose systems – has catalysed calls for “compute caps,” tiered thresholds that condition or limit access to training resources. The impetus is prudential: if claims about development and use cannot be verified, rivalry may spiral into an arms-race dynamic. Here, verification means attesting to training properties, safety testing, and deployment footprints without divulging proprietary artefacts. [I] If routine, trusted attestation emerged, it could temper escalation, enable fairer diffusion of benefits, and normalise accountable practices across jurisdictions.
Proponents argue that compute is a tractable intervention point. It is necessary (frontier training is compute-hungry), detectable (resource-intensive clusters), excludable (physical and licensable), quantifiable (operations, memory, interconnects), and concentrated (few firms control cutting-edge chips and hyperscale facilities). This concentration reduces the number of gatekeepers a verification regime must enlist. [II] On this view, compute caps operationalise verification: calibrated thresholds trigger disclosures, licences, or denials, aligning incentives for compliance while retaining room for research and small-scale experimentation.
A complementary design is compute-based reporting: model developers pre-notify a public authority before large training runs; compute providers verify notification before provisioning; and cryptographic or hardware attestations log usage. If verification remains patchy and parochial, mutual suspicion will metastasise and erode restraint. [III] Hardware-enabled mechanisms (tamper-evident power/bandwidth monitors, enclave-based attestations) could verify properties of training or deployment without exposing model internals, creating auditable footprints that make caps enforceable and proportionate.
Challenges persist. The transparency-security trade-off is acute: revealing locations or capacities may leak sensitive signals. Mitigations include confidential computing, multilateral audits to rule out backdoors, and neutral data centres jointly secured by rival parties. Retrofitted mechanisms can help in the near term; next-generation chips might embed verifiability by design. [IV] Meanwhile, adversaries could route around controls via alternative jurisdictions, so any cap-and-verify architecture must prioritise interoperability, supply-chain integrity, and credible, cross-border enforcement.
(Adapted from United Nations Secretary-General’s Scientific Advisory Board, “Verification of Frontier AI,” June 2025)
Question 1. According to paragraph 2, compute is deemed “excludable” because ______.
A. licensing frameworks are universally harmonised across all major geopolitical blocs today
B. software algorithms remain inherently opaque and thus cannot be independently audited
C. its physical nature allows access to be restricted through hardware control and policy
D. small developer collectives can always circumvent controls by pooling dispersed resources


