The library in town closes at six. The girl arrives at 5:47 with a backpack that looks like it survived something. She has thirteen minutes to finish the thing that’s due at midnight—not an essay, which she could write on her phone in the parking lot, but a model that needs a GPU. The assignment assumes she has one. The teacher who wrote it probably has three.
She opens the school’s cloud account and sees the same thing she saw yesterday: Queue position: 1,847. Estimated wait: 11 hours. She refreshes. The number gets worse. Someone with a research grant just submitted a job that will run until Thursday. She closes the laptop. She’ll take the late penalty. She’s gotten good at taking the late penalty.[^1]
This is not a story about a girl who needs help. This is a story about a gate that shouldn’t exist.
Three miles away, a clinic is trying to fine-tune a triage model on patient records that never leave the building. The attending physician wrote the code herself, scavenged the compute credits from a colleague who owed her a favor, and ran the job at 3 a.m. when the queue was shorter. It worked. The model is faster than she is at spotting sepsis in the first six hours. She wants to share it with the county network—eight clinics, all underfunded, all seeing the same patients rotate through different doors.
But she can’t. Because the model was trained on borrowed credits, she doesn’t own the weights. Because she doesn’t own the weights, she can’t publish them. Because she can’t publish them, the other clinics will reinvent the same wheel, or more likely, they won’t, and patients will continue to get sicker while they wait for tests that a model could have ordered five hours earlier.[^2]
Ten miles north, a city planner is trying to simulate flood risk under three infrastructure scenarios. The model is open-source. The data is public. The compute is not. He submitted the job to a university cluster six weeks ago. It’s still in the queue. Meanwhile, the city council is voting on a $40 million bond based on a consultant’s report that used a simpler model and older data, because the consultant had access and he didn’t.[^3]
These are not edge cases. This is the median experience of trying to do technical work without institutional access to compute. And the gap is widening.
The Thesis, Stated Plainly
Compute is now a civic baseline—not a luxury, not a perk, but a precondition for participation. We reserve the word “rights” for tools that let people learn, speak, move, and connect. Compute now sits alongside them. Not because it is glamorous, but because it is ordinary: homework that needs a GPU, a clinic that needs a model, a small firm that needs to search contracts, a city office that needs to simulate traffic, heat, and flood.[^4]
The claim is simple: every person should have a baseline claim on compute, with scaled claims for public-interest institutions. Not coupons. Not charity. Not a lottery. A floor.
The plan is equally plain: federate existing capacity, set neutral scheduling rules, fund growth with narrow and durable mechanisms, and post the dials where everyone can see them. This is not science fiction. The infrastructure mostly exists. What’s missing is the constitutional commitment to run it as a public option rather than a private favor.
The justification is empirical, not romantic. Concentration already happened. A handful of hyperscalers own most of the world’s accessible compute. Access is radically uneven—not just between rich and poor countries, but between individuals within the same city, the same school, sometimes the same household. Energy is binding; every new data center competes with neighborhoods for power and water. And queues are policy, whether we admit it or not. The order in which jobs run, who gets bumped, who waits forever—these are political decisions dressed up as technical defaults.[^5]
We have precedents. Libraries gave us shared access to books when books were scarce and expensive. Universal-service broadband subsidies recognized that connection is infrastructure. Shared supercomputers at national labs proved that pooled compute can serve research at scale. Open-science commons like the European Open Science Cloud demonstrate that federated capacity can work across borders and institutions.[^6] We are not starting from zero. We are starting from a working prototype that was never given the resources or mandate to become universal.
The Right, Sketched Tightly
A right worth having must be operable. The baseline entitlement is enough compute—and the storage, models, and datasets to use it—to learn, research, create, and participate in civic life. What “enough” means will shift as norms do, but today a defensible floor might be: 50 GPU-hours per month (normalized to a standard performance baseline, since an H200 hour differs from an L4 hour), 200 CPU-hours, 100 GB of persistent storage, standard egress, and access to curated datasets and pre-trained models that don’t require a data-use agreement written by lawyers.[^7]
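The normalization caveat matters enough to make concrete. A minimal sketch of how a floor stated in reference-device hours could be accounted — the device names and relative-throughput ratios below are invented for illustration, not benchmarks:

```python
# Illustrative sketch: normalize accelerator hours to a reference device
# so the monthly floor stays meaningful across hardware generations.
# The throughput ratios here are placeholder assumptions.

RELATIVE_THROUGHPUT = {  # reference device = 1.0
    "reference": 1.0,
    "small_gpu": 0.25,   # e.g. an entry-level accelerator (assumed ratio)
    "large_gpu": 4.0,    # e.g. a datacenter accelerator (assumed ratio)
}

MONTHLY_FLOOR_HOURS = 50.0  # the floor, in reference-device hours

def normalized_hours(device: str, wall_hours: float) -> float:
    """Convert wall-clock hours on `device` into reference-device hours."""
    return wall_hours * RELATIVE_THROUGHPUT[device]

def remaining_allocation(usage: list[tuple[str, float]]) -> float:
    """Remaining floor after a list of (device, wall_hours) jobs."""
    used = sum(normalized_hours(device, hours) for device, hours in usage)
    return MONTHLY_FLOOR_HOURS - used

# Ten wall-hours on a fast GPU consume forty reference-hours of the floor.
print(remaining_allocation([("large_gpu", 10.0)]))  # → 10.0
```

The same ledger works whatever unit the floor is denominated in (TFLOP·h, reference-GPU-hours); the point is that the entitlement tracks delivered compute, not wall-clock time on whichever card happened to be free.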
Institutions with public missions receive scaled entitlements tied to need and accountability: schools, libraries, clinics, local governments, accredited labs, and nonprofits with transparent governance. The multiplier might be 100x or 1000x depending on size and mission, but the principle is the same—capacity follows purpose, not wealth.
The scope includes accelerators (GPUs, TPUs), CPUs, memory, storage, network egress, and access to maintained software stacks that actually work. It also includes curated datasets with clear provenance and standard models with model cards that explain what they’re good for and what they’ll break. None of this matters if you have to be a systems engineer to use it.[^8]
The limits are blunt: lawful use only. No training malware. No generating child exploitation material. No using the public option to run a for-profit service that should be buying its own capacity. Privacy and safety controls apply—isolation between users, audit logs for consequential actions, sandboxes for risky workloads. And critically, energy and water budgets are part of the right, not afterthoughts. Every job carries a carbon cost and a water cost. Those costs are visible, measured, and constrained by the same rules that govern access.[^9]
The last constraint is not cosmetic. Data centers now consume roughly 1.5-2% of global electricity (415-460 TWh in recent IEA estimates, projected to exceed 1,000 TWh by 2026) and measurable fractions of fresh water in the regions where they’re sited.[^10] Treating compute as a right means treating its resource footprint as a public cost that must be measured, minimized, and made legible. The tools for this exist: Power Usage Effectiveness (PUE) for electricity efficiency, Water Usage Effectiveness (WUE) for water intensity, Energy Reuse Effectiveness (ERE) for heat recovery, and Carbon Usage Effectiveness (CUE) for emissions intensity.[^11] These are not aspirational. They are engineering standards that operators can measure and auditors can verify.
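These ratios are simple enough to state in code. A sketch following The Green Grid’s definitions (total facility energy over IT energy, water per IT kWh, carbon per IT kWh); the sample meter readings are invented:

```python
# Sketch: the efficiency ratios named above, computed from facility
# meter readings. Definitions follow The Green Grid conventions;
# the example numbers are illustrative only.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy (>= 1.0)."""
    return total_facility_kwh / it_kwh

def wue(water_liters: float, it_kwh: float) -> float:
    """Water Usage Effectiveness: liters of water per IT kWh."""
    return water_liters / it_kwh

def cue(co2_kg: float, it_kwh: float) -> float:
    """Carbon Usage Effectiveness: kg CO2-equivalent per IT kWh."""
    return co2_kg / it_kwh

# One day of (made-up) meter readings.
it_energy = 100_000.0        # kWh delivered to IT equipment
facility_energy = 130_000.0  # kWh drawn by the whole facility
print(round(pue(facility_energy, it_energy), 2))  # → 1.3
```

A PUE of 1.3 means 30% overhead beyond the IT load itself — the kind of number that only stays honest if it is published as a time series rather than an annual average.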
Design: A Compute Public Option (CPO)
Think fabric, not fortress. The goal is stitched capacity—a quilt of public clusters, federated partners, and commercial bridges that presents a single interface to users while distributing the work across whoever can do it best.[^12]
The architecture has three layers:
The Civic Cloud is the core public infrastructure. It handles the baseline entitlement and a meaningful slice of institutional demand. It runs open APIs, shared identity, federated storage, and consistent logging. It’s the backbone, but it’s not the whole body. Think of it as the public hospitals in a healthcare system—essential, high-volume, always there, but not the only option.[^13]
Federated partners are existing clusters that join the network: university high-performance computing centers, national laboratories, municipal data centers, and certified partitions within commercial clouds. They join via a trust framework—a set of standards for identity, scheduling, security, and reporting—that lets credentials and entitlements follow users across sites.[^14] You log in once. Your account, your storage, your job history, and your remaining allocation travel with you. The scheduler knows who you are and what you’re allowed to do, regardless of where the silicon lives.[^15]
Where public capacity cannot serve in time, edge grants redeemable on certified commercial clouds bridge the gap. These are vouchers that carry the same neutrality and reporting requirements as public compute. You spend them on AWS, Google Cloud, Azure, or whoever meets the bar, but the terms are set by the public option, not the vendor. This keeps the public fabric from becoming a bottleneck while preventing commercial clouds from fragmenting access into a hundred incompatible proprietary systems.[^16]
This is mostly stitching. Countries already operate supercomputers—the European High-Performance Computing Joint Undertaking (EuroHPC) coordinates petascale clusters across member states. The U.S. National Science Foundation’s ACCESS program provides coordinated access to dozens of university and national lab clusters. The European Open Science Cloud (EOSC) federates research data and compute across borders.[^17] What’s missing is the political commitment to treat this infrastructure as a right rather than a research amenity, and the funding to extend it beyond academia.
The Scheduler Is the Constitution
Rights live or die in the queue. Everything else—hardware, software, governance—is background. The queue is where the right becomes real or reveals itself as theater.[^18]
Imagine a queue that’s legible. You can see where you are, why you’re there, and what would need to change for you to move. When someone jumps ahead, you can see the reason: emergency public health modeling, disaster response, something with stakes high enough to justify the preemption. You receive a credit—not an apology, but a tangible claim on future priority. The next time you submit, you go first. The system remembers.[^19]
The queue becomes a kind of shared calendar of necessity. Heavy users wait longer because they’ve already consumed their share; light users move fast because they haven’t. New users—students, researchers just starting, civic groups trying their first simulation—run immediately, because in a public option, first contact can’t require six months of institutional credibility.[^20]
Jobs declare their flexibility: immediate (must start now), flexible within a day, very flexible, location-agnostic. The scheduler uses this to shift work to cleaner grid hours and less water-stressed regions. If your job gets moved to run at 3 a.m. when the wind is blowing, you see the carbon saved. If you’re bumped for a sustainability constraint, you earn priority credits just like emergency preemptions. The resource costs become visible and governable, not hidden externalities.[^21]
What makes this different from current queues is not the algorithm—fair-share scheduling with decay weights has existed in high-performance computing for decades—but the constitutional commitment to make the algorithm public, auditable, and binding on operators. The queue can’t be overridden by a phone call from someone important. The queue can’t have hidden fast lanes for preferred partners. The queue is the enforcement mechanism, and if it can be bypassed, the whole edifice is performance.[^22]
User-visible accounting becomes the dignity layer. You see your allocation, your usage, your position, your estimated start time. If you were preempted, you see your credits accumulating. If you waited longer than the published service level, you see the explanation and the compensation. This is not surveillance; it’s accountability in both directions. The system can prove it treated you fairly. You can prove it didn’t.[^23]
In this world, the queue is not neutral—no queue ever is—but it’s predictably non-neutral in ways that everyone can see and contest. That might be the closest thing to fairness that infrastructure can achieve.
Instruments of Visibility
Imagine infrastructure that shows its work. Not quarterly reports buried in PDFs, but daily dials—the same few measurements, published in the same format, visible to anyone who cares to look.[^24]
The access metrics tell you whether the system is usable or merely available: median wait time by class, preemption rate, abandonment rate (jobs canceled because the wait was too long). If the top decile of users consistently consumes ninety percent of capacity, the decay weights are broken. If new users wait hours for their first job during normal operation, the baseline allocation is too stingy.[^25]
The fairness metrics expose whether allocations match reality: share versus usage by entity, new-user time-to-first-GPU, preemption credits issued and redeemed. These measurements make broken promises visible—not as moral failures but as engineering problems with engineering solutions.[^26]
The sustainability metrics carry the resource cost into view: Power Usage Effectiveness, Water Usage Effectiveness, grams of CO₂ equivalent per kilowatt-hour, location-based and marginal emissions. These aren’t aspirations; they’re measurements that any data center can take and that third parties can audit.[^27] If these numbers don’t decline over time, you’re adding capacity faster than you’re cleaning it.
The reliability metrics show whether the system can be trusted: weekly uptime by class, error-budget burn rates, the gap between promised service levels and delivered performance.[^28]
What’s radical is not the measurements themselves—most of these already exist in well-run technical systems—but the commitment to publish them daily, in plain view, where journalists and researchers and angry citizens can analyze trends and demand explanations. The fastest way to lose trust is to hide the queue. The fastest way to build it is to make hiding impossible.[^29]
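Two of the daily access dials can be computed directly from a job log. A sketch under an assumed log schema (submitted/started timestamps reduced to a wait in minutes, plus a canceled flag); the sample log is invented:

```python
# Sketch: computing two daily access dials from a job log.
# The log schema and sample entries are assumptions for this sketch.
from statistics import median

jobs = [  # wait_min is None when the job never started
    {"wait_min": 4, "canceled": False},
    {"wait_min": 12, "canceled": False},
    {"wait_min": None, "canceled": True},  # user gave up before start
    {"wait_min": 45, "canceled": False},
]

def median_wait(log: list[dict]) -> float:
    """Median wait in minutes over jobs that actually started."""
    return median(j["wait_min"] for j in log if j["wait_min"] is not None)

def abandonment_rate(log: list[dict]) -> float:
    """Fraction of jobs canceled before they ever ran."""
    return sum(j["canceled"] for j in log) / len(log)

print(median_wait(jobs))       # → 12
print(abandonment_rate(jobs))  # → 0.25
```

The discipline is not the arithmetic; it is publishing the same dials, from the same raw logs, every day, so the trend line cannot be massaged after the fact.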
The Problem of Heat and Thirst
Data centers are furnaces wrapped in cooling systems. They consume roughly 1.5-2% of global electricity. Water consumption is less concentrated globally but regionally significant: data centers in Arizona, Ireland, Singapore, and Chile have all triggered local water-use controversies, infrastructure moratoria, and community conflict.[^10] A public compute option that ignores this becomes a climate liability dressed as a public good.
Imagine instead that capacity grows toward clean generation rather than forcing clean generation to chase capacity. New clusters site where there’s uncommitted renewable power and grid headroom, not where permits are easiest or land is cheapest. When a location is chosen, the rationale is public: this site has 50 MW of uncommitted wind completing in Q3, and the local grid has winter headroom. The alternative sites had tighter grids and more fossil dependence.[^30]
Imagine efficiency metrics—PUE, WUE, carbon intensity—audited by third parties with public instrumentation protocols and published as raw time-series data, not annual averages that hide the periods when everything breaks down.[^31] These aren’t aspirational commitments; they’re measurements that make broken promises visible before they compound.
Imagine heat reuse where geography permits. Data centers in Odense supply warmth to thousands of homes. Data centers in Phoenix could do the same in winter. The barrier is not technical—it’s that most siting decisions optimize for fiber and power cost, not for proximity to heat demand.[^32] A public option could choose differently because its objectives are different.
Imagine workloads labeled by flexibility—immediate, flexible within a day, very flexible, location-agnostic—so schedulers can move batch jobs to cleaner hours and regions without breaking latency promises. Carbon-aware scheduling already exists at hyperscale; the tools are proven. The question is whether public compute inherits this capability or leaves it as a luxury for those who can afford private clouds.[^33]
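The core of carbon-aware placement is a small optimization. A sketch: given a per-hour carbon-intensity forecast (the values below are invented) and a job’s declared flexibility window, start it in the cleanest stretch:

```python
# Sketch: carbon-aware start-time selection for a flexible job.
# The forecast values (gCO2/kWh per hour) are made up for illustration.

def best_start_hour(forecast: list[float], flex_hours: int, duration: int) -> int:
    """Start hour minimizing average carbon intensity over the job's run,
    among starts allowed by the declared flexibility window."""
    latest = min(flex_hours, len(forecast) - duration)
    candidates = range(latest + 1)
    return min(candidates,
               key=lambda t: sum(forecast[t:t + duration]) / duration)

# An overnight wind ramp makes hours 3-4 the cleanest window.
forecast = [520, 480, 350, 210, 190, 240, 410, 500]
print(best_start_hour(forecast, flex_hours=6, duration=2))  # → 3
```

An `immediate` job is just `flex_hours=0`, which forces a start at hour 0 — the same code handles both, which is why the flexibility label belongs in the job submission, not in a side channel.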
Imagine water accounting as rigorous as energy accounting: WUE published alongside PUE, with basin context that explains what the same ratio means in Finland versus Arizona. Dry cooling in water-stressed watersheds even if it costs 10% more energy, with the trade-off explained plainly.[^34]
The pattern is consistent: make the costs visible, make the trade-offs explicit, make the choices auditable. This doesn’t guarantee perfect decisions, but it prevents the worst ones from hiding behind opacity until it’s too late to correct.
The Durability Problem
Public infrastructure lives or dies on funding that doesn’t wobble. Pilot projects succeed, write glowing reports, then vanish when the grant ends. What would it mean to fund compute the way we fund libraries or roads—not as an experiment, but as a permanent commitment to capacity?[^35]
The precedents exist: universal-service subsidies for telecommunications, public-broadcasting fees, spectrum-license revenues. Narrow levies on stable bases, independently governed, with sunset reviews to prevent permanent extraction. The E-Rate program has run for nearly three decades funding school and library connectivity through a small fee on telecom services.[^36] Its governance has flaws, and the underlying Universal Service Fund faces current legal challenges that could reshape its structure. But the core mechanism—a dedicated revenue stream insulated from annual appropriations battles—has proven more durable than discretionary funding.
The question is not “can this be funded” but “can this be funded in ways that survive budget fights and political turnover.” Appropriations are perpetually vulnerable. Dedicated revenue streams—if narrow, capped, and independently administered—can endure. So can bonded capital expenditure, like infrastructure debt backed by expected public value. The key is making operations automatic rather than discretionary, so the system doesn’t face an existential crisis every budget cycle.
Governance as Binding Constraint
What would it mean for compute access to be enforceable? Not a gift that can be withdrawn, but a legal minimum: N GPU-hours per person per month, institutional tiers with clear criteria, application processes measured in days, not months.[^37]
What would it mean for queues to be neutral by law? No hidden fast lanes, no pay-to-play, no preferential scheduling for entities that make donations or wield political influence. Neutrality as a binding constraint, not an aspiration.[^38]
What would it mean to have a Compute Ombuds—someone with authority to inspect scheduler logs, require explanations, and publish findings? When a user waits sixty hours without explanation, the Ombuds reconstructs the decision: your job was deprioritized because your institution consumed 3x its fair share last week, or the scheduler has a bug and we’re issuing credits. Accountability visible to all parties.[^39]
What would it mean for privacy to be built in, not bolted on? Strong isolation by default, short retention for routine logs, clear consent for telemetry, secure enclaves for sensitive work. Not promises about privacy; architecture that makes violations hard.[^privacy]
These aren’t novel ideas. They’re how we govern utilities that matter. The novelty would be applying them to compute before concentration becomes irreversible.
Failure Modes as Design Constraints
Every system fails. The question is what the failure looks like and whether you can see it coming.[^40]
Energy rebound: Make compute cheap and people use more of it. Total emissions rise even as efficiency improves because demand scales faster than optimization. You meant to democratize access; you accelerated climate damage. The tripwire would be simple: grams of CO₂ per successful job rising while PUE holds flat.
Capture by heavy users: A few institutions consume most capacity. New users can’t get started. The baseline becomes a fiction. The tripwire: top decile consuming more than some threshold for weeks on end, new-user time-to-first-GPU stretching from minutes to hours to days.
Censorship by queue: Controversial research waits forever. No explanation given. Patterns emerge: certain topics, certain affiliations, mysteriously deprioritized. The tripwire: grievances clustering by subject matter, unexplained deferrals that can’t be justified by usage history.
Security failures: Someone trains malware, probes for vulnerabilities, attempts dual-use work that violates export controls. The tripwire: network anomalies, model-behavior flags, the gap between policy and enforcement.
Lock-in: Identity and APIs drift toward a single vendor’s proprietary stack. The federation fragments. Portability becomes theoretical. The tripwire: capacity increasingly bound to systems that can’t interoperate.
Regional neglect: Urban clusters thrive. Rural and remote users wait endlessly. The tripwire: sustained geographic disparities in wait times and success rates.
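The tripwires above are just threshold checks over the published daily metrics. A sketch covering three of them — every threshold and metric value here is illustrative, not a proposed policy:

```python
# Sketch: failure-mode tripwires as explicit checks over daily metrics.
# Thresholds and the sample readings are illustrative assumptions.

TRIPWIRES = {
    "capture": lambda m: m["top_decile_share"] > 0.60,
    "rebound": lambda m: (m["gco2_per_job"] > m["gco2_per_job_baseline"]
                          and m["pue"] <= m["pue_baseline"]),
    "neglect": lambda m: m["rural_wait_min"] > 3 * m["urban_wait_min"],
}

def fired(metrics: dict) -> list[str]:
    """Names of the tripwires triggered by today's metrics."""
    return [name for name, check in TRIPWIRES.items() if check(metrics)]

today = {  # a made-up bad day: concentrated usage, rural waits exploding
    "top_decile_share": 0.72,
    "gco2_per_job": 110.0, "gco2_per_job_baseline": 120.0,
    "pue": 1.25, "pue_baseline": 1.30,
    "rural_wait_min": 95, "urban_wait_min": 12,
}
print(fired(today))  # → ['capture', 'neglect']
```

Writing the tripwires as code rather than prose has a side effect: they can run against the same published metrics feed everyone else sees, so an operator cannot claim a failure mode went unnoticed.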
The pattern is consistent: name the failure mode, define what would make it visible, commit to watching. You won’t catch everything. What matters is being honest about the gaps and publishing what you learn when things break.
What This Enables (Not Promises—Enables)
A teacher assigns a model-based lab without begging a program officer for credits. The students submit jobs after school. The jobs run overnight. Everyone gets a result before class the next day. No one takes a late penalty for lacking access.[^41]
A city runs a weekly climate-risk simulation with updated rainfall and temperature data. The code is open. The outputs are posted. Residents can see how each proposed infrastructure investment changes flood depth in their neighborhood. The council votes with numbers, not vibes.[^42]
A clinic fine-tunes a triage model on patient records that never leave the building, using a secure enclave. The model works. The attending physician publishes the model card—architecture, training data characteristics, performance metrics, limitations. Eight other clinics in the county download the weights and adapt them. Sepsis detection improves by 20% across the network. No consulting contract. No proprietary lock-in. Just shared work on shared infrastructure.[^43]
A co-op builds an open agent for local businesses—inventory management, customer communication, basic bookkeeping. They train it on public compute, publish the weights and training logs, and hand the whole thing to the next town. The second town adapts it. The third town improves it. Within a year, a dozen towns have functional agents, and none of them paid for expensive bespoke development.[^44]
A student runs their first GPU job the day they hear the word “tensor,” not six months later after figuring out how to beg for credits. They make mistakes. The mistakes are cheap. They learn fast. Some of them will build things that matter. None of them will forget that compute was available when they needed it.[^45]
None of this is splashy. All of it compounds. Small access, repeated, creates capability. Capability creates agency. Agency creates the future that wasn’t preordained by whoever had the most GPUs in 2025.
Closing: The Library Stays Open
Your grandmother’s public library didn’t ask if she could afford books. It didn’t ask if she’d use them wisely. It asked if she could read, and when she said no, it taught her.[^46]
We built libraries when books were power. We built schools when knowledge was scarce. We funded connectivity when connection became a precondition for speech and work. Compute has crossed the same threshold. Calling it a right is not grandiose; it is descriptive. It says out loud what is already true: without compute, you are not invited to the conversation.[^47]
The girl with thirteen minutes and a GPU she doesn’t have shouldn’t exist in a world that claims to care about equity. The attending physician with a life-saving model she can’t share shouldn’t exist in a world that claims to care about health. The city planner with the better simulation he can’t run shouldn’t exist in a world that claims to care about climate. These are solvable problems. The barriers are not technical. They are not even primarily financial. They are political failures dressed as resource constraints.
Set a floor. Share what we must. Measure what we value. Run the queue like it is the constitution. Because it is. The queue is where we decide who gets to participate in the next decade of technical work, and that decision will shape everything that follows—who learns, who builds, who adapts, who gets left behind.
The best part is predictable: if you make compute genuinely available, people will use it for things none of us can predict. They’ll train models we didn’t imagine on data we didn’t curate for purposes we didn’t anticipate. Some of it will be brilliant. Some of it will be strange. Some of it will fail in illuminating ways. All of it will be theirs, because the infrastructure was there when they needed it, and the queue let them through.
The library stays open. The lights stay on. The queue runs fair. This is not a moonshot. It’s a decision to treat compute like the civic infrastructure it already is, and to stop pretending that access is something you earn rather than something you need to begin.
Footnotes
[^1]: Queue position is not a technical detail; it is a political fact. Who waits, how long, and why—these are policy choices encoded in scheduling algorithms. When we treat them as neutral defaults, we naturalize inequality.

[^2]: Clinical AI shows the governance gap starkly: the tools that could save lives are trapped by access bottlenecks (no compute for training), ownership confusion (who owns weights trained on borrowed credits?), and liability fear (what if sharing causes harm?). Public compute can’t solve all of this, but it can solve the first part.

[^3]: Civic institutions are often compute-poorest precisely when decisions are most consequential. A city voting on flood infrastructure without running the best available models is not choosing simplicity; it’s choosing ignorance under constraint.

[^4]: The shift from “luxury” to “ordinary necessity” happens quietly and then all at once. In 2010, cloud storage felt like magic. In 2025, not having it feels like deprivation. Compute is crossing that threshold now.

[^5]: Concentration is not inherently evil, but it is inherently fragile and unaccountable. When a handful of entities control most of the world’s accessible compute, small decisions by those entities have planetary consequences—and the entities answer primarily to shareholders, not publics.

[^6]: The precedents are not perfect—libraries struggle with funding, broadband subsidies are complex and sometimes captured, supercomputers serve narrow communities—but they prove the basic model works: shared infrastructure, neutral access, public funding, durable governance.

[^7]: The baseline will feel too small to some and too large to others. That’s fine. The point is to set a floor that’s legible and defensible, then adjust as norms shift. The worst outcome is no floor at all. Normalizing GPU-hours to a standard performance baseline (e.g., TFLOP·h at FP16) prevents the floor from becoming meaningless as hardware generations diverge—an H200 hour delivers far more compute than an L4 hour.

[^8]: “Standard software stacks that actually work” is doing a lot of lifting. Most public compute fails because setup is impossible. If accessing the right is harder than begging for commercial credits, the right is fake.

[^9]: Blunt limits are better than nuanced limits, because nuanced limits become discretionary enforcement, which becomes bias. Better to say “no malware, no CSAM, no commercial resale” and enforce it strictly than to enumerate edge cases that turn into loopholes.

[^10]: The resource footprint of computing is no longer deniable or deferrable. IEA estimates put data center electricity consumption at 415-460 TWh in 2022-2024 (~1.5-2% of global electricity), projected to exceed 1,000 TWh by 2026 as AI and crypto demand surge. Water impacts are regionally concentrated: Arizona, Ireland, Singapore, and Chile have all seen data center moratoria, policy constraints, or community conflicts over water use. Treating sustainability as optional is how public compute loses legitimacy before it starts.

[^11]: PUE, WUE, ERE, and CUE are established standards from The Green Grid, a consortium of industry and academic partners. They’re not perfect—PUE can be gamed by moving boundaries—but they’re measurable, auditable, and widely understood. Perfect is the enemy of good.

[^12]: “Fabric, not fortress” is the design principle. A fortress is defensible but isolated. A fabric is vulnerable but extensible. Public infrastructure should be connective tissue, not a walled garden.

[^13]: The Civic Cloud is the boring, essential core—stable, high-volume, always there. It won’t be the most powerful or the most efficient, but it will be predictable and neutral, which matters more for a public option.

[^14]: Federation is hard—competing standards, legacy systems, institutional inertia—but it’s also how the internet itself works. Email federates. The web federates. DNS federates. We know how to do this.

[^15]: Verifiable Credentials (a W3C standard) let identity and authorization follow users across systems without centralizing control. Think of them as digital certificates that prove who you are and what you’re allowed to do, without requiring every system to trust a single authority.

[^16]: Edge grants solve the “public option as bottleneck” problem. If the Civic Cloud is full, you can use a voucher on AWS—but AWS has to honor the same scheduling and reporting rules. This keeps commercial clouds from fragmenting access while keeping public infrastructure from becoming a monopoly.

[^17]: ACCESS, EuroHPC, and EOSC are proof that the coordination problem is solvable. What they lack is not technical sophistication but the political mandate and funding to go beyond research and teaching.

[^18]: This is not hyperbole. The queue determines who can participate in technical work, which determines who shapes the next generation of tools, which determines whose needs get encoded in infrastructure. Queue design is constitutional design for compute.

[^19]: Preemption without credits is tyranny. Preemption with credits is emergency power with compensation. The difference matters.

[^21]: Carbon-aware scheduling already exists at hyperscale: Google and Microsoft shift flexible workloads across time and regions to match cleaner grid hours. The technology is proven; the question is whether public compute inherits it or treats sustainability as optional. Jobs labeled by flexibility (immediate/24h/1-week/location-flexible) let schedulers optimize for carbon intensity and water stress without breaking latency promises.

[^22]: If the queue can be overridden by wealth, influence, or back-channel negotiation, it’s not a constitutional mechanism—it’s theater. The binding quality is what makes it real.

[^23]: User-visible accounting makes the queue legible in both directions. The operator can prove fairness. The user can prove unfairness. Without this, disputes become “your word against the system,” and the system always wins.

[^24]: “The same few dials, daily, forever” is the discipline that keeps public systems honest. As soon as you stop publishing, you stop being accountable.

[^25]: Access metrics answer the basic question: can people actually use this, or is it just technically available? Median wait time, abandonment rate, and success rate tell the story of whether the infrastructure serves its purpose.

[^26]: Fairness metrics expose when the system’s stated allocation principles diverge from its actual behavior. They make broken promises visible before they become structural.

[^27]: Sustainability metrics prevent “green” from becoming marketing. If PUE, WUE, and carbon intensity don’t improve over time, you’re growing faster than you’re cleaning.

[^28]: Reliability metrics make trustworthiness measurable. Weekly uptime and error budgets force explicit trade-offs between moving fast and staying stable.

[^29]: Daily, machine-readable, stable URL. If you bury the metrics in a quarterly PDF, you’re not being transparent; you’re performing transparency.

[^30]: Siting decisions encode values. Growing capacity toward clean generation rather than forcing generation to chase capacity is a choice about what kind of infrastructure this is.

[^31]: Self-reported efficiency metrics aren’t audits. They’re marketing. Third-party verification with public instrumentation protocols would make the numbers meaningful.

[^32]: Heat reuse is plumbing, not science fiction. The barrier is that data centers are usually sited for power and fiber, not for proximity to heat demand. A public option could choose differently if proximity to communities were part of the objective.

[^33]: Carbon-aware scheduling shifts flexible work to cleaner grids and times. The technology exists; the question is whether public compute inherits these capabilities or treats them as optional.

[^34]: Water accounting is less mature than energy accounting, but the tools exist. WUE should be published with basin context—the same WUE means something very different in Finland versus Arizona.

[^35]: Pilot projects succeed, then vanish when grants end. Durable funding requires mechanisms that survive political transitions—narrow levies on stable bases, independently governed, with sunset reviews.

[^36]: E-Rate has governance problems—compliance burden is high, adaptation is slow, capture attempts are constant—but the core mechanism has survived multiple political transitions. As of 2025, the underlying Universal Service Fund faces Supreme Court review over its constitutional structure, which could reshape how universal-service programs are funded. Study both its successes and its vulnerabilities; durable funding requires legal foundations that can withstand challenge.

[^37]: Minimum guarantees in law would mean you could sue if they’re violated. That’s not a bug; it’s the point. Rights without enforcement are wishes.

[^38]: Compute neutrality is the direct analog of net neutrality: no discrimination based on who you are or who you know. Without it, public compute becomes a patronage system in machine-readable form.

[^39]: The Ombuds is the human-readable interface to algorithmic governance. When the queue doesn’t make sense, someone with authority can look inside and explain what happened.

[^40]: Every system fails. The question is whether you notice early and publish the failure, or notice late and hide it.

[^41]: Education is the highest-leverage use case. A student who learns on public compute doesn’t forget that access was available. Some of them will build the next decade’s infrastructure.

[^42]: Civic compute lets municipalities govern with numbers instead of vibes. The difference is legitimacy: residents can see the models and challenge the assumptions.

[^43]: Clinical AI on public compute avoids the ownership and lock-in traps that plague current healthcare AI. If the model is open and the infrastructure is neutral, the benefits compound across institutions.

[^44]: Cooperatives and small organizations are often capability-poor, not idea-poor. Give them compute and they’ll build tools that commercial vendors would never find profitable.

[^45]: First contact matters. A student whose first GPU job runs in an hour will experiment. A student who waits six months will give up. Small frictions compound into structural exclusion.

[^46]: The library is the moral center of the argument. We built them because access to knowledge mattered more than ability to pay. Compute is knowledge now.

[^47]: Calling compute a right is descriptive, not aspirational. It describes what’s already true: without it, you’re excluded from participation. Rights language just makes the exclusion legible and actionable.