Strike a match. Watch it burn. The flame is consuming fuel, releasing energy, transforming matter. It’s warm, dynamic, self-sustaining, as long as the fuel lasts.
Now consider yourself. You’re also consuming fuel, releasing energy, transforming matter. You’re warm, dynamic, self-sustaining. The chemistry is remarkably similar: carbon compounds combining with oxygen, releasing energy, producing carbon dioxide and water.
That’s cellular respiration: glucose plus oxygen yields carbon dioxide, water, and energy (in summary form, C₆H₁₂O₆ + 6 O₂ → 6 CO₂ + 6 H₂O + energy).1 It’s the same reaction class as burning sugar in a flame. The difference is speed and control.
You are, in a very real sense, a fire. Your flame has been burning continuously since you were conceived. It will burn for decades if you’re lucky. And unlike a candle, it can copy itself.
Life isn’t made of special stuff. It’s made of maintained differences. How does ordinary chemistry pull this off?
The problem of staying lit
A candle flame is an open system. Energy and matter flow through it. It maintains a recognizable structure as long as inputs continue. But it has no memory, no way to repair itself, no capacity to rebuild if disrupted. Snuff it, and it’s gone.2
Life solved this problem. Living things are also open systems; energy and matter flow through them continuously. But they maintain complex internal structure. They repair damage. They copy themselves with high fidelity. They persist in ways flames cannot.
The fundamental question of biology isn’t “what are living things made of?” We know the answer: mostly carbon, hydrogen, oxygen, nitrogen, plus traces of sulfur, phosphorus, and metal ions.3 The same elements found in rocks and air and seawater. There’s no special life-stuff, no élan vital, no ingredient unique to the living.
How do ordinary chemicals get organized into something that maintains and copies itself?
The answer has two parts. First, energy. Second, information. This essay is about the first part, the thermodynamics of staying alive. The second part, information, is what makes the fire capable of copying itself, and it connects to how cells decide what to become.4
The thermodynamic puzzle
Here’s where it gets strange. The second law of thermodynamics says that entropy (roughly, disorder) increases in isolated systems.5 Things fall apart. Organization dissolves. Hot objects cool, cold objects warm, differences smooth out. The universe trends toward equilibrium, toward the gray uniformity of maximum entropy.
Life seems to violate this. Organisms are highly ordered. They build complexity from simplicity. They maintain improbable molecular arrangements against the tendency toward dissolution. A bacterium is statistically miraculous: for a genome of roughly a million bases, the space of possible sequences is 4^(10^6), so any particular sequence has a probability of roughly 10^(-600,000) under a naive uniform model.6 A number that small has no physical meaning, and that is the point: life didn’t arise by random assembly.
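The exponent in that estimate is easy to check. A minimal sketch (the genome length of one million bases is the essay's round figure, not a measured value):

```python
import math

GENOME_LENGTH = 1_000_000  # bases; the essay's round figure
ALPHABET = 4               # A, C, G, T

# log10 of the number of possible sequences, 4^(10^6)
log10_sequences = GENOME_LENGTH * math.log10(ALPHABET)
print(f"possible sequences: 10^{log10_sequences:.0f}")  # 10^602060

# A uniformly random draw hits one particular sequence with
# probability ~10^-602060 -- the "10^(-600,000)" quoted above.
```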
But life doesn’t violate thermodynamics. It exploits thermodynamics.
The key insight, developed by Schrödinger and formalized by Prigogine, is that the second law applies to isolated systems.7 Earth isn’t isolated. Energy pours in from the sun as a relatively small number of high-energy visible photons. Earth radiates the same amount of energy back to space as a far larger number of low-energy infrared photons, diffuse and spread out. The incoming energy has low entropy; the outgoing energy has high entropy. This creates a gradient, and gradients can do work.8
Life positions itself on this gradient like a waterwheel in a stream. The flow of energy from high to low concentration drives processes that would otherwise be thermodynamically forbidden. Local order gets purchased with global disorder. You’re not violating physics. You’re channeling it.
Free energy: the real currency
Energy alone isn’t enough. What matters is free energy, the energy available to do work.9
The Gibbs free energy change of a reaction, ΔG, tells you whether that reaction can happen spontaneously. If ΔG is negative, the reaction releases free energy and can proceed on its own. If ΔG is positive, the reaction requires free energy input; it won’t happen unless something else pays for it.
The Gibbs equation, ΔG = ΔH − TΔS, says free energy depends on enthalpy (roughly, heat content), temperature, and entropy. A reaction can be thermodynamically favorable by releasing heat, by increasing entropy, or both. Life exploits all these possibilities.10
The trick life discovered is coupling: take an unfavorable reaction (positive ΔG) and link it to a favorable one (negative ΔG). If the combined ΔG is negative, the whole process proceeds, and the favorable reaction drags the unfavorable one along.11
This is how life builds order. Building a protein from amino acids is thermodynamically unfavorable; it decreases entropy. But cells couple protein synthesis to ATP hydrolysis, which is highly favorable. The net reaction proceeds because the ATP hydrolysis pays for the protein assembly. Order emerges because disorder increases somewhere else.
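The coupling logic is just signed arithmetic: a combined reaction proceeds if the summed free-energy change is negative. A sketch with illustrative numbers (a rough +21 kJ/mol for forming one peptide bond, and ATP hydrolysis at its approximate in-cell value):

```python
# Coupling in one line of arithmetic: a reaction proceeds if the
# summed free-energy change is negative. Illustrative numbers only.
peptide_bond = +21.0    # kJ/mol, roughly, to form one peptide bond
atp_hydrolysis = -50.0  # kJ/mol, ATP -> ADP + Pi under cellular conditions

net = peptide_bond + atp_hydrolysis
print(f"net ΔG = {net} kJ/mol -> {'proceeds' if net < 0 else 'blocked'}")
# prints: net ΔG = -29.0 kJ/mol -> proceeds
```

The unfavorable step alone would be blocked; linked to hydrolysis, the combined reaction runs downhill.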
ATP: the coupling molecule
All known life uses adenosine triphosphate (ATP) as its primary coupling agent.12 ATP is life’s universal energy currency. When a cell needs to drive an unfavorable reaction, it couples that reaction to ATP hydrolysis: ATP + H₂O → ADP + Pi.
In biochemical “standard” conditions, this reaction releases about 30 kJ/mol. But in living cells it’s often closer to 50-60 kJ/mol, because cells hold ATP concentrations high and ADP plus phosphate low.13 This is one of the ways life maintains itself far from equilibrium, and reaps the benefits of doing so.
How far from equilibrium? Cells hold the phosphorylation ratio, [ATP]/([ADP][Pi]), roughly 10^9 times away from the value thermodynamic equilibrium would predict.14 That represents an enormous amount of stored work, constantly maintained, constantly renewed.
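The gap between the standard and in-cell figures follows from one relation, ΔG = ΔG°′ + RT·ln Q, where Q is the reaction quotient [ADP][Pi]/[ATP]. A sketch with typical textbook-style concentrations (the specific millimolar values are assumptions; real cells vary):

```python
import math

R = 8.314e-3   # gas constant, kJ/(mol*K)
T = 310.0      # K, body temperature
dG0 = -30.5    # kJ/mol, standard free energy of ATP hydrolysis

# Illustrative cytosolic concentrations in mol/L (assumed values).
atp, adp, pi = 5e-3, 0.5e-3, 5e-3

Q = (adp * pi) / atp             # reaction quotient
dG = dG0 + R * T * math.log(Q)   # actual free-energy change in the cell
print(f"ΔG ≈ {dG:.1f} kJ/mol")   # ≈ -50 kJ/mol, as in the text
```

Because the cell keeps ATP high and ADP and phosphate low, Q is far below 1, the logarithm is strongly negative, and hydrolysis releases substantially more free energy than the standard value suggests.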
A human body contains only tens of grams of ATP at any moment.15 Yet over a day it synthesizes and hydrolyzes roughly its body weight in ATP, tens of kilograms depending on activity level.16 Each ATP molecule gets recycled hundreds of times daily. The pool turns over every few minutes.
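The turnover claim is worth checking with the essay's own order-of-magnitude figures (a ~100 g standing pool and ~50 kg of daily synthesis are estimates, not measurements):

```python
# Turnover arithmetic behind "recycled hundreds of times daily".
pool_g = 100.0               # ATP present at any instant (~50-100 g)
daily_turnover_g = 50_000.0  # ATP made and spent per day (~body weight)

cycles_per_day = daily_turnover_g / pool_g    # 500 recycles per day
minutes_per_cycle = 24 * 60 / cycles_per_day  # pool turnover time
print(f"{cycles_per_day:.0f} cycles/day, pool turns over every "
      f"{minutes_per_cycle:.1f} min")
# prints: 500 cycles/day, pool turns over every 2.9 min
```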
ATP isn’t storing energy for later. It’s a coupling agent that transfers energy from reactions that release it to reactions that require it. Less like a battery, more like a drive shaft: a mechanical linkage between energy source and energy sink.
Chemiosmosis: the proton turbines
The question becomes: how do cells regenerate all that ATP?
The answer, when scientists finally worked it out, was so strange that many refused to believe it. Peter Mitchell proposed the chemiosmotic hypothesis in 1961 and spent years defending it against skepticism before winning the Nobel Prize in 1978.17
Here’s how it works. Cells extract electrons from food molecules (glucose, fatty acids, amino acids) and pass them along a chain of protein complexes embedded in a membrane. As electrons move down the chain, each transfer releases a little energy.18
The complexes use that energy to pump hydrogen ions (protons) across the membrane, building up a concentration gradient. More protons on one side than the other. This gradient stores energy, like water behind a dam: a “protonmotive force” that wants to equilibrate.19
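The protonmotive force can be put in numbers. It has an electrical component (ΔΨ) and a pH component, which act in the same direction in mitochondria; the pH term converts to volts via 2.303·RT/F, about 61 mV per pH unit at body temperature. A sketch with illustrative values (the ΔΨ of 150 mV and pH difference of 0.8 are assumptions chosen to land near the ~200 mV cited in the footnotes):

```python
# Protonmotive force: electrical term plus pH term (magnitudes added,
# since both drive protons inward across the inner membrane).
R, T, F = 8.314, 310.0, 96485.0   # J/(mol*K), K, C/mol

delta_psi_mV = 150.0   # membrane potential, the larger contribution
delta_pH = 0.8         # matrix more alkaline than intermembrane space

pH_term_mV = 2.303 * R * T / F * 1000 * delta_pH  # ~61 mV per pH unit
pmf_mV = delta_psi_mV + pH_term_mV
print(f"pmf ≈ {pmf_mV:.0f} mV")   # ~200 mV, as in footnote 19
```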
Then the protons flow back through a remarkable protein called ATP synthase. This enzyme is a literal rotary motor: the proton flow spins a central rotor, and the mechanical rotation drives conformational changes that force ADP and phosphate together into ATP.20
The structure has been solved in atomic detail. You can watch simulations of it spinning. It can rotate at tens to hundreds of revolutions per second, producing three ATP molecules per full rotation.21
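Those two figures multiply into a throughput estimate for a single enzyme (100 rev/s is one point inside the "tens to hundreds" range, not a fixed constant):

```python
# Rough throughput of a single ATP synthase, from the figures above.
rev_per_s = 100    # tens to hundreds of revolutions per second
atp_per_rev = 3    # three catalytic sites fire per full rotation

atp_per_s = rev_per_s * atp_per_rev
print(f"~{atp_per_s} ATP per second per enzyme")  # ~300 at 100 rev/s
```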
You are powered by billions of molecular turbines, spinning in the inner membranes of your mitochondria, driven by proton gradients.
Those mitochondria descend from bacteria that took up residence in your cells’ ancestors roughly two billion years ago.22 Those ancient bacteria became permanent residents, keeping their own DNA and their own ribosomes, running their own electron transport chains. Your cells are chimeras, and your power plants are endosymbionts.
Not all life burns oxygen. Many microbes run on other electron acceptors, from sulfate to iron to carbon dioxide. But the underlying logic is the same: harvest a gradient, store work, stay far from equilibrium.23
Far from equilibrium
A dead body has the same chemicals as a living one. The same atoms, the same molecules, more or less the same arrangement, at least initially. What it lacks is the energy flow that maintains that arrangement against decay.24
This is the key insight of non-equilibrium thermodynamics: certain structures only exist because energy flows through them. Ilya Prigogine called them dissipative structures, patterns that maintain themselves by dissipating energy.25 A candle flame is a simple dissipative structure. A hurricane is a more complex one. Life is the most complex we know.
Equilibrium is death. At equilibrium, nothing happens. All gradients have smoothed out, all reactions have run to completion, all information has been erased by thermal noise. A living cell maintains itself far from equilibrium, in a state that would be wildly improbable if the system were isolated but becomes stable, even inevitable, when energy flows through.26
The metabolism running in your cells right now maintains millions of molecular gradients: ion concentrations, pH differences, redox potentials, phosphorylation states. Each gradient represents stored work, energy poised to do something. When you die, these gradients dissipate. The ATP runs out. The turbines stop spinning. The fire goes out.
The origin problem
If life requires metabolism, and metabolism requires the molecular machinery that only living cells possess, how did life begin?
This is the origin-of-life problem, and it remains unsolved. The challenge is that modern metabolism is a chicken-and-egg tangle: proteins catalyze the reactions that build proteins; DNA encodes the instructions for enzymes that copy DNA; ATP powers the synthesis of ATP synthase.27
Something simpler must have come first. The leading hypothesis is that RNA played both roles initially, storing information (like DNA) and catalyzing reactions (like proteins). This “RNA world” would have preceded the DNA-protein world we know.28
But even RNA needs to be synthesized, and that requires energy and catalysis. Recent work has focused on geochemical settings that might provide both: alkaline hydrothermal vents on the ocean floor, where proton gradients across mineral membranes could have driven early chemistry; volcanic hot springs where wet-dry cycles concentrate reactants.29 The details remain speculative, but the framework is thermodynamic. Life began where energy gradients met chemistry capable of channeling them.
What we know for certain is that it happened at least once, more than 3.5 billion years ago (and possibly earlier), and never completely stopped.30 All life on Earth shares a nearly universal genetic code, the same molecular homochirality, and the same core metabolic pathways. We descend from a single origin, one fire that caught and kept burning.
The fire that copies itself
Metabolism explains how life stays lit. But life does something stranger still: it copies itself.
A flame can spread (if it touches new fuel, a new flame ignites) but it can’t replicate its particular pattern. It has no information to pass on, no template for its own structure.
Life has information: DNA. The sequences encode instructions for building the proteins that run metabolism. When a cell divides, it copies its DNA, passing the instructions to both daughter cells. This is what metabolism is for: the energy flowing through the system is harnessed to copy information.
The fire burns in order to replicate the instructions for its own burning.31
This closes a strange loop. Metabolism requires proteins. Proteins are built according to DNA instructions. DNA is copied by proteins powered by metabolism. Each part requires the others. The whole system is a self-referential knot, an autocatalytic set that somehow got tied in the first place and has been copying itself ever since.32
The result is what we call life: a dissipative structure that maintains itself far from equilibrium, channels energy gradients into molecular order, stores the instructions for its own construction, and copies those instructions with enough fidelity to persist across billions of years.
Fire and freedom
Fire transforms. It takes ordered fuel and disperses it into heat and gas. It’s entropy actualized, disorder spreading.
Life transforms in the other direction. It takes disordered matter and energy and builds order: structure, complexity, information. Entropy decreases locally because it increases globally. The sun burns so that you can think.
This feels like freedom, or something approaching it. Life doesn’t merely obey physics; it exploits physics to do things physics wouldn’t otherwise do. Molecules that catalyze their own production. Structures that repair themselves. Patterns that persist across geological time and spread to fill every available niche.
The cells in your body run the same core metabolism that bacteria ran three billion years ago. The fire has been continuous. It flickered through mass extinctions, crossed from water to land, adapted to ice ages and hothouse periods, but never went out. Every living thing you see is that same fire, still burning, shaped into ten million forms by natural selection but chemically recognizable as kin.33
You are made of atoms forged in dying stars, organized according to instructions nearly as old as the Earth, burning fuel to maintain the improbable order that constitutes your body and mind. The flame shapes itself, builds walls to contain itself, copies the instructions for its own burning.
We are the fire that learned to be careful, and then learned to wonder what fire is.
Footnotes
1. Cellular respiration is more complex than this summary equation suggests. Glucose is broken down in stages (glycolysis, the citric acid cycle, oxidative phosphorylation), each with multiple steps. The summary equation captures the net transformation but obscures the mechanism.
2. In thermodynamics, an open system exchanges both energy and matter with its surroundings, a closed system exchanges only energy, and an isolated system exchanges neither. Both flames and organisms are open systems, but organisms maintain far more internal structure.
3. By mass, a human body is roughly 65% oxygen, 18% carbon, 10% hydrogen, 3% nitrogen, plus smaller amounts of calcium, phosphorus, potassium, sulfur, sodium, chlorine, and trace elements. By atom count, hydrogen dominates because it’s so light.
4. This connects to “How Cells Decide,” which explores how cellular decision-making emerges from distributed molecular processes. Metabolism provides the energy; without ATP, there’s no signal transduction, no gene expression, no cellular choice.
5. Entropy is a slippery concept. In thermodynamics, it measures energy dispersal. In statistical mechanics, it measures the number of microstates consistent with a macrostate. In information theory, it measures uncertainty. These definitions are related but not identical. I’m using the thermodynamic sense here.
6. The probability calculation illustrates why random assembly isn’t the right model for life’s origin. For a genome of ~10^6 bases, the space is 4^(10^6) possible sequences, so a particular sequence has probability ~10^(-600,000) under uniform random sampling. Life arose by selection operating on variation, not by chance assembly.
7. Schrödinger’s What Is Life? (1944) posed the question of how organisms maintain order against entropy. His answer, “negative entropy” or “negentropy,” anticipated later developments in non-equilibrium thermodynamics. The book influenced a generation of physicists who became molecular biologists.
8. The sun’s surface is about 5,800 K; space is about 2.7 K (the cosmic microwave background). Earth sits between them, receiving high-quality (low-entropy) energy and radiating low-quality (high-entropy) energy. This gradient drives essentially all life on Earth, with minor contributions from geothermal and radioactive sources.
9. “Free energy” doesn’t mean energy without cost. It means energy free to do work: the portion of total energy that can drive processes, as opposed to the portion locked up in thermal equilibrium. Gibbs free energy is most relevant for biological systems at constant temperature and pressure.
10. The equation ΔG = ΔH - TΔS is a cornerstone of biochemistry. At body temperature (310 K), the -TΔS term means entropy changes matter significantly. Many biological processes are “entropy-driven,” thermodynamically favorable because they increase disorder even if they absorb heat.
11. Coupling is achieved mechanistically through shared intermediates. When ATP hydrolysis is coupled to protein synthesis, the reactions share a common intermediate (an aminoacyl-tRNA complex). The favorable reaction doesn’t just “pay for” the unfavorable one; they’re physically linked through molecular machinery.
12. ATP’s universality is remarkable. Every known organism uses it, which strongly suggests it was present in the last universal common ancestor (LUCA). Why ATP rather than some other nucleotide triphosphate? Possibly historical accident, possibly chemical advantages we don’t fully understand.
13. The standard free energy of ATP hydrolysis is about -30.5 kJ/mol, but under cellular conditions the actual ΔG ranges from -50 to -65 kJ/mol. This depends on the concentrations of ATP, ADP, and phosphate, which cells regulate tightly.
14. The phosphorylation ratio [ATP]/([ADP][Pi]) is a key measure of how far cells are from equilibrium. Cells maintain this ratio ~10^9 away from the equilibrium value, representing enormous stored work and explaining why ATP hydrolysis releases so much more energy in vivo than standard conditions suggest.
15. Estimates vary, but a typical figure is 50-100 grams of ATP in the body at any moment. The key point is that this is a tiny pool relative to daily turnover; ATP is a high-flux intermediate, not a storage molecule.
16. The “body weight per day” figure is widely cited in biochemistry textbooks and should be treated as an order-of-magnitude estimate. It depends on metabolic rate, which varies with activity level, body size, and other factors.
17. Peter Mitchell’s chemiosmotic hypothesis was initially rejected by most biochemists, who expected ATP synthesis to work through direct chemical intermediates. Mitchell’s experimental evidence accumulated through the 1960s and 1970s. The Nobel Prize in 1978 marked the field’s acceptance.
18. The electron transport chain consists of four major complexes (I-IV) in mitochondria, plus mobile carriers (ubiquinone, cytochrome c). Electrons flow from NADH or FADH₂ to oxygen, which is reduced to water. The oxygen you breathe is the final electron acceptor; without it, the chain stops and ATP synthesis halts.
19. The protonmotive force has two components: a concentration gradient (ΔpH) and an electrical potential (ΔΨ). In mitochondria, ΔΨ contributes more. The total protonmotive force is about 200 mV, substantial at molecular scales.
20. ATP synthase (F1F0-ATPase) is one of the most studied enzymes. The F0 portion is embedded in the membrane and conducts protons. The F1 portion, protruding into the matrix, contains the catalytic sites. The mechanism was elucidated by Paul Boyer and John Walker, who shared the 1997 Nobel Prize.
21. The rotation of ATP synthase has been directly observed using attached fluorescent actin filaments. It rotates counterclockwise (viewed from the membrane) during ATP synthesis, clockwise during ATP hydrolysis. The rotation is stepped, with discrete ~120-degree substeps corresponding to catalytic events.
22. The endosymbiotic origin of mitochondria is well established. Mitochondrial DNA, ribosomes, and membrane structure all resemble those of alpha-proteobacteria. The endosymbiosis likely occurred 1.5-2 billion years ago. A similar event gave rise to chloroplasts from cyanobacteria.
23. Anaerobic metabolism predates aerobic metabolism evolutionarily. Many archaea and bacteria use electron acceptors other than oxygen: sulfate, nitrate, ferric iron, carbon dioxide, and others. Some metabolisms don’t use electron transport at all, relying solely on substrate-level phosphorylation. The diversity is remarkable; the underlying thermodynamic logic is constant.
24. Death is not a single event but a cascade of failures. When oxygen supply stops, the electron transport chain halts, ATP synthesis ceases, ion gradients dissipate, membranes lose integrity, and enzymatic processes become unregulated.
25. Ilya Prigogine received the 1977 Nobel Prize in Chemistry for his work on dissipative structures and non-equilibrium thermodynamics. His key insight was that far-from-equilibrium systems can spontaneously generate order when energy flows through them. The order is maintained by the dissipation, not despite it.
26. Living cells are extraordinarily far from equilibrium across multiple dimensions: ion gradients, redox potentials, metabolite concentrations, phosphorylation states. Each represents stored work. Any way you measure it, the answer is “very far.”
27. The chicken-and-egg problem applies at multiple levels. DNA needs proteins to replicate, proteins need DNA to encode them, and both need ribosomes, which are made of RNA and protein encoded by DNA and synthesized by ribosomes.
28. The RNA world hypothesis proposes that RNA-based life preceded DNA/protein life. RNA can both store genetic information and catalyze reactions (ribozymes). The ribosome’s catalytic core is RNA, suggesting a vestige of this earlier world. However, the spontaneous origin of RNA itself remains problematic.
29. Alkaline hydrothermal vents (like Lost City in the Atlantic) provide sustained proton gradients across mineral membranes, potentially driving early metabolism. Volcanic hot springs provide wet-dry cycles that concentrate and polymerize organic molecules. Both settings are actively studied as candidate locations for life’s origin.
30. LUCA, the Last Universal Common Ancestor, was not the first life but the ancestor of all surviving lineages. Phylogenetic reconstruction suggests LUCA was already complex, with a full genetic code and sophisticated metabolism, implying significant evolution before it. Evidence for life extends back at least 3.5 billion years, with some contested claims reaching earlier.
31. “What metabolism is for” is intentional language that some biologists dislike. Teleological framing can mislead. But in the context of evolution, it’s defensible shorthand: metabolism exists because organisms that metabolize outcompete those that don’t. The “purpose” is backward-looking (selected for) not forward-looking (designed for).
32. An autocatalytic set is a collection of molecules that collectively catalyze their own production from available precursors. The concept, developed by Stuart Kauffman and others, provides a theoretical framework for understanding how self-sustaining chemistry might emerge without any single “replicator.”
33. Metabolic continuity is literal at the level of lineage and machinery: every cell comes from a cell whose energy-conservation apparatus never “restarted from scratch.” The molecules themselves are constantly rebuilt (your ATP is not descended from ancestral ATP as persistent entities) but the fire is continuous as a process. The pattern persists; the matter flows through.