How Cells Decide

November 8, 2025 · 19 min read

You cut your hand slicing bread this morning. Nothing serious. You rinsed it, pressed a paper towel to the wound, and forgot about it. But under the skin, a conversation started.

Platelets arrived first, plugging the breach and triggering clot formation. They released signaling molecules that summoned white blood cells. The white blood cells released their own signals, calling fibroblasts to begin rebuilding the extracellular matrix. Epithelial cells at the wound’s edge began migrating, then dividing, crawling across the provisional scaffold, knitting the gap closed.1 As inflammation resolved and the tissue re-epithelialized, pro-growth cues declined and contact-dependent brakes kicked in.2

No one coordinated this. No central office dispatched the platelets or told the fibroblasts when to stop weaving. Each cell responded to its local environment: the molecules it sensed, the neighbors it touched, the mechanical tension in the tissue around it. The repair emerged from a thousand local decisions, none of which contained the plan.3

A week from now, the cut will likely be gone.4 You will not have thought about it once. But your cells thought about it constantly. They computed, communicated, decided, and stopped when the job was done.

If this counts as intelligence at all, it is intelligence made of relationships, not of a central controller. The question is whether that framing holds up, and what it means if it does.

The cognitive loop

Consider a bacterium swimming through murky water.5

It needs to find food and avoid toxins. This seems like a simple problem until you list what it requires. The bacterium must sense the chemical concentration around it. It must remember what the concentration was a moment ago. It must compare past and present. It must decide whether conditions are improving or deteriorating. And it must act on that decision, either continuing in its current direction or tumbling randomly to try a new one.

Sense, remember, compare, decide, act. That sequence is a cognitive loop. It is happening in an organism with no neurons, no synapses, no brain. At roughly two microns long, the bacterium is only a few wavelengths of visible light across.6

The mechanism is chemistry. Receptors on the bacterium’s surface bind to molecules in the water. That binding triggers a cascade of protein modifications inside the cell. The cascade affects proteins that control the flagellar motor, the little propeller that moves the bacterium through liquid. Crucially, these modifications happen on a timescale that creates memory: the cell’s internal state reflects not just current conditions but recent history. In control theory terms, it behaves like an integral feedback system, resetting its baseline so the cell stays sensitive to changes across a wide dynamic range.7

What’s happening here is computation. Not metaphorically. The bacterium is processing information, storing state, and executing biased switching: fewer tumbles when things improve, more when they worsen. The implementation is protein interactions rather than silicon transistors. But the logic is recognizable: state, comparison, conditional action.8
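The loop can be sketched as a toy one-dimensional run-and-tumble simulation. Everything here is illustrative, with made-up rates and a hypothetical attractant field, not measured E. coli parameters:

```python
import random

def concentration(x):
    """Hypothetical attractant field: richer toward larger x."""
    return max(0.0, x)

def chemotaxis(steps=2000, seed=1):
    rng = random.Random(seed)
    x, direction = 0.0, 1.0
    memory = concentration(x)           # short-term chemical memory
    for _ in range(steps):
        x += direction                  # act: run in the current direction
        now = concentration(x)          # sense the local concentration
        improving = now > memory        # compare past and present
        memory += 0.1 * (now - memory)  # remember: memory decays toward the present
        # decide: tumble rarely when improving, often when deteriorating
        p_tumble = 0.02 if improving else 0.35
        if rng.random() < p_tumble:
            direction = rng.choice([-1.0, 1.0])
    return x

print(chemotaxis())  # ends far up-gradient on average
```

The only "memory" is a single decaying average, yet that is enough to bias a random walk toward food.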

The standard story says intelligence requires a brain. Neurons are the hardware of thought; more neurons, more intelligence. This story has a problem. The bacterium, by any functional measure, is making decisions. It has zero neurons. I am using cognitive language here as a functional description, not a claim about consciousness.9 But if the function fits, either we need a different definition of decision-making, or we need a different theory of what makes it possible.

Becoming

The deeper puzzle is not that cells compute. It is that they decide what to become.

You began as a single fertilized egg. That egg divided, and divided again, and eventually became the tens of trillions of cells that compose your body.10 Every one of those cells contains the same DNA, the same instruction manual. Yet they became radically different things: neurons that fire electrical signals, muscle fibers that contract, red blood cells that carry oxygen, the transparent crystallin-packed cells of your eye’s lens.11

Same instructions. Different outcomes. How does a cell know which outcome to choose?

The answer is context. Cells do not decide in isolation. They decide based on where they are, who their neighbors are, and what signals those neighbors are sending. Development is a conversation, not a blueprint.12

Picture an early embryo. A cell somewhere in that cluster is receiving signals from its neighbors: molecules that bind to receptors on its surface and activate specific genes inside. Those newly active genes change what the cell produces, including what signals it sends back to its neighbors. The neighbors respond by adjusting their own gene expression. The whole system evolves together, each cell’s fate shaped by its position in the network of conversations.13

What emerges is distributed decision-making. No single cell contains the master plan. The plan is not located anywhere. It emerges from the interactions. The intelligence is relational.14

Landscapes and attractors

Inside each cell, genes form networks. Gene A produces a protein that activates Gene B and suppresses Gene C. Gene B’s product activates Gene D. Gene D’s product suppresses Gene A. Thousands of genes interlock in loops of activation and repression.15

These networks have stable states. Certain combinations of active and inactive genes reinforce themselves: the active genes keep each other on, the inactive genes keep each other off. A cell “decides” to become a heart cell when its gene network settles into the heart-cell configuration. Once there, it tends to stay, often for years or a lifetime, because feedback stabilizes the state. But strong perturbations can still push cells across ridges into other basins.16
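The self-reinforcing logic is easiest to see in a toy toggle switch: two hypothetical genes that each repress the other. This is a caricature for illustration, not any real regulatory circuit:

```python
def step(a, b):
    """Synchronous update: each gene is ON only if its repressor is OFF."""
    return (not b), (not a)

def settle(state, iters=10):
    """Run the network forward until it stops changing."""
    for _ in range(iters):
        state = step(*state)
    return state

# Each one-gene-on configuration reinforces itself: a fixed point, an attractor.
print(settle((True, False)))   # stays (True, False): gene A locked on
print(settle((False, True)))   # stays (False, True): gene B locked on
```

Two genes already give two stable identities; real networks of thousands of genes give many more.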

The biologist Conrad Waddington visualized this as a landscape.17 Imagine a ball rolling down a hillside carved with valleys. The ball can roll into any valley, but once it settles at the bottom, it tends to stay. The valleys are the stable cell types: neuron, muscle, skin. The ridges between them are the barriers that prevent a cell from switching identities. The ball’s path down the landscape is development.

It is a metaphor with real mathematics behind it: gene-regulatory dynamics can be analyzed as flows through a high-dimensional state space, where cell types correspond to attractors. In many cases we can even construct an effective “landscape” from the dynamics, though it comes with important caveats about when such a global potential function actually exists.18
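As a one-dimensional cartoon of those dynamics, consider downhill flow on a double-well potential V(x) = (x² − 1)², with valleys at x = ±1. Which attractor is reached depends only on which side of the ridge the state starts. This illustrates the idea of the landscape; it is not a model of any real gene network:

```python
def drift(x):
    """-dV/dx for V(x) = (x**2 - 1)**2: downhill flow on the landscape."""
    return -4.0 * x * (x * x - 1.0)

def roll(x, dt=0.01, steps=5000):
    """Follow the downhill flow until the state settles in a valley."""
    for _ in range(steps):
        x += dt * drift(x)
    return round(x, 3)

print(roll(0.2))    # -> 1.0: settles in the right valley
print(roll(-0.2))   # -> -1.0: settles in the left valley
```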

Developmental biologists are mapping these basins. Some are shallow: easy to enter, easy to leave. Some are deep: once a cell falls in, it almost never escapes. Cancer, in this view, is a cell finding a new attractor that evolution never intended, a valley that should not exist.19

The wisdom of the swarm

Individual cells impress. Collectives astonish.

Many bacteria practice quorum sensing.20 They continuously release signaling molecules into their environment. When the concentration of these molecules crosses a threshold, indicating that enough bacteria are nearby, the entire population switches behavior simultaneously. They might activate virulence genes, form a biofilm, or begin producing light.21

In effect, this is collective decision-making. Each bacterium is asking: Are there enough of us? No single bacterium knows the total count. They each respond to the local concentration of the shared signal. The population-level answer emerges from individual responses to a common medium. The decision is made by no one and everyone.22
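The threshold logic can be sketched in a few lines, assuming a well-mixed volume where the steady-state signal level is total production divided by the decay rate. All rates and the threshold here are made up:

```python
def population_state(n_cells, secretion=1.0, decay=0.5, threshold=40.0):
    """Steady-state autoinducer level: total production / decay rate."""
    signal = n_cells * secretion / decay
    return "luminescent" if signal >= threshold else "dark"

for n in (5, 10, 20, 40):
    print(n, population_state(n))  # the switch flips once enough cells share the medium
```

Each cell applies the same local rule; the population-level switch falls out of the shared medium.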

The slime mold Physarum polycephalum goes further.23 Despite being a single cell, it can grow to cover several square feet, a pulsing yellow network spreading across a forest floor or a laboratory petri dish. When searching for food, it extends tendrils in all directions. When it finds food sources, it reinforces the paths between them and withdraws from dead ends. The result is a network that efficiently connects all the food.

Researchers placed oat flakes on a map of the Tokyo metropolitan area, each flake representing a major city. The slime mold, starting from “Tokyo,” grew a network connecting the flakes. The resulting network was strikingly similar in layout to the Tokyo-area rail system, and comparable to it on measures like efficiency and fault tolerance.24 It had solved, through local growth rules, an optimization problem that human engineers solved with computers and deliberation.

No central planner. No global map. Just: reinforce what works, retract what doesn’t. From this simple rule, near-optimality emerges.25
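That rule can be caricatured as two tubes competing for flow between the same pair of food sources, loosely inspired by published Physarum flow models but with invented constants:

```python
def compete(len_a=1.0, len_b=2.0, steps=200):
    """Reinforce the tube carrying above-average flow; let the other wither."""
    thickness = {"A": 1.0, "B": 1.0}       # tube conductances, initially equal
    for _ in range(steps):
        flow_a = thickness["A"] / len_a    # the shorter tube carries more flow
        flow_b = thickness["B"] / len_b
        total = flow_a + flow_b
        for tube, flow in (("A", flow_a), ("B", flow_b)):
            share = flow / total
            # grow with above-average flow share, shrink otherwise (floor at zero)
            thickness[tube] = max(0.0, thickness[tube] + 0.1 * (share - 0.5))
    return thickness

result = compete()
print(result)  # the shorter route "A" ends up dominant; "B" withers to zero
```

No tube knows the global layout; the comparison happens implicitly, through competition for flow.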

What the hardware theory misses

The standard theory of intelligence is hardware-centric. You need neurons. You need a brain. The brain is the seat of cognition; the body is its vehicle. Complexity of thought scales with complexity of neural architecture.26

This theory has explained much. It fits the broad pattern that animals with larger and more complex brains tend to exhibit more sophisticated behaviors. It helped link psychiatry to neuroscience. It gave us the metaphor of the brain as computer, which, whatever its limitations, has been enormously productive.27

But cells complicate the story.

Here we have sensing, memory, comparison, decision, and action, all implemented in molecular networks with zero neurons. We have developmental programs that reliably construct complex organisms through distributed computation. We have collectives that solve optimization problems human engineers struggle with.

None of this denies that neurons are extraordinary. Fast, energy-efficient electrical signaling is a powerful specialization. But specialization is not a prerequisite for feedback-driven decision-making. Cells were making decisions for billions of years before neurons existed.28

The key ingredients are not specific molecules or cell types. They are relationships. Sensing: components that respond to their environment. Memory: persistence of state over time. Communication: information flow between components. Feedback: responses that influence future behavior. Stability: patterns that maintain themselves once established.29

Assemble these in the right configuration and you get something that looks like decision-making. Whether you call it intelligence is a semantic choice. Functionally, it is hard to distinguish from cognition.30

Intelligence is organizational, not material. It is about how parts relate, not what parts are made of. Neurons are good at certain kinds of processing, but they are not the only substrate for adaptive response. They are one solution among many.

The democracy of the body

Your body is not a dictatorship with the brain as absolute ruler. It is something more like a democracy: a distributed system where countless local decisions aggregate into coherent action.

Your gut microbiome can influence brain function through immune, endocrine, and neural pathways, though the strength and direction of effects vary across contexts and individuals.31 Your immune system makes sophisticated judgments about what counts as self and what counts as foreign, a distinction more nuanced than most philosophy.32 Your heart contains an intrinsic network of neurons that helps regulate and coordinate cardiac function, though the basic beat is generated by pacemaker cells in the sinoatrial node, even without brain input.33

The brain is important. But it is less a commander than a coordinator. One voice in a chorus of cellular conversations. A particularly loud voice, perhaps. But not the only one speaking.34

This has implications that ripple outward. If cognition is distributed, where does the self reside? If cells make decisions, are they morally considerable? If intelligence is organizational, how should we think about the boundaries of mind?35

I find these questions more interesting than the old debate about whether machines can think. Cells make it hard to deny that many cognitive functions can proceed without brains. The question is what kinds of organization produce what kinds of thought. And what we owe to the systems that think in ways we are only beginning to understand.

The patience of biology

One more lesson from cells: the power of patience.

Biological systems do not optimize for speed. They optimize for robustness, adaptability, and long-term survival. Evolution takes millions of years. Development takes months. Wound healing takes weeks. By engineering standards, these timescales are glacial.36

But the results are remarkable. Self-repairing systems. Adaptive immunity that remembers threats for decades. Bodies that maintain themselves for a century. These achievements exist not despite the slowness but because of it. The long timescales allow for error correction, redundancy, graceful degradation. The kind of deep reliability that fast systems cannot achieve.37

We build systems that optimize for speed and then wonder why they are fragile. Cells optimize for persistence and achieve both reliability and adaptability. There is a lesson here about complex systems of any kind. Sometimes the fast answer is the wrong answer. Sometimes the distributed solution outperforms the centralized one. Sometimes intelligence is not in the individual but in the interaction.

Cells figured this out three billion years ago. By the time you remember the cut on your hand, the democracy beneath your skin will have already adjourned.

Footnotes

  1. In skin wounds, migrating epithelial cells interact with a provisional extracellular matrix rich in fibrin and fibronectin; collagen deposition becomes prominent later during granulation and remodeling.

  2. Contact inhibition is one of several mechanisms that stop proliferation. When healthy cells touch, junction signaling triggers pathways (including the Hippo-YAP cascade) that halt the cell cycle. But the “stop” is multifactorial: declining growth-factor gradients, mechanical tension from ECM remodeling, and immune-to-repair transitions all contribute. Cancer cells often lose contact inhibition; they keep dividing even when surrounded by neighbors.

  3. Emergence is often invoked loosely. Here I mean it technically: the wound-healing behavior cannot be predicted from the properties of individual cells in isolation. It arises from their interactions. The whole has properties the parts do not.

  4. A week is typical for a small, superficial cut. Deeper wounds can remodel for months and may leave scar tissue. The basic sequence is the same; the timeline scales with damage.

  5. Escherichia coli is the best-studied example. Its chemotaxis system, worked out in exquisite molecular detail by Howard Berg and others, remains the canonical model of cellular decision-making.

  6. A micron is one-millionth of a meter. E. coli is roughly 2 µm long and 0.5-1 µm wide. Visible light wavelengths run 0.4-0.7 µm, so the bacterium is a few wavelengths across, an organism you could never resolve without a microscope.

  7. The molecular mechanism is called adaptation. Methyl groups are added to receptors over time, resetting the baseline. In control theory terms, this is integral feedback: the system integrates deviation from a setpoint and adjusts accordingly. The result is robust, near-perfect adaptation across several orders of magnitude of background concentration. See Barkai & Leibler (1997), “Robustness in simple biochemical networks,” Nature, for the robustness framework, and Yi et al. (2000), “Robust perfect adaptation in bacterial chemotaxis through integral feedback control,” PNAS, for the control theory interpretation.

  8. This claim is not uncontroversial. Some philosophers argue that computation requires representation, which requires semantics, which requires… and so on. I am using “computation” in the engineering sense: input-output transformation according to rules. By that standard, the bacterium computes.

  9. I am describing what the cell does, not what it experiences. Whether bacteria have anything like subjective experience is a separate question I am not equipped to answer. The functional description holds regardless.

  10. Estimates vary. A widely cited revision (Sender et al., 2016) suggests roughly 30 trillion human cells and 38 trillion bacteria in a “reference man.” You are, by cell count, about half bacteria.

  11. Lens cells are remarkable. They lose their nuclei and organelles during development, becoming transparent bags of crystallin proteins. They are alive but contain almost no internal structure. They cannot divide or repair themselves, which is why cataracts are irreversible without surgery.

  12. The concept of positional information was developed by Lewis Wolpert. A cell knows “where it is” in the embryo through gradients of signaling molecules. Different concentrations activate different genes. This is how a stripe of cells becomes a limb: they read their position in a morphogen gradient.

  13. Morphogens are signaling molecules that form concentration gradients across developing tissues. Classic examples include Sonic hedgehog (Shh), Bone morphogenetic proteins (BMPs), and Wnt. The names are colorful; the science is precise.

  14. This is why “genetic determinism” misses the point. The genome does not contain a blueprint for an organism. It contains rules for responding to context. The organism emerges from genes interacting with environment and with each other. Identical twins diverge because their contexts diverge.

  15. Gene regulatory networks (GRNs) are now mapped in detail for several organisms, especially sea urchins (Eric Davidson’s work) and Drosophila. They are not simple circuits. They have nested feedback loops, redundancy, and modularity. They look more like engineered control systems than random tangles.

  16. Cell memory without DNA mutation is called epigenetics. It involves chemical modifications to DNA and its associated proteins (histones) that affect which genes are accessible. These modifications can be inherited through cell division. Reprogramming (as in induced pluripotent stem cells) and transdifferentiation show that cell fates, while stable, are not absolute. Strong perturbations can push cells across ridges into other attractors.

  17. Conrad Waddington introduced the “epigenetic landscape” metaphor in 1957. It has been both enormously influential and frequently misused. Modern developmental biology has formalized it: the landscape corresponds to a dynamical system’s state space, and the valleys are stable attractors. But a true global potential function that fully determines dynamics exists only in special cases; more general treatments emphasize stochastic effects and “quasi-potential” approximations.

  18. Technically, a cell type is an attractor of the gene regulatory network dynamics. Stuart Kauffman pioneered this view in the 1960s. He proposed a provocative heuristic: that the number of cell types scales roughly as the square root of the number of genes. The quantitative claim is debated, but the attractor framework has proven durable.

  19. The attractor view of cancer was developed by Sui Huang and others. It suggests that cancer is not merely genetic damage but a stable alternative state the system can fall into. This has therapeutic implications: you might be able to push cells out of the cancer attractor without killing them.

  20. Quorum sensing was discovered in the bioluminescent bacterium Vibrio fischeri, which lives in the light organs of certain squid. The bacteria only produce light when present in high concentrations. The squid uses the light for counter-illumination camouflage.

  21. The light is produced by the enzyme luciferase acting on the substrate luciferin. The genes are only expressed above a threshold concentration of signaling molecule. This threshold corresponds to a “quorum” of bacteria.

  22. This is sometimes called “swarm intelligence.” The term is evocative but can mislead. There is no collective mind. There are only individuals responding to local information in ways that happen to produce globally adaptive behavior.

  23. Physarum polycephalum is technically a “plasmodial slime mold”: a single cell with many nuclei. It can grow indefinitely, which is why it reaches macroscopic sizes. When conditions deteriorate, it can form spores and wait.

  24. The Tokyo rail experiment was published in Science in 2010 by Tero et al. The slime mold network was not identical to the rail system but was strikingly similar and comparable in efficiency and fault tolerance. Engineers have since used slime-mold-inspired algorithms for network design.

  25. Stigmergy: coordination through environmental modification. Ants lay pheromone trails; later ants follow the strongest trails. No ant knows the global path. The path emerges from accumulated local decisions. The slime mold works similarly, reinforcing high-flow channels.

  26. The tendency to assume neurons are necessary and sufficient for cognition is sometimes called “neurocentric” thinking. It is understandable, given how much neuroscience has taught us. But it can blind us to cognition-like processes in other substrates.

  27. The brain-as-computer metaphor has been productive but limiting. It encourages us to think of neurons as processors and synapses as connections. But neurons are cells first, with all the decision-making machinery I have described. Computation is embedded in cellular biology all the way down.

  28. Microbial life is evidenced in rocks at least 3.5 billion years old (for example, the Dresser Formation hot-spring deposits dated to roughly 3.48 Ga). Neurons and nervous systems likely emerged during early animal evolution around the Ediacaran, perhaps 635-540 million years ago, though the details of early animal phylogeny make any single date approximate. Cells were making feedback-driven decisions for billions of years before neurons existed.

  29. These ingredients are not independent. Memory requires stability (some states persist). Communication requires sensing (receiving signals). Feedback couples sensing to action. The ingredients form their own network of dependencies.

  30. Functionalism has critics. Some argue it cannot account for consciousness, qualia, subjective experience. These are genuine puzzles. But they apply equally to neurons. If we do not fully understand how neurons generate experience, we cannot use neurons as the sole criterion for cognition.

  31. The gut-brain axis is now a major research area. Gut bacteria produce neurotransmitter precursors, short-chain fatty acids, and other molecules that can affect brain function. The vagus nerve carries signals bidirectionally. Causal evidence in humans is still accumulating, and effects vary by individual and context.

  32. The immune system distinguishes self from non-self through molecular recognition. T cells learn in the thymus to ignore the body’s own proteins. Autoimmune diseases occur when this education fails. The system must balance tolerance (ignoring self) with vigilance (attacking threats).

  33. The heart’s intrinsic nervous system (sometimes called the “little brain” or intracardiac nervous system) contains on the order of ten thousand neurons; anatomical estimates in humans suggest more than 14,000 intrinsic cardiac neurons, and counts vary by method and definition. It modulates heart function, but the basic rhythm is myogenic: pacemaker cells in the sinoatrial node spontaneously depolarize without needing neural input. That is why a transplanted heart, initially denervated, still beats.

  34. The brain as coordinator rather than controller is central to embodied cognition. The brain does not micromanage movement; it sets goals and lets the body figure out the details. Much of motor control is handled by spinal circuits and peripheral feedback loops.

  35. The extended mind thesis (Andy Clark, David Chalmers) argues that cognition can extend beyond the brain, into notebooks, smartphones, and other external resources. If cells outside the brain contribute to thought, the boundary of mind is even more porous than this thesis suggests.

  36. Engineering operates on timescales of months to years. Evolution operates on timescales of millennia to millions of years. This is not inefficiency; it is the price of open-ended exploration. Faster search covers less space.

  37. Biological robustness comes from redundancy (multiple copies, multiple pathways), modularity (failures are contained), and degeneracy (different components can perform similar functions). These principles are now being applied to engineering complex systems, including AI.