Zoom out far enough from the brain and the question of consciousness meets the question of the universe itself. Why does a cosmos governed by entropy produce beings that experience? Why do the laws of physics appear fine-tuned for the emergence of minds? And what do "meaningful coincidences" tell us about the relationship between inner and outer worlds?
The second law of thermodynamics states that the entropy of an isolated system tends to increase over time. Entropy — roughly, disorder — defines the arrow of time: the reason the past is different from the future, the reason eggs break but don't unbreak.
Consciousness is intimately bound to this arrow. We remember the past and anticipate the future. We experience time as flowing in one direction. This "psychological arrow of time" is not a feature of the laws of physics (which are mostly time-symmetric) — it is a feature of entropy.
Memory itself is a low-entropy structure: your brain correlates its internal states with past events in the external world. This correlation is thermodynamically irreversible. You can form a memory (increase brain-world correlation) but you cannot unform one without erasing information — and erasing information, as physicist Rolf Landauer showed in 1961, necessarily produces heat. Memory has a thermodynamic cost.
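The scale of that cost can be made concrete. The sketch below is illustrative only: it evaluates the standard form of Landauer's bound, the minimum heat released when one bit is erased at a given temperature.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def landauer_limit(temperature_k: float) -> float:
    """Minimum heat (joules) dissipated when one bit is erased
    at temperature T: E >= k_B * T * ln(2)."""
    return K_B * temperature_k * math.log(2)

# At body temperature (~310 K), forgetting a single bit costs at least:
print(f"{landauer_limit(310.0):.2e} J")  # ~2.97e-21 J
```

Tiny per bit, but strictly nonzero — which is all the argument needs: erasure is never thermodynamically free.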
In 2022, researchers from the Human Brain Project found that the second law of thermodynamics provides precise signatures of different brain states:
The pattern is striking: more consciousness = more entropy production. The conscious brain is a dissipative structure — it maintains its complex organization by continuously consuming energy and producing entropy, like a flame or a hurricane. Consciousness may require being far from thermodynamic equilibrium.
Some theorists propose that consciousness is not just correlated with entropy production but is fundamentally about managing entropy:
In 1948, mathematician Claude Shannon published "A Mathematical Theory of Communication" and, almost accidentally, revealed that information and physics share the same deep structure.
Shannon needed to measure information — not meaning, but the surprise of a message. His measure, which he called entropy (at the suggestion of John von Neumann), quantified how unpredictable a signal is:
The formula Shannon derived was, up to a constant factor, identical to the formula Boltzmann had found for thermodynamic entropy in the 1870s. This was not a metaphor or a loose analogy — the mathematics was the same. Shannon's information entropy and Boltzmann's thermodynamic entropy are two expressions of the same underlying quantity.
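The shared formula, H = Σ pᵢ log₂(1/pᵢ), is easy to compute. A minimal sketch (illustrative, not from the original text):

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Shannon entropy H = sum(p_i * log2(1/p_i)), in bits per symbol."""
    counts = Counter(message)
    n = len(message)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(shannon_entropy("aaaaaaaa"))  # 0.0 — perfectly predictable, zero surprise
print(shannon_entropy("abcdabcd"))  # 2.0 — four equally likely symbols, 2 bits each
```

Swap the base-2 logarithm for a natural logarithm and multiply by Boltzmann's constant, and the same expression gives the Gibbs-Boltzmann entropy of a physical system.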
This identity has profound implications:
Information entropy connects to every major theory:
Physicist Jacob Bekenstein discovered in the 1970s that a black hole's entropy — and therefore its information content — is proportional not to its volume but to its surface area. This led to the holographic principle: the maximum information in any region of space is encoded on its two-dimensional boundary, at a density of roughly one bit per Planck area (~10⁻⁷⁰ m²).
This means the universe has a finite information capacity. It is not infinitely detailed. Reality has a resolution. And if reality's information is encoded on surfaces, not in volumes, then the three-dimensional world we perceive may be a projection — a hologram generated from information on a distant boundary.
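"Finite resolution" can be made concrete. This rough sketch assumes the standard Bekenstein-Hawking form of the bound (entropy = area / 4 in Planck units, converted from nats to bits) and applies it to a one-metre sphere:

```python
import math

PLANCK_LENGTH = 1.616255e-35  # metres (CODATA recommended value)

def holographic_bound_bits(radius_m: float) -> float:
    """Upper bound on the information (bits) a sphere can contain,
    per the Bekenstein-Hawking area law: S = A / (4 * l_p^2) nats."""
    area = 4 * math.pi * radius_m ** 2
    return area / (4 * PLANCK_LENGTH ** 2 * math.log(2))

# A sphere one metre in radius can hold at most ~1.7e70 bits —
# astronomically large, but finite, and set by its surface, not its volume.
print(f"{holographic_bound_bits(1.0):.2e} bits")
```

The striking feature is the scaling: doubling the radius quadruples the capacity (area law) rather than multiplying it by eight (volume law).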
The holographic principle connects information entropy to simulation theory and to the block universe: a finite-information universe is, by definition, computable. And a universe that's computable is, at least in principle, simulable.
Here is a rigorous way to think about synchronicity without invoking mysticism.
In information theory, the Kolmogorov complexity of a sequence is the length of the shortest computer program that can produce it. A random sequence is incompressible — the shortest program is as long as the sequence itself. A patterned sequence is compressible — a short program can generate a long output.
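Kolmogorov complexity is uncomputable in general, but an off-the-shelf compressor gives a workable proxy for it. A minimal sketch:

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """zlib-compressed length: a crude, computable stand-in for
    Kolmogorov complexity (which is itself uncomputable)."""
    return len(zlib.compress(data, 9))

patterned = b"ab" * 500  # 1000 bytes generated by a 2-byte rule

random.seed(0)
noise = bytes(random.randrange(256) for _ in range(1000))  # 1000 random bytes

print(compressed_size(patterned) < 50)  # True — the pattern compresses drastically
print(compressed_size(noise) > 900)     # True — randomness barely compresses at all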
Now apply this to events in the world:
The compression framework reframes synchronicity as an information-theoretic phenomenon rather than a causal or mystical one:
If the universe is genuinely more compressible than random — if events share more mutual information than chance allows — then either (a) there are hidden causal connections we haven't found, (b) the universe is structured by a deeper order (laws, design, simulation), or (c) the relationship between information and reality is more fundamental than physics currently acknowledges.
All three options lead back to the central question of this project: what is the relationship between consciousness — the thing that detects and processes information — and the universe that appears to be made of it?
In the 1920s, Swiss psychiatrist Carl Jung noticed a pattern in his clinical work: patients would report inner psychological experiences that coincided — in timing and meaning — with external events, with no apparent causal connection.
He called this synchronicity: "the simultaneous occurrence of a certain psychic state with one or more external events which appear as meaningful parallels to the momentary subjective state."
Key characteristics:
Jung developed his theory in correspondence with Wolfgang Pauli, a Nobel Prize-winning physicist and one of the founders of quantum mechanics. Their collaboration, published in 1952 as The Interpretation of Nature and the Psyche, was an unusual meeting of physics and depth psychology.
Pauli was drawn to the idea because quantum mechanics had already shattered classical causality:
Pauli and Jung proposed the concept of the psychoid archetype — a level of reality deeper than both mind and matter, where the two become indistinguishable. Synchronistic events, in their view, are eruptions from this deeper layer into conscious awareness.
Perhaps the most chilling example of apparent synchronicity in literary history:
In 1838, Edgar Allan Poe published The Narrative of Arthur Gordon Pym of Nantucket, his only complete novel. In the story, four shipwreck survivors are adrift with no food or water. Desperate, they draw straws to decide who will be sacrificed and eaten. The losing straw falls to a man named Richard Parker. He is stabbed to death, and the others survive by consuming his body.
It was fiction. Then it wasn't.
In 1884 — forty-six years later — the yacht Mignonette sank in the South Atlantic. Four men escaped in a lifeboat with no provisions: Captain Tom Dudley, first mate Edwin Stephens, seaman Edmund Brooks, and a 17-year-old cabin boy named Richard Parker.
After nearly three weeks adrift, Parker — who had drunk seawater and fallen gravely ill — slipped into a coma. On July 25, Captain Dudley stabbed the boy in the throat with a penknife. The three survivors fed on his body for four more days until a German ship rescued them.
The parallels are exact: the same name, a shipwreck, starvation, sacrifice at sea. The case of R v. Dudley and Stephens became a landmark in criminal law, establishing that necessity is not a defense to murder. Dudley and Stephens were convicted but their death sentences were commuted to six months — public sympathy was overwhelming.
The skeptical explanation is probability: "Richard Parker" was a common English name; shipwrecks and survival cannibalism were not rare in the 19th century; Poe drew on real accounts of maritime disaster. Given enough stories and enough shipwrecks, a name collision was bound to happen eventually.
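The skeptical intuition can be given a rough numerical shape. The inputs below are entirely hypothetical placeholders — the point is the form of the calculation, not the specific odds:

```python
def collision_probability(p: float, n: int) -> float:
    """Chance that at least one of n independent incidents involves a
    person with a name of frequency p: 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

# Assumed, purely for illustration: 1 sailor in 2,000 named Richard Parker,
# and 200 documented survival-cannibalism incidents across the 19th century.
print(f"{collision_probability(1 / 2000, 200):.1%}")  # ~9.5%
```

Under even these modest made-up inputs the collision is closer to "plausible" than "miraculous" — which is precisely the skeptic's point, and precisely what the felt improbability resists.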
The Jungian reading is different: Poe, channeling the collective unconscious, wrote a story that resonated with an archetypal pattern — and reality later echoed that pattern with uncanny precision. Not causation, but meaningful correlation between psyche and world.
Neither explanation is fully satisfying. The skeptical account explains why it could happen, but not the visceral sense that it shouldn't have. The Jungian account captures the feeling but offers no mechanism. The coincidence sits in the gap between probability and meaning — which may be the same gap as the hard problem itself.
In 1898, former merchant sailor Morgan Robertson published a novella called Futility, or the Wreck of the Titan. It told the story of the largest ocean liner ever built — declared "practically unsinkable" — that strikes an iceberg in the North Atlantic on an April night and sinks with catastrophic loss of life, largely because it carries far too few lifeboats.
In 1912, the RMS Titanic did exactly that.
The parallels are not vague. They are specific, detailed, and numerous:
| Detail | Titan (1898 fiction) | Titanic (1912 reality) |
|---|---|---|
| Name | Titan | Titanic |
| Described as | "Unsinkable" | "Practically unsinkable" |
| Length | 800 ft | 882 ft |
| Tonnage | 45,000 tons | 46,328 tons |
| Propulsion | Triple-screw | Triple-screw |
| Speed at impact | 25 knots | 22.5 knots |
| Passenger capacity | ~3,000 | ~3,300 |
| Lifeboats | 24 (less than half needed) | 20 (less than half needed) |
| Month of sinking | April | April |
| Cause | Iceberg, starboard side | Iceberg, starboard side |
| Location | North Atlantic, ~400 mi from Newfoundland | North Atlantic, ~400 mi from Newfoundland |
Robertson was an experienced seaman who had served in the merchant marine from 1877 to 1886, rising to first mate. After the Titanic sank, some credited him with clairvoyance. He denied it, and scholars generally attribute the similarities to his deep knowledge of shipbuilding trends — he extrapolated where the industry was heading and got remarkably close.
But the skeptical explanation, while reasonable, doesn't fully account for the density of matching details: the near-identical name, the tonnage within 3%, the same month, the same side of the ship, the same distance from Newfoundland, the same fatal arrogance about lifeboats. Robertson didn't predict one thing — he predicted a constellation of things, fourteen years before they happened together.
In Gulliver's Travels (1726), Jonathan Swift described the astronomers of Laputa discovering two small moons orbiting Mars. He specified their orbital distances (3 and 5 Martian diameters from center) and periods (10 hours and 21.5 hours).
151 years later, in 1877, American astronomer Asaph Hall discovered Phobos and Deimos — two small moons of Mars. Their actual orbital distances are 1.4 and 3.5 Martian diameters; their periods are 7.66 and 30.35 hours.
Swift's numbers weren't precise, but they were in the right order of magnitude — and he got the number of moons exactly right. The standard explanation: Kepler had conjectured Mars might have two moons (Earth had one, Jupiter had four — the geometric progression suggested two for Mars). Swift likely drew on this. But Kepler never described their orbits. Swift did, and he wasn't far off.
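One detail sharpens "not far off": Swift's invented figures very nearly satisfy Kepler's third law (period squared proportional to distance cubed), which was well established by 1726. A quick check:

```python
# Kepler's third law: T^2 / a^3 is the same for both moons of a planet,
# so (T2/T1)^2 should equal (a2/a1)^3.
# Swift's figures from Gulliver's Travels (1726):
#   inner moon: a = 3 Martian diameters, T = 10 h
#   outer moon: a = 5 Martian diameters, T = 21.5 h
ratio_periods = (21.5 / 10) ** 2   # 4.62...
ratio_distances = (5 / 3) ** 3     # 4.63...
print(ratio_periods, ratio_distances)  # agree to about 0.2%
```

This suggests Swift didn't guess blindly: whoever supplied the numbers applied Kepler's law deliberately, which makes the orbits internally consistent fiction rather than prophecy.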
Samuel Clemens was born on November 30, 1835 — two weeks after Halley's Comet reached perihelion. In 1909, he wrote:
"I came in with Halley's Comet in 1835. It is coming again next year, and I expect to go out with it. It will be the greatest disappointment of my life if I don't go out with Halley's Comet. The Almighty has said, no doubt: 'Now here are these two unaccountable freaks; they came in together, they must go out together.'" — Mark Twain, 1909
He died on April 21, 1910 — one day after the comet's 1910 perihelion. He predicted his own death, anchored to an astronomical event with a 75-year cycle, and was accurate to within 24 hours.
The Lincoln-Kennedy coincidences are perhaps the most widely circulated synchronicity list in popular culture. Some are genuinely striking:
But this case is equally important as a demonstration of confirmation bias. Many items on the famous list are exaggerated or false — Lincoln never had a secretary named Kennedy. The list cherry-picks matching details and ignores the thousands of non-matches. Martin Gardner's debunking in Scientific American showed how easy it is to construct similar "impossible" coincidence lists for any two comparable figures.
The Lincoln-Kennedy case is valuable precisely because it cuts both ways. It shows that real coincidences exist and that the human mind inflates them. Any honest investigation of synchronicity must hold both truths simultaneously.
The Richard Parker case, the Titan/Titanic case, and Swift's moons share a structure:
Compare this with the Jim Twins. Jim Lewis and Jim Springer's parallel lives — same names, same wives, same dogs, same habits — are scientifically explained by shared genetics. But like the literary coincidences, the sheer density of matching details exceeds what any single explanatory framework comfortably absorbs. Genetics explains some personality convergence, but both twins naming their dog "Toy"?
In all three cases, the individual explanation is plausible. The aggregate is what unsettles us. And our inability to articulate why the aggregate feels different from the sum of its parts may itself be a manifestation of the hard problem — the gap between objective probability and subjective meaning.
Synchronicity remains outside mainstream science. The standard scientific response to "meaningful coincidences" involves:
These are valid criticisms. But the concept has proved difficult to fully dismiss, for a deeper reason: synchronicity is not really a claim about the external world. It is a claim about the relationship between consciousness and the world — and that relationship remains genuinely mysterious. If the hard problem is unsolved, we cannot confidently declare that mind and matter are fully independent domains.
The Jim Twins — identical twins who independently named their dogs Toy, married Lindas and Bettys, and vacationed at the same beach — look like a textbook case of synchronicity. Two lives running on parallel tracks with no causal connection between the individual choices.
Modern science explains this through genetics: shared DNA producing shared predispositions. But this explanation, while powerful, only pushes the mystery deeper. Why does a particular arrangement of nucleotides produce a preference for the name "Toy"? The causal chain from gene to behavior passes through consciousness — through subjective preferences, felt attractions, moments of "choice" — and we do not understand that step.
The Jim Twins sit at the intersection of genetics and synchronicity — where scientific explanation is strong but incomplete, and where the relationship between determinism and meaning remains an open question.
The synchronicity cases above raise the question of whether mind and matter are as separate as classical physics assumes. Quantum mechanics — the physics of the very small — raises the same question from the opposite direction, and with far more experimental rigor.
The foundation of quantum weirdness. Fire photons (or electrons) one at a time through two slits. If no one checks which slit each photon goes through, the photons build up an interference pattern on the detector — as if each photon were a wave passing through both slits simultaneously.
But if you add a detector to determine which slit each photon passes through — even gently, even after it's already passed through — the interference pattern vanishes. The photons behave like particles, going through one slit or the other.
The act of acquiring "which-path" information fundamentally changes the outcome. The photon seems to "know" whether it's being watched.
If the double-slit experiment is strange, the delayed-choice quantum eraser — performed by Kim, Yu, Kulik, Shih, and Scully in 1999 (published in 2000), building on a 1982 proposal by Scully and Drühl and on John Archibald Wheeler's delayed-choice thought experiments of 1978 — is genuinely unsettling.
The setup:

- A source produces entangled photon pairs. One photon of each pair (the signal) travels directly to a detector screen; its twin (the idler) takes a longer path.
- Optical elements on the idler's path randomly either preserve which-path information or erase it before the idler is detected.
The critical detail: the idler photon arrives at its detector after the signal photon has already hit the screen. The "choice" to erase or preserve which-path information is made after the signal photon's journey is complete.
The result:
The signal photon's behavior on the screen — wave-like or particle-like — correlated with a measurement made on its entangled partner that hadn't happened yet when the signal photon hit the detector.
John Archibald Wheeler pushed the delayed-choice concept to cosmological extremes. Imagine light from a distant quasar, gravitationally lensed around a galaxy so it takes two paths to reach Earth — like a cosmic double slit. The photon has been traveling for billions of years.
Wheeler pointed out that a decision made today on Earth — whether to measure which path the photon took or to erase that information — would determine whether the photon had traveled as a wave (both paths) or a particle (one path) for billions of years before the Earth existed.
"We have a strange inversion of the normal order of time. We, now, by moving the mirror in or out, have an unavoidable effect on what we have a right to say about the already past history of that photon." — John Archibald Wheeler
The delayed-choice quantum eraser is frequently misinterpreted. Some important clarifications:
What the experiment does show is deeply strange enough without mystification:
The delayed-choice eraser doesn't prove that consciousness is fundamental to physics. But it creates an opening that won't close:
If the universe keeps track of what information is available — not just what has been observed but what could be observed — then information, knowledge, and observation are not afterthoughts in physics. They are woven into the fabric of how reality behaves.
This is exactly what Pauli and Jung sensed: that the clean separation between "objective matter" and "subjective mind" may be an artifact of classical thinking. Quantum mechanics doesn't tell us the mind creates reality. But it tells us reality cares about what can be known. And "what can be known" is, ultimately, a question about consciousness.
The hard problem asks why matter produces experience. Quantum mechanics asks a mirror question: why does the behavior of matter depend on whether experience of it is possible?
The delayed-choice quantum eraser suggests that future measurements influence past behavior. Most physicists resist that conclusion. But one rigorous theoretical framework takes it seriously — and produces the same predictions as standard quantum mechanics while doing so.
In standard quantum mechanics, a system is described by a single state vector (the wave function) that evolves forward in time from an initial preparation. The past determines the present; the present determines the future. Causality flows one way.
In 1964, physicists Yakir Aharonov, Peter Bergmann, and Joel Lebowitz proposed something different (building on earlier work by Satosi Watanabe in 1955). They described a quantum system using two state vectors:

- one evolving forward in time from the system's preparation in the past — the familiar wave function, and
- one evolving backward in time from a measurement in the system's future.
At any moment between those two measurements, the full description of the system requires both vectors. The present is determined by the past and the future, taken together.
This is the Two-State Vector Formalism (TSVF).
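The formalism's central prediction rule can be written compactly. This is the standard Aharonov-Bergmann-Lebowitz (ABL) rule, stated here in conventional notation rather than quoted from any source — note how the pre-selected state $|\psi\rangle$ and the post-selected state $\langle\phi|$ enter symmetrically:

```latex
% ABL rule: probability of outcome a_j when observable A is measured
% between pre-selection |\psi> and post-selection <\phi|
P(a_j \mid \psi, \phi)
  = \frac{\bigl|\langle\phi|a_j\rangle\langle a_j|\psi\rangle\bigr|^{2}}
         {\sum_k \bigl|\langle\phi|a_k\rangle\langle a_k|\psi\rangle\bigr|^{2}}
```

Drop the post-selection (sum over all final states) and the rule reduces to the ordinary Born rule — which is why TSVF reproduces standard quantum mechanics exactly.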
In TSVF, the double-slit experiment looks different:
The particle's behavior at the slits isn't determined by the past alone. It's shaped by where it will end up. The future participates in the present.
Crucially, TSVF produces exactly the same predictions as standard quantum mechanics. No experiment can distinguish between them. The difference is interpretive: TSVF says the math is telling us something about the structure of time itself.
In 1988, Aharonov, along with David Albert and Lev Vaidman, discovered a striking consequence of TSVF: weak values.
In standard quantum mechanics, a measurement either gets a definite result (collapsing the wave function) or gets nothing. There's no middle ground. But Aharonov's team showed that if you make a measurement that is extremely gentle — so weak that it barely disturbs the system — you can extract a new kind of information: the weak value.
Weak values are strange: they can lie far outside the observable's range of eigenvalues, they can be negative where only positive values seem possible, and they can even be complex.
Weak values have been experimentally confirmed. They are not just theoretical curiosities — they have been measured in optics labs, used to amplify tiny signals, and applied to precision measurement technologies. They are real, observable quantities that only make sense if the future boundary condition is part of the physics.
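The defining formula, A_w = ⟨φ|A|ψ⟩ / ⟨φ|ψ⟩, can be evaluated directly. A minimal single-qubit sketch — the choice of states is illustrative, not tied to any particular experiment — shows how nearly-orthogonal pre- and post-selection pushes the weak value far outside the eigenvalue range:

```python
import math

def weak_value_sigma_z(theta: float) -> float:
    """Weak value A_w = <phi|A|psi> / <phi|psi> for A = sigma_z = diag(+1, -1),
    with pre-selection |psi> = (cos t, sin t) and post-selection
    |phi> = (cos t, -sin t) (real amplitudes, so A_w is real here)."""
    psi = (math.cos(theta), math.sin(theta))
    phi = (math.cos(theta), -math.sin(theta))
    numer = phi[0] * (+1) * psi[0] + phi[1] * (-1) * psi[1]  # <phi|sigma_z|psi>
    denom = phi[0] * psi[0] + phi[1] * psi[1]                # <phi|psi>
    return numer / denom

# sigma_z has eigenvalues +1 and -1, yet the weak value escapes that range:
print(weak_value_sigma_z(math.radians(40)))  # ~5.76, far outside [-1, +1]
```

Here the algebra gives A_w = 1/cos(2θ), which diverges as θ approaches 45° — i.e., as pre- and post-selection become orthogonal.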
TSVF does not mention consciousness. It is rigorous physics, not philosophy. But its implications run directly into the deepest questions about mind and reality:
"We believe that the two-state vector formalism is, in some sense, the 'natural' language of quantum mechanics." — Yakir Aharonov and Lev Vaidman
If TSVF is correct — and experimentally, it is indistinguishable from standard quantum mechanics — then reality is not built on a one-way flow from past to future. It is built on a negotiation between past and future. And consciousness, which experiences only the forward direction, may be seeing only half the picture.
The delayed-choice experiments and TSVF were developed by or inspired by one physicist more than any other: John Archibald Wheeler. In 1989, Wheeler proposed a radical synthesis of everything these experiments imply.
Wheeler coined the phrase "It from Bit" to capture a single, sweeping idea: every physical thing — every particle, every field of force — derives its existence from information.
"Every it — every particle, every field of force, even the spacetime continuum itself — derives its function, its meaning, its very existence entirely from binary choices, bits, yes-or-no indications."
— John Archibald Wheeler, 1989
In Wheeler's vision, the universe is not a machine made of matter that happens to contain observers. It is a participatory phenomenon: observation — the registration of information — is not something that happens within reality. It is what constitutes reality.
Wheeler pushed this further with what he called the Participatory Anthropic Principle: observers are necessary for the universe to come into existence. Not just in the weak sense that we can only observe a universe compatible with our existence, but in the strong sense that the universe requires observers to actualize itself.
This sounds mystical, but Wheeler grounded it in physics:
The universe, in this view, is a self-excited circuit: it creates observers who, by observing, create the universe. A strange loop between consciousness and cosmos.
Wheeler's idea was not just philosophy. It helped launch the field of quantum information science — the recognition that information is not just something we know about physical systems, but something fundamental to physical systems. This insight led directly to quantum computing, quantum cryptography, and quantum teleportation.
The connection to consciousness: if the universe is fundamentally informational, and if consciousness is fundamentally about information processing, then consciousness is not an accidental byproduct of matter. It is the kind of thing the universe is made of.
TSVF treats the future as real enough to send state vectors backward through time. Wheeler's participatory universe treats observation as constitutive of reality. But there is an older, simpler idea from physics that may be the most radical of all.
Einstein's theory of relativity revealed something that most people still haven't fully absorbed: there is no universal "now."
Due to the relativity of simultaneity, observers moving at different speeds disagree about which events are simultaneous. There is no physical basis for preferring one observer's "now" over another's. If your "now" is someone else's "past" or "future," then past, present, and future cannot be fundamentally different kinds of thing.
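The disagreement follows in one line from the Lorentz transformation (standard special relativity, written out here for concreteness):

```latex
% Lorentz transformation of the time separation between two events:
\Delta t' = \gamma\!\left(\Delta t - \frac{v\,\Delta x}{c^{2}}\right),
\qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
% Events simultaneous for one observer (\Delta t = 0) but spatially
% separated (\Delta x \neq 0) have \Delta t' \neq 0 for any observer
% moving at v \neq 0: simultaneity is observer-dependent.
```

Since no inertial observer is privileged, no observer's slice of "now" is privileged either — the premise the block-universe argument builds on.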
This leads to eternalism (also called the block universe): the view that all moments in time — past, present, and future — exist equally. The universe is a four-dimensional block of spacetime. Nothing "flows." Nothing "passes." The experience of time moving forward is something that happens within the block, not to it.
If the block universe is correct, the future already exists. Your decisions at 3pm tomorrow are as fixed as your memories of yesterday. The feeling that the future is "open" and the past is "closed" is an artifact of how consciousness moves through the block — or more precisely, an artifact of how different slices of the block contain memories and not premonitions.
This connects directly to the free will debate. In a block universe:
The block universe's deepest challenge is to consciousness itself. If all moments exist simultaneously, why do we experience time passing? Why does consciousness feel like a spotlight moving through a dark room, illuminating one moment at a time, when the whole room is already lit?
This is, arguably, another face of the hard problem. Physics describes a static four-dimensional structure. Consciousness experiences a dynamic three-dimensional flow. The gap between these descriptions is not a gap in our knowledge of physics or neuroscience. It is the gap between the objective and the subjective — the same gap Chalmers identified, now applied to time itself.
Einstein, after the death of his friend Michele Besso, wrote: "The distinction between past, present, and future is only a stubbornly persistent illusion." If he was right, consciousness is the thing that creates that illusion. And understanding why it does so may be the key to understanding what it is.
The laws of physics contain a set of fundamental constants — the strength of gravity, the mass of the electron, the cosmological constant, and others. These constants appear to be fine-tuned for the existence of complex structures, and ultimately for the existence of conscious life:
The range of values compatible with conscious life is extraordinarily narrow. This is the fine-tuning problem.
If there are vast numbers of universes with different constants, it's not surprising that we find ourselves in one compatible with our existence. We couldn't observe a universe that didn't support observers. This is the weak anthropic principle.
The constants were set by a designer. This is the traditional theistic response. It explains fine-tuning but raises the question of who designed the designer.
Philosopher Philip Goff proposes a radical alternative: the universe itself is conscious, and the fine-tuning reflects the universe's own purposive nature. Under cosmopsychism, consciousness is not something that emerged within the universe — it is a fundamental property of the universe itself, and human consciousness is derived from it.
This inverts the standard scientific picture. Instead of: matter → brains → consciousness, cosmopsychism proposes: cosmic consciousness → matter → human consciousness.
It sounds exotic, but Goff argues it avoids the problems of both the multiverse (unfalsifiable) and design (infinite regress) while building on the philosophical foundations of panpsychism.
Whether or not you find cosmopsychism persuasive, the fine-tuning problem makes a genuine point: consciousness is not an accident of physics. It is enabled by the deepest structure of reality.
The constants of the universe permit stars, which forge heavy elements, which form planets, which develop chemistry, which produces life, which evolves brains, which generate consciousness. This chain is not one among many possible chains — it is extraordinarily improbable without fine-tuning.
The universe doesn't just contain consciousness. It appears to be built for it. Whether that reflects design, selection, or something we haven't yet imagined is one of the deepest open questions in science and philosophy.
Is there an experiment that tests synchronicity directly? One project has tried for over 25 years.
The Global Consciousness Project (GCP), begun at Princeton University in 1998 and directed by psychologist Roger Nelson, maintains a network of 70+ hardware random number generators (RNGs) distributed around the world. These devices continuously produce streams of random bits — electronic coin flips with no physical connection to each other.
The hypothesis: when major global events synchronize human attention and emotion — New Year's celebrations, natural disasters, terrorist attacks — the output of these physically independent RNGs will show statistically significant deviations from randomness.
After 15+ years of data accumulation, the GCP reports a composite statistic showing a 7-sigma departure from expectation — corresponding to odds of roughly 1 in a trillion that the correlation between RNG deviations and global events is mere chance.
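The arithmetic behind "roughly 1 in a trillion" is a standard Gaussian tail calculation — a quick check of the conversion, not an endorsement of the underlying statistics:

```python
import math

def one_tailed_p(sigma: float) -> float:
    """One-tailed Gaussian tail probability for a z-score of `sigma`."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# A 7-sigma excess corresponds to p ~ 1.28e-12, i.e. odds on the order of
# 1 in a trillion under the null hypothesis of pure chance.
p = one_tailed_p(7.0)
print(f"p = {p:.2e}, odds ~ 1 in {1 / p:.1e}")
```

The conversion itself is uncontroversial; the controversies listed below concern whether the null hypothesis and event selection were specified cleanly enough for the z-score to mean what it appears to mean.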
Events showing the strongest effects include:
The GCP remains deeply controversial. Criticisms include:
But the GCP is worth including here for what it attempts, regardless of whether it succeeds. It is the only large-scale, long-running experiment designed to test whether consciousness has effects on the physical world at a global scale. If Jung's synchronicity has any empirical basis, this is where it would show up. The jury is still out.
Entropy, synchronicity, quantum mechanics, and cosmology all converge on a single provocation: consciousness is not a minor epiphenomenon floating on top of a fundamentally mindless universe. It is woven into the fabric of reality — into the arrow of time, into the behavior of particles, into the laws of physics, and possibly into the structure of matter itself.
Wheeler's "It from Bit" says the universe is made of information. The block universe says all times exist equally. TSVF says the future shapes the past. The quantum eraser says what can be known determines what happens. Fine-tuning says the constants of nature are set for minds to emerge. The GCP asks whether minds, once emerged, reach back into the physical world.
Whether this means the universe is literally conscious, or that consciousness is an inevitable consequence of physical law, or that something we don't yet understand connects mind and matter at a fundamental level — the question is wide open. Consciousness is not just a problem for neuroscience. It is a problem for cosmology.