In the lap of the gods

Whether you flip a coin or roll the dice, the outcome is utterly unpredictable. Or so we like to think. We rely on randomness for cryptography, engineering, physics — and to explain the workings of some ecosystems. But is it quite what it seems? In this 10-page investigation, we take a closer look at random events and the part they play in our world, from quantum theory to coincidence. Ian Stewart kicks things off with a provocative question: is randomness anything more than an invention of our superstitious minds?

THE human brain is wonderful at spotting patterns. It’s an ability that is one of the foundation stones of science. When we notice a pattern, we try to pin it down mathematically, and then use the maths to help us understand the world around us. And if we can’t spot a pattern, we don’t put its absence down to ignorance. Instead we fall back on our favourite alternative. We call it randomness.

We see no patterns in the tossing of a coin, the rolling of dice, the spin of a roulette wheel, so we call them random. Until recently we saw no patterns in the weather, the onset of epidemics or the turbulent flow of a fluid, and we called them random too. It turns out that “random” describes several different things: it may be inherent, or it may simply reflect human ignorance.

Little more than a century ago, it all seemed straightforward. Some natural phenomena were ruled by physical laws: the orbits of the planets, the rise and fall of the tides. Others were not: the pattern of hailstones on a path, for example. The first breach in the wall between order and chaos was the discovery by Adolphe Quetelet around 1870 that there are statistical patterns in random events. The more recent discovery of chaos – apparently random behaviour in systems ruled by rigid laws – demolished parts of the wall completely. Whatever the ultimate resolution of order and chaos may be, they cannot be simple opposites.

Yet we still can’t seem to resist the temptation of discussing real-world processes as if they are either ordered or random. Is the weather truly random or does it have aspects of pattern? Do dice really produce random numbers or are they in fact deterministic? Physicists have made randomness the absolute basis of quantum mechanics, the science of the very small: no one, they say, can predict when a radioactive atom will decay. But if that is true, what triggers the event? How does an atom “know” when to decay? To answer these questions, we must sort out what kind of randomness we are talking about.

Is it a genuine feature of reality or an artefact of how we model reality?

Let’s start with the simplest ideas. A system can be said to be random if what it does next does not depend upon what it has done in the past. If I toss a “fair” coin and get six heads in a row, the seventh toss can equally well be heads or tails. Conversely, a system is ordered if its past history affects its future in a predictable way. We can predict the next sunrise to within fractions of a second, and every morning we are right. So a coin is random but sunrise is not.

The pattern of sunrise stems from the regular geometry of the Earth’s orbit. The statistical pattern of a random coin is more puzzling. Experiments show that in the long run, heads and tails turn up equally often, provided the coin is fair. If we think of the probability of an event as the proportion of times that it happens in a long series of trials, then both heads and tails have probability ½. That’s not actually how probability is defined, but it is a simple consequence of the technical definition, called the law of large numbers.
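The long-run evening-out is easy to see numerically. Here is a minimal sketch in Python (the seed and toss counts are my own illustrative choices, not from the article): the proportion of heads wanders, but homes in on ½ as the number of tosses grows.

```python
import random

random.seed(1)  # fixed seed so the illustrative run is reproducible

def proportion_heads(n_tosses):
    """Simulate n_tosses of a fair coin; return the fraction that came up heads."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The law of large numbers in action: the proportion of heads
# drifts ever closer to 1/2 as the number of tosses grows.
for n in (10, 1000, 100_000):
    print(n, proportion_heads(n))
```

For 100,000 tosses the proportion is typically within a fraction of a per cent of ½, even though any individual toss remains a 50:50 affair.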

The way coin tosses even out in the long run is a purely statistical feature of large numbers of tosses (see “The law of averages”). A deeper question, with a far more puzzling answer, is: how does the coin “know” that it should be equally likely to come down heads as tails? The answer, when you look more closely, is that a coin is not a random system at all. We can model the coin as a thin, circular disc. If the disc is launched vertically with a known speed and a known rate of rotation we can work out exactly how many half-turns it will make before it hits the floor and comes to rest. If it bounces, the calculation is harder but in principle it can be done. A tossed coin is a classical mechanical system. It obeys the same laws of motion and gravity that make the orbits of planets predictable. So why isn’t the coin predictable?

Well, it is – in principle. In practice, however, you don’t know the upward speed or the rate of spin, and it so happens that the outcome is very sensitive to both. From the moment you toss a coin – ignoring wind, a passing cat and other extraneous features – its fate is determined. But because you don’t know the speed or the rate of spin, you have no idea what that inevitable fate is, even if you are incredibly quick at doing the sums.
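Stewart’s point, that a tossed coin is deterministic but sensitive, can be sketched with the thin-disc model he describes. Assuming no air resistance and no bounce (and launch numbers I have made up for illustration), the coin is airborne for 2v/g seconds and the parity of the number of half-turns it completes decides which face lands up:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def coin_outcome(speed, spin, starts_heads=True):
    """Deterministic thin-disc coin model: no air resistance, no bounce.

    speed : upward launch speed in m/s
    spin  : rotation rate in radians per second
    """
    airtime = 2 * speed / G                 # time to return to launch height
    half_turns = int(spin * airtime / math.pi)
    flipped = half_turns % 2 == 1           # an odd number of half-turns flips the face
    if starts_heads:
        return "tails" if flipped else "heads"
    return "heads" if flipped else "tails"

# A small change in spin rate changes the count of half-turns,
# and with it the outcome: deterministic, but sensitive.
print(coin_outcome(2.0, 38.0))  # 4 half-turns -> heads
print(coin_outcome(2.0, 40.0))  # 5 half-turns -> tails
```

The sums are trivial; the problem in practice is that nobody knows the launch speed and spin rate to the accuracy the parity calculation demands.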

A dice is the same. You can model it as a bouncing cube whose behaviour is mechanical and is governed by deterministic equations. If you could monitor the initial motion accurately enough and do the sums fast enough you could predict the exact result. Something along these lines has been done for roulette. The prediction is less precise – which half of the wheel the ball will end up in – but that’s good enough to win, and the results don’t have to be perfect to take the casino to the cleaners.

When Albert Einstein questioned the randomness of quantum mechanics, refusing to believe that God throws dice, he chose entirely the wrong metaphor. He should have believed that God does play dice. Then he could have asked how the dice behave, where they are located, and what the real source of quantum “randomness” is.

There is, however, a second layer to the problem. The difficulty in predicting the roll of a dice is not just caused by ignorance of the initial conditions. It is made worse by the curious nature of the process: it is chaotic. Chaos is not random, but the limitations on the accuracy of any measurement we can make mean it is unpredictable. In a random system, the past has no effect on the future. In a chaotic system, the past does have an effect on the future, but the sums that ought to let us work out what that effect will be are extremely sensitive to tiny observational errors. Any initial error, however small, grows so rapidly that it ruins the prediction.

A tossed coin is a bit like that: a large enough error in measuring the initial speed and spin rate will stop us knowing the outcome. But a coin is not truly chaotic, because that error grows relatively slowly as the coin turns in the air. In a genuinely chaotic system, the error grows exponentially fast. The sharp corners of dice, which come into play when the perfect mathematical cube bounces off the flat table top, introduce this kind of exponential divergence. So dice seem random for two reasons: human ignorance of initial conditions as with the coin, and chaotic (though deterministic) dynamics.
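The exponential error growth that marks genuine chaos can be seen in a standard toy system. The logistic map x → 4x(1 − x) is a textbook chaotic map (my choice of example; it has nothing to do with dice specifically): two orbits started 10⁻¹⁰ apart separate by roughly a factor of two per step, so within a few dozen steps the difference is of order one and all predictive power is gone.

```python
def logistic(x):
    """One step of the logistic map at r = 4, a textbook chaotic system."""
    return 4.0 * x * (1.0 - x)

def separation_history(x0, eps, steps):
    """Iterate two orbits started eps apart; return |difference| after each step."""
    x, y = x0, x0 + eps
    history = []
    for _ in range(steps):
        x, y = logistic(x), logistic(y)
        history.append(abs(x - y))
    return history

# An initial error of one part in ten billion roughly doubles per
# iteration, reaching order one within a few dozen steps.
hist = separation_history(0.3, 1e-10, 40)
print(hist[0], max(hist))
```

A coin’s error, by contrast, grows only in proportion to the number of turns in the air, which is why it is sensitive but not truly chaotic.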

Model behaviour

Everything I have described so far has depended on the mathematical model that was chosen to describe it. So does the randomness, or not, of a given physical system depend on the model you use?

To answer that, let’s take a look at the first great success of random models in physics: statistical mechanics. This theory underpins thermodynamics – the physics of gases – which was to some extent motivated by the need to make more efficient steam engines. How efficient can a steam engine get? Thermodynamics imposes very specific limits.

In the early days of thermodynamics, attention was directed at large-scale variables like volume, pressure, temperature and quantities of heat. The so-called “gas laws” connect these variables. For instance, Boyle’s law says that the pressure of a sample of gas multiplied by its volume is constant at any given temperature. This is an entirely deterministic law: given the volume you can calculate the pressure, or vice versa.

However, it soon became apparent that the atomic-scale physics of gases, which underlies the gas laws, is effectively random: molecules of gas bounce erratically off each other. Ludwig Boltzmann was the first to explore how bouncing molecules, modelled as tiny hard spheres, relate to the gas laws (and much else). In his theory, the classical variables – pressure, volume and temperature – appeared as statistical averages that assumed an inherent randomness. Was this assumption justified? Just as coins and dice are at root deterministic, so is a system composed of vast numbers of tiny hard spheres. It is cosmic snooker, and each ball obeys the laws of mechanics. If you know the initial position and velocity of every sphere, the subsequent motion is completely determined. But instead of trying to follow the precise path of every sphere, Boltzmann assumed that the positions and speeds of the spheres have a statistical pattern that is not skewed in favour of any particular direction. Pressure, for example, is a measure of the average force exerted when the spheres bounce off the inner walls of their container, assuming that the spheres are equally likely to be travelling in any direction.
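Boltzmann’s step, treating pressure as a statistical average over deterministic collisions, is captured by the standard kinetic-theory formula p = Nm⟨v²⟩/3V. The sketch below draws molecular velocities from the Maxwell–Boltzmann distribution and recovers the ideal gas law; the gas parameters (a nitrogen-like molecule in a 1 mm³ box) are illustrative numbers of my own:

```python
import random

random.seed(2)
K_B = 1.380649e-23   # Boltzmann constant, J/K

def kinetic_pressure(n_molecules, mass, temperature, volume):
    """Estimate pressure as a statistical average, kinetic-theory style.

    Each velocity component is drawn from the Maxwell-Boltzmann
    distribution (a Gaussian with variance k_B*T/m), and pressure
    is computed as p = N * m * <v^2> / (3 * V).
    """
    sigma = (K_B * temperature / mass) ** 0.5
    mean_sq = sum(
        sum(random.gauss(0.0, sigma) ** 2 for _ in range(3))
        for _ in range(n_molecules)
    ) / n_molecules
    return n_molecules * mass * mean_sq / (3.0 * volume)

# Illustrative numbers: 1e5 nitrogen-like molecules in a 1 mm^3 box at 300 K.
p_stat = kinetic_pressure(100_000, 4.65e-26, 300.0, 1e-9)
p_ideal = 100_000 * K_B * 300.0 / 1e-9   # ideal gas law, for comparison
print(p_stat, p_ideal)
```

The two numbers agree to within a fraction of a per cent: the deterministic gas law emerges as an average over the “random” molecular motions, which is exactly the change of viewpoint at issue.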

Statistical mechanics couches the deterministic motion of a large number of spheres in terms of statistical measures, such as an average. In other words, it uses a random model on the microscopic level to justify a deterministic model on the macroscopic level. Is that fair?

Yes it is, though Boltzmann didn’t know it at the time. He effectively made two assertions: that the motion of the spheres is chaotic, and that the chaos is of a special kind that gives a well-defined average state. A whole branch of mathematics, ergodic theory, grew from these ideas, and the mathematics has advanced to the stage where Boltzmann’s hypothesis is now a theorem.

The change of viewpoint here is fascinating. An initially deterministic model (the gas laws) was refined to a random one (tiny spheres), and the randomness was then justified mathematically as a consequence of deterministic dynamics.

So are gases really random or not? It all depends on your point of view. Some aspects are best modelled statistically, others are best modelled deterministically. There is no single answer; it depends on the context. This situation is not at all unusual. For some purposes – calculating the airflow over the space shuttle, for example – a fluid can be considered as a continuum, obeying deterministic laws. For other purposes, such as Brownian motion – the erratic movement of suspended particles caused by atoms bouncing into them – the atomic nature of the fluid must be taken into account and a Boltzmann-like model is appropriate.

So we have two different models with a mathematical link between them. Neither is reality, but both describe it well. And it doesn’t seem to make any sense to say that the reality is or is not random: randomness is a mathematical feature of how we think about the system, not a feature of the system itself.

Quantum roots

So is nothing truly random? Until we understand the roots of the quantum world, we can’t say for sure. In its usual interpretations, quantum mechanics asserts that deep down, on the subatomic level, the universe is genuinely and irreducibly random. It is not like the hard-spheres model of thermodynamic randomness, which traces the statistical features to our (unavoidable) ignorance of the precise state of all the spheres. There is no analogous small-scale model with a few parameters that, if we could only see them, would unlock the mystery. The “hidden variables”, whose deterministic but chaotic behaviour governs the throw of the quantum dice, simply don’t exist. Quantum stuff is random, period. Or is it?

There is certainly a mathematical argument to justify such an assertion. In 1964 John Bell came up with a way of testing whether quantum mechanics is random or governed by hidden variables – essentially, quantum properties that we have not yet learned how to observe. Bell’s work was centred on the idea of two quantum particles, such as electrons, that interact and are then separated over vast distances. Perform a particular set of measurements on these widely separated particles and you should be able to determine whether their properties are underpinned by randomness or in thrall to hidden variables. The answer is important: it dictates whether quantum systems that have interacted in the past are subsequently able to influence each other’s properties – even if they are at opposite ends of the universe.

As far as most physicists are concerned, experiments based on Bell’s work have confirmed that, in quantum systems, randomness – and the bizarre “action at a distance” – rules. Indeed, so keen are they to put over the fundamental role of randomness in quantum theory that they tend to dismiss any attempt to question it further. This is a pity, because Bell’s work, though brilliant, is not as conclusive as they imagine.

The issues are complex, but the basic point is that mathematical theorems involve assumptions. Bell makes his main assumptions explicit, but the proof of his theorem involves some implicit assumptions too, something that is not widely recognised. Tim Palmer, a meteorologist at the European Centre for Medium-Range Weather Forecasts in Reading, UK, who trained as a physicist, has published a paper in which he explains these implicit assumptions (Proceedings of the Royal Society A, vol 451, p 585). His paper also shows that the observed properties of the quantum world are consistent with deterministic hidden-variable theories that allow only “local” influence, rather than an ability to influence systems from the other side of the universe.

The loopholes are technical. For example, one is the implicit assumption that certain correlations between the spin states of distinct particles, computed as integrals, actually are computable, when in fact they may not be. Another is the precise role of the hidden variables, which need not be what Bell assumes. A third is that Bell’s proof involves “counterfactuals”, discussions of what would have happened if an experiment had been performed under different circumstances. There is no way to test a counterfactual, which conventionally should render it “unscientific”.

So, despite the vast weight of opinion, the door is still open for a deterministic explanation of quantum indeterminacy. The devil, as always, is in the detail. It may be difficult, or even impossible to test such a theory, but we can’t know that until someone writes it down. It may not change quantum mechanics much, any more than hard spheres changed thermodynamics. But it would give us an entirely new insight into many puzzling questions. And it would put quantum theory back among all the other statistical theories of science: random from some points of view, deterministic from others.

So quantum stuff apart, we can state with assurance that there really is no such thing as randomness. Virtually all apparently random effects arise not because nature is genuinely unpredictable but because of human ignorance or other limitations on possible knowledge of the world. This insight is not new. Alexander Pope, in his Essay on Man, wrote: “All nature is but art, unknown to thee/ All chance, direction which thou canst not see/ All discord, harmony not understood/ All partial evil, universal good.” Apart from the bit about good and evil, mathematicians now understand precisely why he was right.
