
Teetering on the Edge of Chaos

  • Shashank Singh Yadav
  • Sep 10, 2020
  • 4 min read
“We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at any given moment knew all of the forces that animate nature and the mutual positions of the beings that compose it, if this intellect were vast enough to submit the data to analysis, could condense into a single formula the movement of the greatest bodies of the universe and that of the lightest atom; for such an intellect nothing could be uncertain and the future just like the past would be present before its eyes.” ~Marquis Pierre Simon de Laplace

Laplace thus postulated a “demon”: a superintelligence that, knowing the positions, velocities, and forces of all the particles in the universe at one instant, could know the universe for all time.

Physics bestows on us the power to predict things. The law of gravitation allows us to predict eclipses thousands of years in advance and to trace the trajectories of planets with near certainty. Yet, when we look around, we find extremely complex systems that are almost impossible to predict. Even the simplest systems are now seen to create extraordinarily difficult problems of predictability. Only a new kind of science could begin to cross the great gulf between knowledge of what one thing does—one water molecule, one cell of heart tissue, one neuron—and what millions of them do.

Chaos theory is the branch of mathematics that studies systems acutely sensitive to their starting conditions: a very small change in the initial state of a chaotic system makes an enormous difference after a while. Yet measurements can never be perfect. Scientists marching under Newton’s banner believed that, given an approximate knowledge of a system’s initial conditions and an understanding of natural law, one could calculate the approximate behaviour of the system. This assumption lay at the philosophical heart of science.
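To make this concrete, here is a minimal sketch (not from the original post) using the logistic map, one of the simplest systems known to behave chaotically. The starting value, the parameter r = 3.9, and the iteration count are illustrative choices, not anything the text prescribes:

```python
# The logistic map x_{n+1} = r * x_n * (1 - x_n) is one of the simplest
# chaotic systems. At r = 3.9 it is chaotic: two starting points that
# differ only in the sixth decimal place soon follow unrelated paths.

def logistic_trajectory(x0, r=3.9, steps=100):
    """Iterate the logistic map from x0 and return every value visited."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)   # "true" initial condition
b = logistic_trajectory(0.200001)   # the same value, measured with a tiny error

# The gap between the two runs grows from about 1e-6 to order one.
divergence = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {divergence[0]:.1e}, largest gap: {max(divergence):.3f}")
```

No matter how many extra decimal places the measurement gains, the same exponential stretching only postpones the divergence by a few more iterations; it never removes it.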

As one theoretician liked to tell his students: “The basic idea is that you don’t have to take into account the falling of a leaf on some planet in another galaxy when you’re trying to account for the motion of a billiard ball on a pool table on earth. Very small influences can be neglected. There’s a convergence in the way things work, and arbitrarily small influences don’t blow up to have arbitrarily large effects.” Classically, the belief in approximation and convergence was well justified. It worked. A tiny error in fixing the position of Comet Halley in 1910 would only cause a tiny error in predicting its arrival in 1986, and the error would stay small for millions of years to come. Computers rely on the same assumption in guiding spacecraft: approximately accurate input gives approximately accurate output. Economic forecasters rely on this assumption, though their success is less apparent. So did the pioneers of global weather forecasting.


When Lorenz Discovered the Butterfly Effect


On a winter day in 1961, Edward Lorenz, SM ’43, ScD ’48, a meteorology professor at MIT, entered some numbers into a computer program simulating weather patterns and then left his office to get a cup of coffee while the machine ran. When he returned, he noticed a result that would change the course of science.

The computer model was based on 12 variables, representing things like temperature and wind speed, whose values could be depicted on graphs as lines rising and falling over time. On this day, Lorenz was repeating a simulation he’d run earlier—but he had rounded off one variable from .506127 to .506. To his surprise, that tiny alteration drastically transformed the whole pattern his program produced, over two months of simulated weather.
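Lorenz’s actual model tracked 12 variables. As a stand-in, the sketch below repeats his rounding experiment on the three-variable system he later published in 1963; the other two starting values, the forward-Euler integration, the step size, and the run length are all illustrative assumptions, with only the .506127 → .506 rounding taken from the anecdote:

```python
# Repeating Lorenz's accident on his famous three-variable system:
#   dx/dt = sigma * (y - x)
#   dy/dt = x * (rho - z) - y
#   dz/dt = x * y - beta * z

def lorenz_run(x0, y0=1.0, z0=1.05, dt=0.005, steps=6000,
               sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz equations with forward Euler; return the end state."""
    x, y, z = x0, y0, z0
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return x, y, z

full = lorenz_run(0.506127)    # the value the computer had stored
rounded = lorenz_run(0.506)    # the value Lorenz typed back in

# By the end of the run the two "weather patterns" disagree completely.
gap = max(abs(a - b) for a, b in zip(full, rounded))
print(f"largest coordinate separation after the run: {gap:.3f}")
```

A finer integrator would change the numbers but not the lesson: the rounding error is amplified exponentially until the two simulations bear no resemblance to each other.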

The unexpected result led Lorenz to a powerful insight about the way nature works: small changes can have large consequences. The idea came to be known as the “butterfly effect” after Lorenz suggested that the flap of a butterfly’s wings might ultimately cause a tornado. And the butterfly effect, also known as “sensitive dependence on initial conditions”, has a profound corollary: forecasting the future can be nearly impossible.

The discovery of chaos has created a new paradigm in scientific modelling. On one hand, it implies new fundamental limits on the ability to make predictions. On the other hand, the determinism inherent in chaos implies that many random phenomena are more predictable than had been thought. Random-looking information gathered in the past—and shelved because it was assumed to be too complicated—can now be explained in terms of simple laws. Chaos allows order to be found in such diverse systems as the atmosphere, dripping faucets, and the heart. The result is a revolution that is affecting many different branches of science.

Laplace’s demon is impossible, and for two distinct reasons. The old reason is that modern quantum physics is inherently indeterministic: the future is only probabilistic, though it may be “adequately determined.” The new reason is that there is not enough information in the past (none at all in the early universe) to determine the present.

Relativity eliminated the Newtonian illusion of absolute space and time; quantum theory eliminated the Newtonian dream of a controllable measurement process; and chaos eliminates the Laplacian fantasy of deterministic predictability.


©2020 by KMC Astro Club