guest post by David Tanzer
At the Azimuth Code Project, we are working to produce educational software that is relevant to the Earth sciences and the study of climate. Here we will look into a stochastic resonance demonstration program, by Allan Erskine and Glyn Adgie. Stochastic resonance is a widely studied phenomenon, which has an application to the theory of ice-age cycles. After explaining how to run the program, we’ll sketch out some of the math and science background, and then move on to dissect the program and its algorithm.
The Azimuth models are interactive web pages. Their behavior is responsive, because the code runs right in the browser, as javascript. They make use of a high-level support library (JSXGraph), which allows them to focus on the logic of the models rather than the presentation mechanisms of the browser.
Start by opening the stochastic resonance model web page. On the same plot it shows a sine wave, called the forcing signal, and a chaotic time-series, called the output signal. There are four sliders, which we’ll call Amplitude, Frequency, Noise and Sample-Path.
• The Amplitude and Frequency sliders control the sine wave. Try them out.
• The output signal depends, in a complex way, on the sine wave. Vary Amplitude and Frequency to see how they affect the output signal.
• The amount of randomization involved in the process is controlled by the Noise slider. Verify this.
• Change the Sample-Path slider to get a different instance of the process.
The program runs a discrete approximation for a “stochastic differential equation” (SDE), which is a specification for the derivative of the output signal as a function of time, the current signal value, and a noise process.
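Concretely, such a discrete approximation advances the signal by small time steps. One step of the simplest scheme, the Euler method, can be sketched in javascript like this (a sketch with illustrative names, not the program’s actual code):

```javascript
// One step of the Euler discretization of an SDE
// dx = deriv(t, x) dt + noise.
function eulerStep(deriv, tStep, t, x, noiseSample) {
  // Advance time by tStep; advance x by the deterministic part
  // (derivative times time step) plus a random increment.
  return { t: t + tStep, x: x + tStep * deriv(t, x) + noiseSample };
}
```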
The program consists of the following components:
Interactive controls to set parameters
Plot of the forcing signal (sine curve)
Plot of the output signal
A function which defines a particular SDE. “Stochastic resonance” is a characteristic of the solutions to this equation.
A general simulator for SDE’s, based on the Euler method
I would like everyone now to locate the source code, as follows:
• Open the web page for the model. The code is now downloaded and running in your browser!
• While there, run your browser’s view-source function. For Firefox on the Mac, it’s Apple-U; for Firefox on the PC, click the right mouse or touchpad button, then select “View Page Source” from the drop-down menu.
• You should see the html for the web page itself.
• Note the following lines at the head of the file, which refer to javascript programs on the web:
<script src='http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=default'></script> <script src='http://cdnjs.cloudflare.com/ajax/libs/jsxgraph/0.93/jsxgraphcore.js'></script> <script src='./StochasticResonanceEuler.js'></script> <script src='./normals.js'></script>
• Each of these lines causes the browser to load the indicated program into its internal javascript interpreter. MathJax is an open-source formula rendering engine. JSXGraph is a cross-platform library for interactive graphics, function plotting, and data visualization. StochasticResonanceEuler.js is the model itself, and normals.js makes a table of random numbers used in the calculation.
• Now, click on the link within the header line for StochasticResonanceEuler.js – and you’re there!
Your next challenge is to scan through StochasticResonanceEuler.js and associate its contents with the Program Inventory listed above. Try to put a blur lens over any items that look obscure – the goal here is only to form rough hypotheses about what might be going on.
Now let’s analyze the differential equation that is used in the model. It sets the deterministic part of the derivative to a time-varying sine wave plus a bistable function of the current signal value:
DerivNonrandom(t, x) = SineWave(t, amplitude, frequency) + Bistable(x),
where Bistable(x) = x * (1 - x^2).
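In javascript, this deterministic part of the derivative might look as follows (a sketch with illustrative names; the program’s own sine-wave convention may differ):

```javascript
// Sketch of the deterministic part of the derivative described above.
function bistable(x) {
  return x * (1 - x * x); // roots at -1, 0 and 1
}

function sineWave(t, amplitude, frequency) {
  return amplitude * Math.sin(2 * Math.PI * frequency * t);
}

function derivNonrandom(t, x, amplitude, frequency) {
  return sineWave(t, amplitude, frequency) + bistable(x);
}
```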
Now let’s analyze the effects of these terms on the output signal.
Alone, the sine wave would cause the output signal to vary sinusoidally, with a 90 degree phase difference.
The bistable polynomial has roots at -1, 0 and 1. The root at 0 is an unstable equilibrium, and -1 and 1 are stable. The basin of attraction for -1 is all the negative numbers, the basin for 1 is the positive numbers, and the point of unstable equilibrium separates the basins.
We will view each basin as one of the states of a bistable system.
Now let’s put the sine wave and the bistable polynomial together. If the wave amplitude is not too large, the system will gravitate towards one of the attractors, and then continue to oscillate around it thereafter – so it never leaves the basin. But if the wave is large enough, the system will be pulled back and forth between the two basins; its state will resonate with the driving signal.
Now, let’s add the noise into the picture. Suppose the sine wave was large enough to periodically pull the system close to zero – but not enough for it to cross over. Then a well-timed noise event could push the system over to the other side. So there will be a phase-dependent probability of the noise triggering a state change. More noise will lead to transitions on more of the cycles. In the flip-flopping between states, we will see a stochastic reflection of the driving sine wave. Further noise will lead to multiple transitions within the cycles, thereby drowning out the signal.
Moral: under the right conditions, the noise may amplify the effect of the input signal.
The theory of the timing of the ice ages is a fascinating, challenging and open problem in science.
Note: here we are using the term ice age in the colloquial sense of being a period of maximal glaciation. The scientific term Ice Age refers to a major period of time that spans thousands of such “ice ages” and the warm periods between them. There have only been four such Ice Ages in the history of the Earth. Each one is characterized by a different timing pattern for the many ice ages that it contains.
A current hypothesis for the timing of the ice ages uses a stochastic resonance model. The climate is modeled as a multistable system, and the forcing results from certain slowly varying, cyclical changes in astronomical variables (such as the tilt of the Earth’s axis). These are known as Milankovitch cycles, and their periods are on the scale of 10,000 to 100,000 years. Note that in our current major Ice Age, the minor ice ages occur roughly every 100,000 years.
In the simplest model, the climate has two stable states: a cold, snowball Earth, and a hot, iceless Earth. Each state is self-reinforcing. A frozen Earth is white, so it doesn’t absorb much solar energy, which keeps it cold and frozen. A hot Earth is dark, so it absorbs a lot of solar energy, which keeps it hot and melted. Included in the model is the fact that the glaciers are concentrated in the northern latitudes – and hence the northern temperatures can trigger a change in the state of the climate.
There are three astronomical cycles that contribute to the forcing function:
• Changing of the eccentricity (ovalness) of the Earth’s orbit, with a period of 100 thousand years
• Changing of the obliquity (tilt) of the Earth’s axis, with a period of 41 thousand years
• Precession (rotary wobbling) of the Earth’s axis, with a period of 23 thousand years
These effects sum to produce a multi-frequency variation in the amount of solar energy reaching the northern latitudes. The induced temperature changes, however, are not large enough to trigger a state change. According to the stochastic resonance hypothesis, it is other, random variations in the heat received up north that may trigger the climate to change states. One such source of variation may be changes in the amount of heat-trapping gases in the atmosphere.
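As a toy illustration only (not a climate model), such a multi-frequency forcing could be sketched as a sum of three sinusoids with the periods listed above; the amplitudes here are invented for illustration, not physical values:

```javascript
// Toy forcing signal: sum of three sinusoids with the Milankovitch
// periods quoted above (t in thousands of years). Amplitudes are
// made up for illustration.
var milankovitchCycles = [
  { period: 100, amplitude: 1.0 }, // eccentricity
  { period: 41,  amplitude: 0.6 }, // obliquity
  { period: 23,  amplitude: 0.4 }  // precession
];

function astronomicalForcing(t) {
  return milankovitchCycles.reduce(function (sum, c) {
    return sum + c.amplitude * Math.sin(2 * Math.PI * t / c.period);
  }, 0);
}
```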
Note that the boost in atmospheric carbon dioxide introduced by human industry is pushing us further away from the tipping point to the next ice age. This issue was discussed on the Azimuth blog, on Dec. 4, 2012:
Arrow said:
Hopefully the extra CO2 is enough to avert the next ice age.
John Baez replied:
There’s been a lot of work on this, discussed very nicely here:
• Andrew Revkin, The next age and the Anthropocene, Dot Earth, 8 January 2012.
The consensus seems to be that we have put enough CO2 into the air to postpone the next glacial period, perhaps for the next 100,000 years. And this paper:
• Gary Schaffer, Long time management of fossil fuel resources to limit global warming and avoid ice age onsets, Geophys. Res. Lett. 36 (2009), L03704.
suggests that if we save our remaining fossil fuels, we could head off the next few glacial cycles by burning them at appropriately chosen times. Another possibility would be to deliberately release more potent greenhouse gases than CO2 whenever a glacial period was imminent.
If we were smart, we might be able to manage the Earth’s temperature for quite a while, avoiding glacial periods without turning up the heat full blast as we’re doing now.
For further information, see:
Besides serving an educational function, such programs also have applications within research: (1) they can be used to perform computational experiments in the pursuit of purely theoretical questions, and (2) they can be used to test empirical theories, by generating their predictions and comparing them to observed and historical data.
For an example of concept exploration, suppose we asked how the effectiveness of a forcing function depends on its frequency. With the current program, we could vary the frequency parameter, and observe how it affects the output. On a more systematic basis, we could write a driver program that varies the parameters and applies measures to the output signal.
For an example of theory testing, one could imagine a program that implements a model of the Milankovitch astronomical cycles, then outputs this signal into the state-changing model of a particular theory of climate, and then finally compares the output signal of the climate model with historical data.
Here we see a nice example of scientific programming, in the service of our understanding of the Earth. It is a natural part of the Azimuth project, whose North Star is the idea of science that really matters for human survival.
Our scientific program consists of seven functions. The top-level function is initCharts. It dispatches to initControls, which builds the sliders, and initSrBoard, which builds the curve objects for the forcing function and the output signal (called “position curve” in the program). Each curve object is assigned a function that computes the (t,x) values for the time series, which gets called whenever the input parameters change. The function assigned to the forcing curve, which computes the sine wave, reads the amplitude and frequency values from the sliders.
The calculation method for the output signal is set to the function mkSrPlot, which performs the simulation. It begins by defining a function for the deterministic part of the derivative:
deriv = Deriv(t,x) = SineCurve(t) + BiStable(x).
Then it constructs a “stepper” function, through the call Euler(deriv, tStep). In general, a stepper function maps the current point (t,x) and a noise sample to the next point (t’,x’). The Euler stepper maps ((t,x), noiseSample) to (t + tStep, x + tStep * Deriv(t,x) + noiseSample).
The simulation loop is then performed by the function sdeLoop, which is given:
• The stepper function
• The noise amplitude (“dither”)
• The initial point (t0,x0)
• A randomization offset
• The number of points to generate
The current point is initialized to (t0,x0), and then the stepper is repeatedly applied to the current point and the current noise sample. The output returned is the sequence of (t,x) values.
The noise samples are read from an array and scaled by the noise amplitude. The array contains many more values than are needed. By changing the initial offset into this array, different instances of the process are obtained.
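Putting these pieces together, the stepper and simulation loop described above can be sketched like this (an illustrative reconstruction, not the exact code of StochasticResonanceEuler.js):

```javascript
// Euler returns a stepper: maps the current point (t, x) and a noise
// sample to the next point.
function Euler(deriv, tStep) {
  return function (t, x, noiseSample) {
    return [t + tStep, x + tStep * deriv(t, x) + noiseSample];
  };
}

// normals is the precomputed array of random numbers; offset selects
// the sample path, and dither scales the noise.
function sdeLoop(stepper, dither, t0, x0, offset, numPoints, normals) {
  var points = [[t0, x0]];
  var t = t0, x = x0;
  for (var i = 0; i < numPoints; i++) {
    var noise = dither * normals[(offset + i) % normals.length];
    var next = stepper(t, x, noise);
    t = next[0];
    x = next[1];
    points.push([t, x]);
  }
  return points; // the sequence of (t, x) values
}
```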
Now that we’ve tried out the program, downloaded its source code, and understood how it works, it’s time to roll up our sleeves and start tweaking it to do new things!
First we’ll make a local copy of the program on your local machine, and get it to run there. Open the html file, and paste its contents into a file, say c:/pkg/stochasticResonance.html. Then paste the source code, and normals.js.
Now point your browser to the file, to make sure that you see the contents, and the model runs. But how can you prove that you’re really executing this local copy? Make some minor edit to the html text, and make sure that it shows up when you reload the page. Next, make a minor edit to StochasticResonanceEuler.js. For example, change the label text on the slider from “forcing function” to “forcing signal.”
Here are some program-modification tasks, beginning with the simplest.
• Change the color of the sine wave.
• Change the exponent in the bistable polynomial to values other than 2, to see how this affects the output signal.
• Add an integer-valued slider, to control this exponent.
• The current code just shows one run of the random process. Modify the program to perform two runs of the process, and show the two output signals in different colors.
• Modify it to perform ten runs, and, using the current output format, display the average of the ten signals.
• Add an input slider to control the number of runs.
• Add another plot, which shows the standard deviation of the output signals, at each point in time.
• Replace the precomputed array of normal random variates with a runtime computation that uses a random number generator. Use the Sample-Path slider to seed the random number generator.
• When the sliders are moved, an event must be fired, which causes the recalculation to take place. Investigate how this mechanism works in our javascript + JSXGraph application, and report back to the blog.
Now let’s use this software to study how the effectiveness of signal transmission depends on the frequency of the forcing signal.
What is the impact of the frequency of the forcing signal on its transmission through stochastic resonance?
• Make a hypothesis about the relationship.
• Check your hypothesis by varying the Frequency slider.
• Write a function to measure the strength of the output signal at the frequency of the forcing sine wave. Rather than invoking a whole signal processing library, we can just hack something together for this purpose. Let sinwave be a discretely sampled sine wave of unit amplitude, at the forcing frequency, and let coswave be a discretely sampled cosine wave. Let sindot = the dot product (sum product) of sinwave and the output signal, and similarly for cosdot. Then the power measure is sindot^2 + cosdot^2.
• Modify the program to perform N trials at each frequency, over some specified range of frequencies, and measure the average power over the N trials. Plot the dependence of power on frequency.
• Refactor the program, and apply it to our study at hand.
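The power measure described in the list above — squared dot products of the output signal against sampled sine and cosine waves at the forcing frequency — can be sketched in javascript as follows (function and variable names are illustrative):

```javascript
// Hacked-together power measure: correlate the output signal with
// sine and cosine waves at the forcing frequency, and sum the squares.
// outputSignal[i] is the x-value sampled at time times[i].
function powerAtFrequency(outputSignal, times, frequency) {
  var sindot = 0, cosdot = 0;
  for (var i = 0; i < outputSignal.length; i++) {
    var phase = 2 * Math.PI * frequency * times[i];
    sindot += Math.sin(phase) * outputSignal[i];
    cosdot += Math.cos(phase) * outputSignal[i];
  }
  return sindot * sindot + cosdot * cosdot;
}
```

A pure sine wave at the forcing frequency scores high on this measure, while a flat signal scores near zero over whole periods.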
We need to start planning for some new models to program. Let’s work out specs for some manageable units of work, which will teach the users – and the programmers, in the process of developing the code – about various concepts that are relevant to climate modelling (such as stochastic resonance). To be clear, we are not talking about implementation of full-scale, complex climate models. On the other hand, if someone can spec out the structure of an elementary sort of climate model, that would be truly appreciated.
• If you are a climate scientist, we welcome your suggestions for some next, incremental, steps.
• Consider the notion of a hierarchy of models, ranging from the most simple to the progressively more concrete. It has been pointed out by —- that climatologists are at a disadvantage compared to biologists, because nature has provided biologists with an existing hierarchy of models – ranging from the simplest single-cell organisms, to the worms, etc. The simpler models give some mental preparation for the more complex ones. But with climate models, we need to create the simpler models artificially.
But we programmers may now need to roll up our sleeves, and start learning elements of this science. Begin to study Gerald North’s book, or this Textbook. Think of how to write programs to illustrate some aspects of the models. Simplify!
Then let’s talk about it! Pick a topic in one of these sources, see how far you get with it, report back what you learned, and raise questions about the parts that you didn’t understand. It’s fine to carry on the discussion here, but we’ve found the Forum, with its really nice interface for managing discussion threads, to be a more productive and organized format. Browse around; here are some sample threads. We would love to have anyone who is interested in contributing to the planning discussion, in writing code, or in using it, join in at the forum. Just send a request email to John Baez for a login, explaining whatever your interest may be – no earnest request will be denied!
Note also, that in this educational pursuit, aiming for software that is relevant to climate science, we take a broad view of what is relevant. There is, for example, a lot of probability theory and stochastic processes that underlie the modelling of reality, and so software to educate about these concepts is also part of the Azimuth Code Project.
All models that you post here will be considered as candidates for the Azimuth Code Project page. This may be a way for programmers, ultimately, to give back to the Earth.
We’re just beginning to organize for a new round of development. Programmers, we would welcome your participation, on many levels:
• Discussion and planning on the forum
• Writing specs for new modelling programs
• Coding
• Testing
• Documentation and blogging
Check out the forum. If you’re interested in participating, just send a request to John Baez for a login. No earnest request will be denied!