guest post by David Tanzer
Today let’s look into some of the software that has been created in the Azimuth code project, what it accomplishes, and how it works. A number of models have been created there; see the Azimuth Code Project page. These are interactive models that run right in your web browser, so they are very responsive.
Let’s pick one to focus on: the stochastic resonance model, which was the first one developed here, by Allan Erskine and Jim Stuttard. It will serve to illustrate the programming language and software library support – the JavaScript language and the JSXGraph graphics library – that are used by the rest of the models.
First let’s give it a test run. Let’s go to the page, and see what’s there. There is a graph that shows two functions over time, and four sliders that control the functions that are generated. One of the functions, which is shown in green, is a sine wave, whose amplitude and frequency are controlled by two of the sliders. Experiment by changing the amplitude and frequency sliders, to verify that the sine wave changes accordingly.
Now at the heart of the model here is a random process, which is used to generate the second curve. Experiment with the noise slider, which controls the amount of randomness that is injected into the process. See that when you set it to zero, the curve is smooth, and it becomes increasingly chaotic as the noise parameter increases. The sine wave is fed as a “driving input” to the model, so fiddle around with the amplitude and frequency controls, to see how that affects the output of the model. Finally, the seed parameter controls the sequencing of the underlying random number generator, so setting it to different values gives you different instances of the random process.
This code implements a discrete simulation for a “stochastic differential equation” (SDE), which is an equation that specifies the derivative of a function in terms of time, the current value of the function, and a noise process. If the noise process is zero, you get an ordinary differential equation, in which the derivative is given by a deterministic function of time and the current value of the function.
Structurally speaking, this program consists of: (1) a general SDE simulator, (2) the formula for the specific SDE used in this model, (3) functions used by this specific SDE formula, (4) graphical controls used to set the parameters used by these functions, and (5) graphical output of the generated time series, which includes both the final output time series and the intermediate values used in the computation.
This program could naturally be evolved into a simulator for another SDE, by changing the formula, the intermediate functions, and the associated slider parameters (hint: homework problem to come). Here, the specific components are the SDE derivative formula, the sine wave, which is used in that formula, and the sliders for the sine wave parameters.
Before proceeding to describe the algorithms used in this model, let’s take a commercial break, for the following purpose: to see where the rubber meets the road! I would like everyone now to try to open up the source code, and at least skim through it. Since the code is running in your browser, you have already downloaded it! You just have to find the view-source function on your browser, in order to look at it. Note also that the code is only a few pages long, and it is written in a nice, elegant mathematical style.
Here are the steps that I went through to get to the code.
Open the web page for the model.
Run your browser’s view-source function. This is browser-specific. At home I’m running Firefox on the Mac, and for some peculiar reason this function does not appear in the menu. But the classic Google search is 99.9% reliable in answering these questions: type Apple-U to open the view-source window. How intuitive, how accessible! (TODO: how to do this with other browsers / environments)
Then the window opens up, and some conventional HTML gibberish shows up. But after looking at it for a few seconds, it doesn’t appear as bad as it could be. The text of the web page, along with its paragraph breaks, is clearly present. It looks like it was formatted by a human being who gave some thought to it, rather than by a robot who doesn’t know how to type a carriage return, or that indentation is not the same thing as random sequences of spaces and tab characters. Finally, we can express deep gratitude to the authors for keeping it short and sweet, as the whole thing fits comfortably on one page of the screen.
But where’s the code? By way of elimination, and by the suspicious nature of the specific lexical gibberish, we conclude that the only possible culprits could be these lines at the head:
<script src='http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=default'></script>
<script src='http://cdnjs.cloudflare.com/ajax/libs/jsxgraph/0.93/jsxgraphcore.js'></script>
<script src='./StochasticResonanceEuler.js'></script>
<script src='./normals.js'></script>
Well, we overheard that these models were coded in JavaScript, and then we see here that each of these four lines contains a path to a file that ends in “.js”. One of them has a Bingo sound to it: StochasticResonanceEuler.js.
The first two files are referenced explicitly at the websites http://cdn.mathjax.org and http://cdnjs.cloudflare.com, respectively. They’re all about some kind of “jax” stuff, which sounds like another planet, quite independent of the mathematical models of stochastic resonance and differential equations that we were originally so cleanly focused on. That, plus the fact that one of the paths contains the string “libs/jsxgraph/0.93”, leads us to infer that these two files provide language and graphics support for the application.
Next, notice that the links to these four files are highlighted in the view source window, as clickable links. So, if they say “click me,” do we dare? Ok, let’s start by clicking on the link for StochasticResonanceEuler.js. As I said, Bingo! There we see the main code for the application, which is reasonably compact and doesn’t have the ring of badly hacked material.
Now that we’re there, at the source file StochasticResonanceEuler.js, I would like to ask one more thing of you, before we return to our regularly scheduled programming on the algorithm itself. Please review the succinct, five-point structural description of the code, which I gave in the previous section. Then browse through this source code, and see how many of the elements that I wrote about there you can see being discussed in the code. This will involve putting a blur lens over all kinds of non-obvious implementation-dependent statements in the code, so that you can sniff out what might be going on, and form some rough hypotheses about who is doing what where.
Before proceeding, to avoid any possible mathematical liability, I must express the following disclaimer, as a side note. Recall that I said that the simulation is driven by a function that gives the “derivative” of the random variable X, as a function of time t and the current value x. Now, as we know, for ordinary differential equations the derivative means the instantaneous time rate of change, but for stochastic equations with random variables, the definition of this concept involves many subtleties that are way beyond the scope of this article. But here, because we are using a discrete numerical sample-based approximation to the actual continuous equations, we are fortunately exempted from these deeper concerns: the simulator is just a “stepper” that takes an ostensibly instantaneous time rate of change, and extrapolates it over the entire interval between sample points.
Now, returning to the main theme, we said that the derivative of the main random variable X is specified as a function of time t, the current value x, and a noise term. For stochastic resonance, this Deriv(t,x) is given as the sum of a sinusoidal function of t, called the forcing function, a “bistability” function of x, and a random noise variable:
Deriv(t, x) = SineWave(sine-amplitude, sine-frequency, t) + BiStability(x) + NoiseSample(noise-amplitude)
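In JavaScript, this formula might be sketched as follows. This is an illustrative sketch, not the actual code from StochasticResonanceEuler.js; the function and parameter names here are my own, and the noise sample is left out because, as we will see, it is added separately at each step of the simulation.

```javascript
// Illustrative sketch of the derivative formula; names are hypothetical,
// not necessarily those used in StochasticResonanceEuler.js.
function sineWave(amplitude, frequency, t) {
    return amplitude * Math.sin(frequency * t);
}

function biStability(x) {
    return x * (1 - x * x);
}

// Deterministic part of the derivative; the noise sample is added
// separately by the stepper at each time step.
function makeDeriv(sineAmplitude, sineFrequency) {
    return function(t, x) {
        return sineWave(sineAmplitude, sineFrequency, t) + biStability(x);
    };
}
```

Note that makeDeriv returns a function of (t, x): once the slider values are fixed, the derivative depends only on time and the current state.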
If there were no noise term (noise-amplitude = 0), and no BiStability function, the effect of the sinusoidal forcing function would cause the value of X(t) itself to oscillate sinusoidally, in a deterministic fashion.
What makes the model interesting is the bi-stability term:
BiStability(x) = x * (1 - x^2).
Let’s consider what would happen if this were the sole term that defined Deriv(t,x).
First, BiStability has zeros at -1, 0 and 1, so these are the equilibrium points of x. The derivative of BiStability is 1 - 3x^2, which is negative at -1 and 1, and positive at zero, so -1 and 1 are stable equilibria, and 0 is unstable. All the points from -infinity to 0 are in the basin of attraction for -1, and all the points from 0 to infinity are in the basin of attraction for 1.
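This stability analysis can be checked numerically with a few lines of JavaScript (again illustrative code, not part of the model’s source):

```javascript
// Quick numerical check of the equilibria of BiStability(x) = x*(1 - x^2).
function biStability(x) {
    return x * (1 - x * x);
}

// Derivative of BiStability: 1 - 3x^2. An equilibrium is stable
// when this derivative is negative there.
function biStabilityDeriv(x) {
    return 1 - 3 * x * x;
}

var report = [-1, 0, 1].map(function(x0) {
    return { x: x0, value: biStability(x0), stable: biStabilityDeriv(x0) < 0 };
});
```

Running this confirms that BiStability vanishes at all three points, and that -1 and 1 are stable while 0 is not.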
Now, let’s consider the effect of the sine function, plus the BiStability polynomial, but still without noise. If the amplitude of the forcing function is low enough, then, depending on initial conditions, the solution will converge towards one of the attractors -1 and 1, and oscillate around it according to the forcing function. If the amplitude of the forcing function is large enough, then the forcing function may have enough activity, at the right moment, to pull the function from one basin of attraction to another.
Next, let’s consider the bistability polynomial, plus the forcing function, plus the noise – the whole megillah. Suppose that the forcing function is weak, and the signal is oscillating around an attractor. Then, depending on the noise amplitude, a random noise event may pull the signal from one attractor to another. The transitions between basins, then, will contain both a periodic element and a randomized element.
Note also, for a given amplitude of the sine wave, a longer period will have a stronger effect on pulling the state from one pole to another – i.e., the oscillations are more sensitive to low frequencies. One could examine this more systematically by analyzing – or measuring through computational examples – the expected values of the frequency components in the spectrum of the output signal, as a function of forcing amplitude, frequency, and noise amplitude. (Homework?)
This has been applied in the study of the Milankovitch cycles of the ice ages.
Now the program logic is packaged up in seven functions, and spans a mere 3.5 pages of printed text – it’s impressive that this platform can produce such nice, interactive applications, with this economy of coding.
The top-level entry point is the function initCharts (hint: find it and look at it). This is a short little function, which does its work by making two function calls: initControls, and initSrBoard. Please at least scan through initControls, which, as you can see, is building the objects that represent the sliders, and returning them as a dictionary of slider objects.
All the real, logical content of the application is encapsulated in this second function, initSrBoard, which is at the end of the file. There we see that two curve objects are constructed, one called positionCurve, and the other called forcingCurve. The forcing curve is constructed in a deterministic (not stochastic!) way, from the locally defined forcingFunction, which just expresses the definition of the sine wave. Note that in this function, the values of the amplitude and frequency sliders are used. (Note that when the sliders are moved, an event must be fired, which causes the recalculation to take place. How is this mechanism implemented in the JavaScript / JSXGraph library?)
(Link to JSXGraph page, and discussion.)
(How to make your own version of the app.)
(Brownian motion, for noise process)
Next, and most central to the application, a function is attached to the updateDataArray member of the positionCurve object; it makes the call to the stochastic resonance model proper. This is the call to the “mkSrPlot” (i.e., MakeStochasticResonancePlot) function, which takes as arguments the values of all the sliders: forcing amplitude, forcing frequency, noise amplitude, and seed value. This call returns a “plot” object that contains a list of time values, and a corresponding list of values for the main variable X.
So now we must drill down into the logic of the mkSrPlot function. The first step here is to construct a function object, and store it in the variable deriv, which represents the derivative computation:
Deriv(t,x) = SineCurve + BiStability,
which represents the deterministic component of the derivative.
Then a “stepper” function is constructed, by the call to the function Euler(deriv, tStep), which returns a function that performs “stochastic Euler extrapolation,” with a step size of tStep, using the derivative function deriv that was just constructed. The idea of a stepper function is very simple: it takes as input the current point (t,x), and a noise sample, and returns the next point (t’,x’).
A stepper function of this form is all that is needed by the general top-level loop, which is implemented by the function sdeLoop, to generate the full time series for the output. This sdeLoop function takes as arguments the stepper function, the “dither value” (amount of noise), the initial point (t0,x0), a seed value for the randomization, and the number of points to be generated. The loop simply initializes a currentPoint to (t0,x0), and then repeatedly applies the stepper function to the current point and the next noise sample; the output returned is just the sequence of (t,x) values generated by this iteration.
The noise samples are generated by taking a contiguous subsequence of the array normals[i], and scaling them by the dither value. The “seeding” of the randomization is implemented by using the seed variable as a control value, which controls which subsequence of the (large) array of normal random variates gets used.
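Putting these two paragraphs together, the loop might be sketched like this. The names follow the text (sdeLoop, normals, dither), but the details are illustrative: in the real code, normals.js supplies a large array of normal random variates, for which a short stand-in array is used here, and the exact way the seed indexes into it may differ.

```javascript
// Stand-in for the large array of normal variates supplied by normals.js.
var normals = [0.47, -1.21, 0.33, 0.85, -0.42, 1.14, -0.68, 0.25];

// Hypothetical sketch of the simulation loop described in the text.
function sdeLoop(stepper, dither, t0, x0, seed, numPoints) {
    var points = [[t0, x0]];
    var current = [t0, x0];
    for (var i = 0; i < numPoints; i++) {
        // The seed selects which subsequence of the normals array is
        // used; the dither value scales each sample.
        var noiseSample = dither * normals[(seed + i) % normals.length];
        current = stepper(current, noiseSample);
        points.push(current);
    }
    return points;
}
```

The loop itself knows nothing about the particular SDE; all the model-specific logic is hidden inside the stepper.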
One last point here: how did the Euler function compute a stepper based on what I labeled “stochastic Euler extrapolation”? Well, it’s quite simple: given the deterministic derivative function Deriv(t,x) and step size tStep, it returns the following function:
((t,x), noiseSample) → (t + tStep, x + tStep * Deriv(t,x) + noiseSample).
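In code, a function of this shape might look as follows (again a sketch under the naming conventions of this article, not the literal source):

```javascript
// Sketch of the Euler function described above: it closes over the
// deterministic derivative and the step size, and returns a stepper
// that maps ((t, x), noiseSample) to the next point.
function Euler(deriv, tStep) {
    return function(point, noiseSample) {
        var t = point[0], x = point[1];
        return [t + tStep, x + tStep * deriv(t, x) + noiseSample];
    };
}
```

Because Euler takes the derivative function as an argument, the same stepper machinery works unchanged for any SDE of this form.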
That’s the story.
The time series are generated as sequences of (t,x) pairs. Note that x here is the dependent variable, which is the representation in code of the random variable X.