Blog - Petri net programming (part 4)


*guest post by David Tanzer*

We’re heading towards writing a simulator for a continuous deterministic Petri net, but lots of concepts are clamoring for attention along the way! In the first place, we need to understand the dynamic principle of these networks, which is captured by the “rate equation.” This is a differential equation, for which we did some preparatory work in the *last article*.

In a prior article, we introduced the basic model of stochastic Petri nets, through which discrete tokens flow, and which have transitions that fire according to a probabilistic model. Now we are starting to talk about continuous Petri nets, where the moving tokens are treated as a fluid substance. These models are deterministic.

How do we pass from the one model to the other?

Before getting to the simulator itself, in this article I will address, in broad terms, the relationship between continuous Petri nets and the discrete stochastic Petri nets which we described in a *prior article*.

In a word, the continuous model arises as a limiting case of the discrete model as populations get large, and the determinism results from the “macroscopic” averaging of a mass of “microscopic” discrete random events. So the two topics are closely connected.

The model of a continuous, deterministic Petri net is actually derived from the more fundamental model of a discrete, stochastic Petri net, which we introduced in a previous blog article. Think of chemical reactions, for example. To a good approximation, we can view the amounts of the various substances as continuously varying magnitudes (and then view them as real-valued *concentrations* of particles per unit volume), and the processes as continually occurring, at various concentration-dependent rates. But in reality, at a detailed level, there is a discrete, integer-valued number of each kind of particle, and the reactions take place as separate, discrete, random events.

In a stochastic process, different “runs” of the experiment will produce different sequences of events, and the collection of all possible sequences will be distributed according to a probability distribution. Each sequence of events will lead to a corresponding sequence of states of the system. At every point in time, therefore, there will be a distribution of possible states of the system. We can then take the expected value of the state at each point in time, and form the *sequence of expected states* for the stochastic process.

So, whereas the stochastic process consists of a non-deterministic collection of possible outcomes, all involving discrete integer states (in a stochastic Petri net, discrete population counts), the sequence of expected states is uniquely determined: a deterministic process, involving states with real-valued components, which result from taking the means of the integer-valued components.
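To make this concrete, here is a small sketch in Python. It is my own illustration, not code from the series: the function name `simulate_decay` and all parameter values are made up. It simulates, Gillespie-style, a toy one-species net with a single transition that consumes a token at a rate proportional to the token count, then estimates the expected state at a fixed time by averaging many runs. For this linear net, the expected state is exactly x0·e^(−rate·t), so we can compare.

```python
import math
import random

def simulate_decay(x0, rate, t_end, rng):
    """One Gillespie-style run of a net with a single transition that
    consumes one token, firing with propensity rate * x.
    Returns the (random) token count at time t_end."""
    t, x = 0.0, x0
    while x > 0:
        t += rng.expovariate(rate * x)  # waiting time to the next firing
        if t > t_end:
            break
        x -= 1
    return x

rng = random.Random(42)
x0, rate, t_end, runs = 100, 1.0, 0.5, 2000
# Average the random final states over many runs...
mean_x = sum(simulate_decay(x0, rate, t_end, rng) for _ in range(runs)) / runs
# ...and compare with the exact expected state for this linear net:
expected = x0 * math.exp(-rate * t_end)
print(mean_x, expected)
```

Each individual run ends at an integer count, but the average over runs is a real number, and it hugs the smooth exponential curve.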

Furthermore, under certain statistical assumptions (cite John’s recent paper), as the population counts get large and the reaction rates increase, a law of large numbers will kick in: the runs of the stochastic process will cluster more and more tightly around the mean, so that the deterministic sequence of expected states becomes a better and better approximation to the observed runs of the stochastic process.
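We can observe this clustering numerically. The following sketch (again my own illustration, with made-up names and parameters) measures the relative spread, standard deviation divided by mean, of the final token count of the same toy decay net, at a small and at a hundred-times-larger population:

```python
import random
import statistics

def relative_spread(x0, rate, t_end, runs, rng):
    """Standard deviation of the final token count across runs, divided
    by its mean, for a one-transition decay net simulated Gillespie-style."""
    finals = []
    for _ in range(runs):
        t, x = 0.0, x0
        while x > 0:
            t += rng.expovariate(rate * x)  # waiting time to next firing
            if t > t_end:
                break
            x -= 1
        finals.append(x)
    return statistics.pstdev(finals) / statistics.fmean(finals)

rng = random.Random(1)
small_pop = relative_spread(10, 1.0, 0.5, 500, rng)    # small population
large_pop = relative_spread(1000, 1.0, 0.5, 500, rng)  # 100x larger
print(small_pop, large_pop)  # the relative fluctuations shrink with size
```

The relative spread at the larger population is roughly a tenth of the spread at the smaller one, in line with the usual 1/√N scaling of fluctuations.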

Let’s illustrate with a physical example. Consider an apparatus where we have a winding spiral pipe, with rough and irregular inner walls. At the bottom it opens into a large jar, and at the top we pour a cup containing small pebbles. Each pebble eventually bounces its way from the top of the pipe to the bottom and into the jar, but due to the irregularities of friction, etc., there may be substantial variation in the time that each pebble takes to traverse the pipe.

The events are the arrivals of the pebbles in the jar, and the state at each point in time we take to be the number of pebbles in the jar. If we graph the state (the pebble count) as a function of time, it will be a non-decreasing function, with integer values, and integer jumps as pebbles enter the jar. But the *expected* number of pebbles in the jar, as a function of time, will be an increasing, continuous function.

(We suppose the pipe is wide in comparison to the pebble sizes, so there are no clogging or queueing effects.)

Now consider what happens if, say, we multiply by ten the number of pebbles that we release. The results of the process will still be stochastic, but the overall process, as recorded by the rising count of pebbles in the jar, will be smoother, and closer to the averaged out sequence of expected pebble counts.

In fact, by increasing the pebble counts within each run of the experiment, we are performing a kind of averaging there: the effects of the slower pebbles will be offset by the effects of the faster pebbles – the count in the jar reflects the aggregate effects of all the pebbles.
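A quick simulation of the pebble apparatus shows this smoothing. The traversal-time distribution here (exponential with mean 1) and the function name `jar_count` are made up purely for illustration; any distribution with some spread would tell the same story. We compare the fraction of pebbles in the jar at a fixed time, for a small release and a thousand-times-larger release, against the expected fraction:

```python
import math
import random

def jar_count(n_pebbles, t, rng):
    """Number of pebbles that have reached the jar by time t, where each
    pebble's traversal time is an independent exponential with mean 1."""
    return sum(1 for _ in range(n_pebbles) if rng.expovariate(1.0) <= t)

rng = random.Random(7)
t = 1.0
expected_fraction = 1 - math.exp(-t)  # P(a pebble has arrived by time t)
# Fraction of pebbles in the jar, for a small and a large release:
small_frac = jar_count(100, t, rng) / 100
large_frac = jar_count(100_000, t, rng) / 100_000
print(small_frac, large_frac, expected_fraction)
```

With the large release, the observed fraction sits within a fraction of a percent of the expected fraction: the fast and slow pebbles average each other out within a single run.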

In the limiting process of a large number of pebbles, the behavior looks more like pouring sand into the pipe, which begins to look like a continuous fluid. The continuous fluid model is an even better approximation when we reach the scale of chemical reactions, which involve moles of chemical entities. As we pass to the theoretical limit, involving an infinite number of infinitesimal particles, we reach the fluid model for a continuous, deterministic process.

category: blog