This page is a blog article in progress, written by John Baez and Jacob Biamonte. To see discussions of this article while it was being written, visit the Azimuth Forum.
joint with Jacob Biamonte
It’s time to resume the network theory series! It’s been a long time, so we’ll have to remind you of some things. Last time we started looking at a simple example: a diatomic gas.
Two atoms can recombine to form a diatomic molecule:
and conversely, a diatomic molecule can break apart into two atoms:
We can draw both these reactions using a Petri net:
where we’re writing $B$ instead of $A_2$ to abstract away some detail that’s just distracting here. Or, equivalently, we can use a chemical reaction network:
Last time we looked at the rate equation for this chemical reaction network, and found equilibrium solutions of that equation. Now let’s look at the master equation, and find equilibrium solutions of that. This will serve as a review of three big theorems.
We’ll start from scratch, in case you’re just tuning in. The master equation is all about how atoms or molecules or rabbits or wolves or other things interact randomly and turn into other things. So, let’s write $\psi_{m,n}$ for the probability that we have $m$ atoms of $A$ and $n$ molecules of $B$ in our container. These probabilities are functions of time, and the master equation will say how they change.
First we need to pick a rate constant for each reaction. Let’s say the rate constant for this reaction is some number $\alpha > 0$:
while the rate constant for this reaction is some number $\beta > 0$:
Then the master equation says
Yuck!
Normally we don’t show you such nasty equations. Indeed the whole point of this series is to show that by packaging the equations in a better way, we can understand them using high-level concepts instead of mucking around with millions of scribbled symbols. But we thought we’d show you the engine under the hood, just once.
Each term has a meaning. For example,
means that the reaction $A + A \to B$ will tend to increase the probability of there being $m$ $A$ atoms and $n$ $B$ molecules if we start with two more $A$’s and one fewer $B$. This reaction can happen in $(m+2)(m+1)$ ways if we start with $m+2$ atoms of $A$. And it happens at a probabilistic rate proportional to the rate constant for this reaction, $\alpha$.
We won’t go through the rest of the terms. It’s a good exercise to do so, but there could easily be a typo in the formula, since it’s so long and messy. So let us know if you find one!
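The structure is easier to see in code. Here is a minimal sketch (ours, not part of the original series; `master_rhs` is a hypothetical helper), with the four-term form reconstructed from the term analyzed above:

```python
# Sketch of the master equation's right-hand side for 2A <-> B.
# psi[m][n] = probability of m A-atoms and n B-molecules (finite truncation).
# Assumed form, matching the term analyzed above:
#   d/dt psi_{m,n} =  alpha (m+2)(m+1) psi_{m+2,n-1}  -  alpha m(m-1) psi_{m,n}
#                   + beta (n+1) psi_{m-2,n+1}        -  beta n psi_{m,n}

def master_rhs(psi, alpha, beta):
    M, N = len(psi), len(psi[0])
    def p(m, n):                      # treat out-of-range states as probability 0
        return psi[m][n] if 0 <= m < M and 0 <= n < N else 0.0
    return [[alpha*(m+2)*(m+1)*p(m+2, n-1) - alpha*m*(m-1)*p(m, n)
             + beta*(n+1)*p(m-2, n+1)      - beta*n*p(m, n)
             for n in range(N)] for m in range(M)]

# Example: start with 4 A's for sure. Probability flows from (4,0) to (2,1)
# at rate alpha*4*3 = 12*alpha, and total probability is conserved.
psi0 = [[0.0]*3 for _ in range(5)]
psi0[4][0] = 1.0
d = master_rhs(psi0, alpha=1.0, beta=1.0)
```

Note how the two positive terms feed probability in from neighboring states, while the two negative terms drain it from the current state, so the total probability never changes.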
To simplify this mess, the key trick is to introduce a generating function that summarizes all the probabilities in a single power series:
It’s a power series in two variables, $z_1$ and $z_2$, since we have two chemical species: $A$’s and $B$’s.
Using this trick, the master equation looks like
where the Hamiltonian $H$ is a sum of terms, one for each reaction. This Hamiltonian is built from operators that annihilate and create $A$‘s and $B$’s. The annihilation and creation operators for $A$ atoms are:
The annihilation operator differentiates our power series with respect to the variable $z_1$. The creation operator multiplies it by that variable. Similarly, the annihilation and creation operators for $B$ molecules are:
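As a quick sanity check (ours, not from the series), this representation obeys the familiar commutation relation $[a, a^\dagger] = 1$. Here coefficient lists stand in for power series in $z_1$:

```python
# Represent a power series in z1 by its coefficient list: f[k] is the z1^k coefficient.

def annihilate(f):          # a: differentiate with respect to z1
    return [k * f[k] for k in range(1, len(f))]

def create(f):              # a†: multiply by z1
    return [0.0] + list(f)

f = [1.0, 3.0, 5.0, 7.0]    # 1 + 3 z1 + 5 z1^2 + 7 z1^3

# [a, a†] f = a(a† f) - a†(a f) gives back f, i.e. [a, a†] = 1:
lhs = annihilate(create(f))
rhs = create(annihilate(f))
commutator = [x - y for x, y in zip(lhs, rhs)]
```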
In Part 8 we explained a recipe that lets us stare at our chemical reaction network and write down this Hamiltonian:
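Spelling this out (our rendering, writing $a, a^\dagger$ for the $A$ operators and $b, b^\dagger$ for the $B$ operators as above), the recipe gives:

```latex
H = \alpha \left( b^\dagger - a^\dagger a^\dagger \right) a a
  + \beta  \left( a^\dagger a^\dagger - b^\dagger \right) b
```

The $\alpha$ term comes from $A + A \to B$ and the $\beta$ term from $B \to A + A$.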
As promised, there’s one term for each reaction. But each term is itself a sum of two: one that increases the probability that our container of chemicals will be in a new state, and another that decreases the probability that it’s in its original state. We get a total of four terms, which correspond to the four terms in our previous way of writing the master equation.
Puzzle: Show that this new way of writing the master equation is equivalent to the previous one.
Now we will look for all equilibrium solutions of the master equation: in other words, solutions that don’t change with time. So, we’re trying to solve
Given the rather complicated form of the Hamiltonian, this seems tough. The challenge looks more concrete but perhaps even more scary if we go back to our original formulation. We’re looking for probabilities $\psi_{m,n}$, nonnegative numbers that sum to one, such that
for all $m,n \ge 0$.
If you hadn’t been reading all these notes up to now, about the only good news you’d instantly see is that these equations are linear, so a linear combination of solutions is again a solution. This lets us simplify the problem using a conserved quantity.
Clearly, there’s a quantity that the reactions here don’t change:
What’s that? It’s the number of $A$’s plus twice the number of $B$’s. When one $B$ is born, two $A$’s go away. And when two $A$’s are born, one $B$ goes away. Either way, this quantity stays the same.
(Of course the secret reason is that a $B$ is a diatomic molecule made of two $A$‘s. But you’d be able to follow the logic here even if you didn’t know that, just by looking at the chemical reaction network… and sometimes this more abstract approach is handy! Indeed, the way chemists first discovered that certain molecules are made of certain atoms is by seeing which reactions were possible.)
Suppose we start in a situation where we know for sure that the number of $A$’s plus twice the number of $B$’s equals some number $k$:
Then we know $\Psi$ is initially of the form
But since the number of $A$’s plus twice the number of $B$’s is conserved, if $\Psi$ obeys the master equation it will continue to be of this form.
Put a fancier way, we know that if a solution of the master equation starts in this linear subspace of the space of formal power series:
it will remain in that subspace. So, because the master equation is linear, we can take any solution $\Psi$ and write it as a linear combination of solutions $\Psi_k$, one in each subspace $L_k$ (where $k = 0,1,2,\dots$).
In particular, we can do this for an equilibrium solution $\Psi$. And then all the solutions $\Psi_k$ are also equilibrium solutions—since if one of them changed with time, $\Psi$ would too.
This means we can just look for equilibrium solutions in the subspaces $L_k$. If we find these, we can get all equilibrium solutions by taking linear combinations. And that’s what we’ll do.
But we can formalize this idea a bit by noting it’s a special case of Noether’s Theorem for Markov processes. We saw this theorem in Part 11, and later some of us wrote a paper about it, which goes further:
• John Baez and Brendan Fong, Noether’s theorem for Markov processes.
For the reaction we’re looking at now, the idea is that the subspaces $L_k$ are eigenspaces of an operator that commutes with the Hamiltonian $H$. It follows from standard math that a solution of the master equation that starts in one of these subspaces, stays in that subspace.
Here we will use the Anderson-Craciun-Kurtz theorem to work out the corresponding equilibrium states of the master equation. Brendan proved this in relation to what we consider here back in Part X.
Let
Then
and for $\Psi \neq 0$
which vanishes for
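Here is a numerical sanity check (our own sketch, not from the series): the product-form equilibrium that the Anderson-Craciun-Kurtz theorem predicts, $\psi_{m,n} \propto c_1^m c_2^n / (m!\, n!)$, really does make the right-hand side of the master equation vanish when the rates balance, $\alpha c_1^2 = \beta c_2$:

```python
from math import exp, factorial

alpha, beta = 1.0, 2.0
c1 = 0.5
c2 = alpha * c1**2 / beta          # equilibrium condition of the rate equation

# Poisson product state psi_{m,n} = e^(-c1-c2) c1^m c2^n / (m! n!), truncated.
M = N = 25
psi = [[exp(-c1 - c2) * c1**m * c2**n / (factorial(m) * factorial(n))
        for n in range(N)] for m in range(M)]

def p(m, n):
    return psi[m][n] if 0 <= m < M and 0 <= n < N else 0.0

# Right-hand side of the master equation, checked away from the truncation edge:
residual = max(
    abs(alpha*(m+2)*(m+1)*p(m+2, n-1) - alpha*m*(m-1)*p(m, n)
        + beta*(n+1)*p(m-2, n+1)      - beta*n*p(m, n))
    for m in range(M - 2) for n in range(N - 1))
# residual is zero up to floating-point rounding
```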
Now we will show how Noether’s theorem relates the conserved quantity $N_A + 2 N_B$ to a symmetry.
The Hamiltonians that arise in Petri net field theory have a very particular form. Not every Hamiltonian in this vast class preserves particle number, since we can have exponential growth or decay, for instance. What we want to do is find a good way to characterize those Hamiltonians that do preserve particle number, and to understand symmetries in general. Those of you following the posts will recall the commutation relations from Part X. These will be relevant here too.
Just a reminder, the number operator for a single species is
and the number operator for all the species is a sum over the single species
We will derive a few results at the end of the post. If you think we are telling the truth, you don’t need to check them, but they are there if you want to be bored with this sort of detail. To get you into the mood…
It can be shown (using induction) that
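One identity of this kind, provable by induction on $n$, is $a\,(a^\dagger)^n = (a^\dagger)^n a + n\,(a^\dagger)^{n-1}$. Here is a direct check (ours) in the power-series representation, with power series stored as coefficient lists:

```python
# Check a (a†)^n f = (a†)^n a f + n (a†)^(n-1) f in the representation
# a = d/dz, a† = (multiply by z), with f stored as a coefficient list.

def annihilate(f):
    return [k * f[k] for k in range(1, len(f))]

def create(f, times=1):
    return [0.0] * times + list(f)

def scale(f, s):
    return [s * x for x in f]

def add(f, g):
    L = max(len(f), len(g))
    f = f + [0.0] * (L - len(f))
    g = g + [0.0] * (L - len(g))
    return [x + y for x, y in zip(f, g)]

n = 4
f = [2.0, 0.0, 1.0, 5.0]                       # 2 + z^2 + 5 z^3
lhs = annihilate(create(f, n))                 # a (a†)^n f
rhs = add(create(annihilate(f), n), scale(create(f, n - 1), n))
# lhs == rhs, as the identity predicts
```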
Now for some results.
This supporting lemma can be used to prove a range of results about symmetries of the type of Hamiltonians we are considering here.
vanishes identically iff $H$ preserves particle number.
So to check whether the total number of particles is conserved during evolution under some Hamiltonian, all one has to do is apply Theorem I. The Hamiltonian we consider here does not conserve total particle number. However, the reversible reaction John considered in Part 10 did.
(Exercise). In Network Theory Part 10 John considered the reversible reaction with Hamiltonian
Use Theorem I to show that this reversible reaction conserves particle number.
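Here is a numerical spot-check of that exercise (ours; the Hamiltonian is our reconstruction from the recipe, $H = \alpha (b^\dagger - a^\dagger) a + \beta (a^\dagger - b^\dagger) b$ for $A \rightleftharpoons B$), verifying $[H, N] = 0$ on a sample state. Power series in $z_1, z_2$ are stored as dictionaries mapping $(m, n)$ to the coefficient of $z_1^m z_2^n$:

```python
# a = d/dz1, a† = (multiply by z1), and likewise b, b† for z2.
def d1(f):  return {(m-1, n): m*c for (m, n), c in f.items() if m > 0}
def d2(f):  return {(m, n-1): n*c for (m, n), c in f.items() if n > 0}
def m1(f):  return {(m+1, n): c for (m, n), c in f.items()}
def m2(f):  return {(m, n+1): c for (m, n), c in f.items()}

def add(f, g, s=1):               # f + s*g, dropping zero coefficients
    out = dict(f)
    for k, c in g.items():
        out[k] = out.get(k, 0) + s*c
    return {k: c for k, c in out.items() if c != 0}

def scale(f, s):
    return {k: s*c for k, c in f.items()}

alpha, beta = 2, 3

def H(f):   # alpha (b† - a†) a + beta (a† - b†) b, from the recipe
    t1 = add(m2(d1(f)), m1(d1(f)), -1)
    t2 = add(m1(d2(f)), m2(d2(f)), -1)
    return add(scale(t1, alpha), scale(t2, beta))

def N(f):   # total number operator z1 d/dz1 + z2 d/dz2
    return add(m1(d1(f)), m2(d2(f)))

f = {(3, 1): 2, (0, 2): 5, (1, 0): 1}
comm = add(H(N(f)), N(H(f)), -1)
# comm == {}: H commutes with N, so total particle number is conserved
```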
(Theorem II — particle conservation symmetry). Given a Hamiltonian acting on $k$ particle species, there exist $k$ positive integer choices for the $\omega_n$ that cause the following to vanish identically
iff $H$ has a particle conservation symmetry.
As will soon be seen, this is precisely the case here. In other words, there exist $\omega_1$ and $\omega_2$ taking positive integer values which cause the above quantity to vanish.
For our system to have a particle conservation symmetry, we must show that
This vanishes since, from Lemma I, we calculate that
Here Theorem II applies, and we are able to find two values, $\omega_1 = 1$ and $\omega_2 = 2$, that cause the commutator to vanish. The Hamiltonian therefore has a particle number conservation symmetry.
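This can be spot-checked numerically (our sketch; the Hamiltonian $H = \alpha (b^\dagger - a^\dagger a^\dagger) a a + \beta (a^\dagger a^\dagger - b^\dagger) b$ is our rendering of the recipe for $A + A \rightleftharpoons B$). We verify that $H$ commutes with the weighted number operator $N_A + 2 N_B$ on a sample state, storing power series as dictionaries mapping $(m, n)$ to the coefficient of $z_1^m z_2^n$:

```python
# a = d/dz1, a† = (multiply by z1), and likewise b, b† for z2.
def d1(f):  return {(m-1, n): m*c for (m, n), c in f.items() if m > 0}
def d2(f):  return {(m, n-1): n*c for (m, n), c in f.items() if n > 0}
def m1(f):  return {(m+1, n): c for (m, n), c in f.items()}
def m2(f):  return {(m, n+1): c for (m, n), c in f.items()}

def add(f, g, s=1):               # f + s*g, dropping zero coefficients
    out = dict(f)
    for k, c in g.items():
        out[k] = out.get(k, 0) + s*c
    return {k: c for k, c in out.items() if c != 0}

def scale(f, s):
    return {k: s*c for k, c in f.items()}

alpha, beta = 2, 3

def H(f):   # alpha (b† - a†a†) a a + beta (a†a† - b†) b
    aa = d1(d1(f))
    t1 = add(m2(aa), m1(m1(aa)), -1)
    t2 = add(m1(m1(d2(f))), m2(d2(f)), -1)
    return add(scale(t1, alpha), scale(t2, beta))

def Nw(f):  # weighted number operator N_A + 2 N_B, with omega = (1, 2)
    return add(m1(d1(f)), scale(m2(d2(f)), 2))

f = {(4, 0): 1, (2, 1): 3, (1, 2): 7}
comm = add(H(Nw(f)), Nw(H(f)), -1)
# comm == {}: H commutes with N_A + 2 N_B, the conserved quantity
```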
Now for the master equation approach. Our Hamiltonian is given as
and the evolution operator at time $t$ is given as
At each order in $k$, we have a term that corresponds to the Hamiltonian $H$ acting $k$ times. One can think of these as alternative histories. In quantum mechanics, there is a spooky thing called coherence, where each of these histories seems to occur concurrently. In stochastic mechanics, each history only occurs with some probability. In terms of mathematical structure, the theories are closely related. While the semantic interpretation differs, the syntactical form, given by a sum over histories, unites quantum and stochastic mechanics. This enables us to apply tools from quantum mechanics, such as Feynman diagrams, to stochastic mechanics.
Proof of Theorems I and II. Sometimes a long calculation can actually simplify matters. That’s the case here, but since it’s just algebra, we’ve saved it for the end so as not to muddy the waters.
Here we will use the following notation.
where the vector $m'(\tau)$ has its $i$th component equal to zero, which is why we are able to move the term(s) $a^{m'(\tau)}$ to the right. This enables us to express the more general commutation relations,
Using these relations, it follows that
and also
These will simplify the calculation of the commutation of the Hamiltonian and the number operator.
For this to vanish, the following quantity must vanish identically.
where
Both of the theorems then follow from applications of the above.
Particle Conservation of the simple reversible reaction. In Network Theory Part 10 John considered the reversible reaction with Hamiltonian
We find its commutation relations with the creation (destruction) operators of both species to be
Now we see that the particle number is conserved for this Hamiltonian by calculating
This could also have been shown using Theorem I directly.
We’ve been working hard to understand the parallels and differences between quantum and stochastic mechanics. Last time we showed how the methods developed in prior posts can be used to model chemical reaction networks. This time, we are going to report the details of a battle.
This is the same quantum vs. stochastic battle we’ve talked about in prior posts, but this time we are going to talk in detail about the odd nature of eigenstates in quantum mechanics, and how we can’t expect this structure in stochastic mechanics.