Blog - hierarchical organization and biological evolution (part 1)

This page is a blog article in progress, written by Cameron Smith. To discuss this article while it’s being written, visit the Azimuth Forum.

An attempt to review some of the literature on major transitions in evolution and multi-level selection, sketch a few connections to concepts in category theory, and discuss the potential for using experimental evolution to investigate and strengthen those connections.


My thesis has been that one path to the construction of a non-trivial theory of complex systems is by way of a theory of hierarchy. Empirically, a large proportion of the complex systems we observe in nature exhibit hierarchic structure. On theoretical grounds we could expect complex systems to be hierarchies in a world in which complexity had to evolve from simplicity.

– Herbert Simon, 1962

[Many of the concepts that] have dominated scientific thinking for three hundred years are based upon the understanding that at smaller and smaller scales—both in space and in time—physical systems become simple, smooth and without detail. A more careful articulation of these ideas would note that the fine scale structure of planets, materials and atoms is not without detail. However, for many problems, such detail becomes irrelevant at the larger scale. Since the details [become] irrelevant [at such larger scales], formulating theories in a way that assumes that the detail does not exist yields the same results as [theories that do not make this assumption].

– Yaneer Bar-Yam

Thoughts like these lead me to believe that, as a whole, we humans need to reassess some of our approaches to understanding. I’m not opposed to reductionism, but I think it would be useful to try to characterize those situations that might require something more than an exclusively reductionist approach. One way to do that is to break down some barriers that we’ve constructed between *disciplines*. So I’m here on Azimuth trying to help out this process.

Indeed, Azimuth is just one of many endeavors people are beginning to work on that might just lead to the unification of humanity into a superorganism. Regardless of the external reality, a fear of climate change could have a unifying effect. And, if we humans are simply a set of constituents of the superorganism that is Earth’s biosphere, it appears we are its only candidate germ line. So, assuming we’d like our descendants to have a chance at existence in the universe, we need to figure out either how to keep this superorganism alive or help it reproduce.

We each have to recognize our own individual limitations of time, commitment, and brainpower. So, I’m trying to limit my work to the study of biological evolution rather than conjuring up a ‘pet theory of everything’. However, I’m also trying not to let those disciplinary and institutional barriers limit the tools I find valuable, or the people I interact with. So, the more I’ve thought about the complexity (let’s just let ‘complexity’ = ‘anything humans don’t yet understand’ for now) of evolution, the more I’ve been driven to search for new languages. And in that search, I’ve been driven toward pure mathematics, where there are many exciting languages lurking around. Perhaps one of these languages has already obviated the need to invent new ideas to understand biological evolution… or perhaps an altogether new language needs to be constructed.

The prospects of a general theory of evolution point to the same intellectual challenge that we see in the quote above from Bar-Yam: assuming we’d like to be able to consistently manipulate the universe, when can we neglect *details* and when can’t we?

Consider the *level of organization* concept. Since different details of a system can be effectively ignored at different scales, our scientific theories have themselves become ‘stratified’:

• G. L. Farre, The energetic structure of observation: a philosophical disquisition, *American Behavioral Scientist* **40** (May 1997), 717-728.

In other words, science tends to be organized in ‘layers’. These layers have come to be conceived of as levels of organization, and each scientific theory tends to address only one of these levels.

It might be useful to work explicitly on connecting theories that tell us about particular levels of organization in order to attempt to develop some theories that *transcend* levels of organization. One type of insight that could be gained from this approach is an understanding of the mutual development of bottom-up *ostensibly mechanistic* models of simple systems and top-down *initially phenomenological* models of complex ones.

Simon has written an interesting discussion of the quasi-continuum that ranges from simple systems to complex ones:

• H. A. Simon, The architecture of complexity, *Proceedings of the American Philosophical Society* **106** (1962), 467–482.

But if we take an ideological perspective on science that says “let’s unify everything!” (scientific monism), a significant challenge is the development of a language able to unify our descriptions of simple and complex systems. Such a language might help communication among scientists who work with complex systems that apparently involve multiple levels of organization. Something like category theory may provide the nucleus of the framework necessary to formally address this challenge. But, in order to head in that direction, I’ll try out a few examples in a series of posts, albeit from the somewhat limited perspective of a biologist, from which some patterns might begin to surface.

In this introductory post, I’ll try to set a basis for thinking about this tension between simple and complex systems without wading through any treatises on ‘complexity’. It will be remarkably imprecise, but I’ll try to describe the ways in which I think it provides a useful metaphor for thinking about how we humans have dealt with this simple ↔ complex tension in science. Another tack that I think could accomplish a similar goal, perhaps in a clearer way, would be to discuss fractals, power laws and maybe even renormalization. I might try that out in a later post if I get a little help from my new Azimuth friends, but I don’t think I’m qualified yet to do it alone.

What is the organizational structure of the products of evolutionary processes? Herbert Simon provides a perspective that I find intuitive in his parable of two watchmakers.

He argues that systems built from modules that don’t instantaneously fall apart (‘stable intermediates’), and that can be assembled hierarchically, take less time to evolve complexity than systems lacking stable intermediates. Given a particular set of internal and environmental constraints that can only be satisfied by some relatively complex system, a hierarchically organized one will be capable of meeting those constraints with the fewest resources and in the least time (i.e. most efficiently). The constraints any system is subject to determine the types of structures that can form. If *hierarchical* organization is an unavoidable outcome of evolutionary processes, it should be possible to characterize the causes that lead to its emergence.

Simon describes a property that some complex systems have in common, which he refers to as ‘near decomposability’:

• H. A. Simon, Near decomposability and the speed of evolution, *Industrial and Corporate Change* **11** (June 2002), 587-599.

A system is **nearly decomposable** if it’s made of parts that interact rather weakly with each other; these parts in turn being made of smaller parts with the same property.

For example, suppose we have a system modelled by a set of first-order linear differential equations. To be concrete, consider the building Simon describes: the Mellon Institute, idealized as 12 rooms. Suppose the temperature of the $i$th room at time $t$ is $T_i(t)$. Of course most real systems seem to be nonlinear, but for the sake of this metaphor we can imagine that the temperatures of these rooms interact in a linear way, like this:

$\frac{d}{d t}T_i(t) = \sum_{j}a_{ij}\left(T_{j}(t)-T_{i}(t)\right),$

where $a_{ij}$ are some numbers. Suppose also that the matrix $a_{ij}$ looks like this:
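In block form (writing $A$ for a $3\times 3$ block whose entries are all $a$, and $E_1$, $E_2$ for $3\times 3$ blocks filled with $\epsilon_1$ and $\epsilon_2$ respectively), it has the structure:

$\begin{bmatrix}
A & E_1 & E_2 & E_2 \\
E_1 & A & E_2 & E_2 \\
E_2 & E_2 & A & E_1 \\
E_2 & E_2 & E_1 & A
\end{bmatrix}.$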

For the sake of the metaphor I’m trudging through here, let’s also assume

$a\gg\epsilon_1\gg\epsilon_2.$

Then our system is nearly decomposable. Why? It has three ‘layers’: two cells at the top level, each divided into two subcells, and each of these subdivided into three sub-subcells. The row and column numbers designate the cells: cells 1–6 and 7–12 constitute the two top-level subsystems, and cells 1–3, 4–6, 7–9 and 10–12 the four second-level subsystems. The interactions within the second-level subsystems have intensity $a$, those between second-level subsystems within the same top-level subsystem have intensity $\epsilon_1$, and those between components of the two top-level subsystems have intensity $\epsilon_2$ (Simon, 2002). This is why Simon states that this matrix is in **near-diagonal form**. Another, probably more common, terminology for this would be **near block diagonal form**. This terminology is a bit sloppy, but it basically means that we have a square matrix whose diagonal blocks are square matrices and all other entries are *approximately* zero. That ‘approximately’ is what differentiates *near block diagonal matrices* from honest block diagonal matrices, whose off-diagonal matrix elements are precisely zero.
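As a quick sanity check, this matrix is easy to build explicitly. In the sketch below the numerical values of $a$, $\epsilon_1$ and $\epsilon_2$ are my own illustrative choices (anything satisfying $a \gg \epsilon_1 \gg \epsilon_2$ would do):

```python
import numpy as np

# Illustrative intensities satisfying a >> eps1 >> eps2 (my choice, not Simon's).
a, eps1, eps2 = 100.0, 1.0, 0.01

# Start with eps2 everywhere: couplings between the two top-level subsystems.
A = np.full((12, 12), eps2)
# Within each top-level subsystem (rooms 1-6 and 7-12): intensity eps1.
for i in (0, 6):
    A[i:i+6, i:i+6] = eps1
# Within each second-level subsystem (the four three-room cells): intensity a.
for i in (0, 3, 6, 9):
    A[i:i+3, i:i+3] = a

# Rooms 1 and 2 share a cell; rooms 1 and 12 sit in different top-level wings.
print(A[0, 1], A[0, 11])   # → 100.0 0.01
```

The two loops mirror the nesting of the hierarchy: each pass overwrites a finer-grained block with a stronger intensity, so the diagonal blocks end up dominating.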

This is a trivial system, but it illustrates how the near decomposability of the coefficient matrix allows these equations to be solved in a *nearly* hierarchical fashion. As an approximation, rather than simulating all twelve equations at once, one can take a recursive approach: solve the four systems of three equations (the diagonal blocks containing $a$s), then average the results within each block to produce initial conditions for two systems of two equations, given by the diagonal $\epsilon_1$ blocks of the averaged coefficient matrix:

$\begin{bmatrix}
\epsilon_1 & \epsilon_1 & \epsilon_2 & \epsilon_2 \\
\epsilon_1 & \epsilon_1 & \epsilon_2 & \epsilon_2\\
\epsilon_2 & \epsilon_2 & \epsilon_1 & \epsilon_1 \\
\epsilon_2 & \epsilon_2 & \epsilon_1 & \epsilon_1
\end{bmatrix},$

and then average those results to produce initial conditions for a single system of two equations with coefficients:

$\begin{bmatrix}
\epsilon_2 & \epsilon_2 \\
\epsilon_2 & \epsilon_2
\end{bmatrix}.$

This example of simplification indicates that the study of a nearly decomposable system can be reduced to a series of smaller modules, each of which can be simulated in less computational time, provided the error introduced by the approximation is tolerable. The degree to which this method saves time depends on the relationship between the size of the whole system and the size and number of hierarchical levels. As a rough example, given that the time complexity for matrix inversion (i.e. solving a system of linear equations) is $O(n^3)$, the hierarchical decomposition leads to solves of time complexity $O\left(\left(\frac{n}{L}\right)^3\right)$, where $L$ is the number of levels in the decomposition. (For example, $L=4$ in the Mellon Institute, assuming the individual rooms are the lowest level, so each lowest-level subsystem has size 3.)
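A small numerical sketch can make this concrete (the interaction strengths, initial temperatures, and forward-Euler integration here are all my own illustrative choices): integrate the full twelve-room system, integrate the approximation that keeps only the strong within-cell couplings, and compare.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative intensities with a >> eps1 >> eps2 (my choice, not Simon's).
a, eps1, eps2 = 100.0, 0.1, 0.001

# Full nearly decomposable coupling matrix, built as described above.
A = np.full((12, 12), eps2)
for i in (0, 6):
    A[i:i+6, i:i+6] = eps1
for i in (0, 3, 6, 9):
    A[i:i+3, i:i+3] = a

# Approximation: keep only the strongest (within-cell) couplings.
A_blocks = np.zeros_like(A)
for i in (0, 3, 6, 9):
    A_blocks[i:i+3, i:i+3] = a

def generator(A):
    # dT_i/dt = sum_j a_ij (T_j - T_i)  <=>  dT/dt = M T, M = A - diag(row sums)
    return A - np.diag(A.sum(axis=1))

def euler(M, T0, t=0.05, dt=1e-4):
    # Simple forward-Euler integration of dT/dt = M T.
    T = T0.copy()
    for _ in range(round(t / dt)):
        T = T + dt * (M @ T)
    return T

T0 = rng.uniform(0.0, 30.0, size=12)        # random initial room temperatures
T_full = euler(generator(A), T0)            # all twelve equations at once
T_approx = euler(generator(A_blocks), T0)   # four independent three-room systems

err = np.max(np.abs(T_full - T_approx))
print(err)   # small compared to the ~30 degree spread of the initial temperatures
```

Because the neglected couplings have intensity at most $\epsilon_1$, the discrepancy grows only on the slow timescale $1/\epsilon_1$, while the within-cell equilibration happens on the fast timescale $1/a$; over short horizons the block-wise solution is a good approximation.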

All of this deserves to be made much more precise. However, there are some potential metaphorical consequences for the evolution of complex systems:

If we begin with a population of systems of comparable complexity, some of which are nearly decomposable and some of which are not, the nearly decomposable systems will, on average, increase their fitness through evolutionary processes much faster than the remaining systems, and will soon come to dominate the entire population. Notice that the claim is not that more complex systems will evolve more rapidly than less complex systems, but that, at any level of complexity, nearly decomposable systems will evolve much faster than systems of comparable complexity that are not nearly decomposable. ([Simon, 2002](#Simon2002))

The point I’d like to make is that in this example, the idea of switching back and forth between simple and complex perspectives is made explicit: we get a sort of conceptual parallax.

In this simple case, the approximation that Simon suggests works well; however, for some other systems, it wouldn’t work at all. If we aren’t careful, we might even become victims of the Dunning-Kruger effect. In other words: if we don’t understand a system well from the start, we may overestimate how well we understand the limitations inherent to the simplifications we employ in studying it.

But if we at least recognize the potential of falling victim to the Dunning-Kruger effect, we can vigilantly guard against it in trying to understand, for example, the currently paradoxical tension between ‘groups’ and ‘individuals’ that lies at the heart of evolutionary theory… and probably also the caricatures of evolution that breed social controversy.

Keeping this in mind, my starting point in the next post in this series will be to provide some examples of hierarchical organization in biological systems. I’ll also set the stage for a discussion of evolution viewed as a dynamic process involving structural and functional transitions in hierarchical organization—or for the physicists out there, something like phase transitions!
