Moran process

Stochastic process used in biology to describe finite populations

A Moran process or Moran model is a simple stochastic process used in biology to describe finite populations. The process is named after Patrick Moran, who first proposed the model in 1958.[1] It can be used to model variety-increasing processes such as mutation as well as variety-reducing effects such as genetic drift and natural selection. The process can describe the probabilistic dynamics in a finite population of constant size N in which two alleles A and B are competing for dominance. The two alleles are considered to be true replicators (i.e. entities that make copies of themselves).

In each time step a random individual (which is of either type A or B) is chosen for reproduction and a random individual is chosen for death; thus ensuring that the population size remains constant. To model selection, one type has to have a higher fitness and is thus more likely to be chosen for reproduction. The same individual can be chosen for death and for reproduction in the same step.
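The reproduction/death step described above can be sketched in code; the following is a minimal illustration (the list-of-alleles representation and the function name are our own, not part of the model's standard presentation):

```python
import random

def moran_step(pop, rng=random):
    """One step of the neutral Moran process: a uniformly chosen individual
    reproduces and a uniformly chosen individual dies (possibly the same one),
    so the population size stays constant."""
    parent = rng.choice(pop)          # chosen for reproduction
    victim = rng.randrange(len(pop))  # chosen for death
    pop[victim] = parent              # the offspring replaces the dead individual

# Example: iterate until one of the two alleles has taken over
rng = random.Random(42)
pop = ["A"] * 5 + ["B"] * 5
while 0 < pop.count("A") < len(pop):
    moran_step(pop, rng)
```

Since both draws here are uniform, this is the neutral case; fitness enters only in the selection model discussed below.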

Neutral drift

Neutral drift is the idea that a neutral mutation can spread throughout a population, so that eventually the original allele is lost. A neutral mutation does not bring any fitness advantage or disadvantage to its bearer. The simple case of the Moran process can describe this phenomenon.

The Moran process is defined on the state space i = 0, ..., N, which counts the number of A individuals. Since the number of A individuals can change by at most one in each time step, transitions exist only between state i and the states i − 1 and i + 1. Thus the transition matrix of the stochastic process is tri-diagonal and the transition probabilities are

\[
\begin{aligned}
P_{i,i-1} &= \frac{N-i}{N}\cdot\frac{i}{N} \\
P_{i,i}   &= 1 - P_{i,i-1} - P_{i,i+1} \\
P_{i,i+1} &= \frac{i}{N}\cdot\frac{N-i}{N}
\end{aligned}
\]

The entry $P_{i,j}$ denotes the probability of going from state i to state j. To understand the formulas for the transition probabilities, recall from the definition of the process that exactly one individual is always chosen for reproduction and one for death. Once the A individuals have died out, they can never be reintroduced, since the process does not model mutation (A cannot reappear in the population once it has died out, and vice versa); hence $P_{0,0}=1$. For the same reason the population of A individuals will always remain at N once it has taken over the population, hence $P_{N,N}=1$. The states 0 and N are called absorbing, while the states 1, ..., N − 1 are called transient. The intermediate transition probabilities can be explained by reading the first factor as the probability of choosing the individual whose abundance will increase by one and the second factor as the probability of choosing the other type for death. If the same type is chosen for reproduction and for death, the abundance of neither type changes.
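The full tri-diagonal matrix can be assembled directly from these probabilities. A small sketch in exact rational arithmetic (function and variable names are our own):

```python
from fractions import Fraction

def transition_matrix(N):
    """(N+1) x (N+1) transition matrix of the neutral Moran process on states 0..N."""
    P = [[Fraction(0)] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        q = Fraction(i * (N - i), N * N)  # P_{i,i-1} = P_{i,i+1} = (i/N)((N-i)/N)
        if i > 0:
            P[i][i - 1] = q
        if i < N:
            P[i][i + 1] = q
        P[i][i] = 1 - 2 * q  # covers the absorbing states too, where q = 0
    return P

P = transition_matrix(6)
```

Every row sums to one, and the boundary rows reduce to the absorbing entries $P_{0,0} = P_{N,N} = 1$.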

Eventually the population will reach one of the absorbing states and remain there forever. In the transient states random fluctuations occur, but eventually the population of A will either go extinct or reach fixation. This is one of the most important differences from deterministic processes, which cannot model random events. The expected value and the variance of the number of A individuals X(t) at time t can be computed when an initial state X(0) = i is given:

\[
\begin{aligned}
\operatorname{E}[X(t)\mid X(0)=i] &= i \\
\operatorname{Var}(X(t)\mid X(0)=i) &= \frac{2i}{N}\left(1-\frac{i}{N}\right)\frac{1-\left(1-\frac{2}{N^{2}}\right)^{t}}{\frac{2}{N^{2}}}
\end{aligned}
\]
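These moments can be checked numerically by propagating the exact state distribution with the transition probabilities; the sketch below (parameter choices and names are our own) recovers the closed-form mean and variance:

```python
N, i0, T = 10, 3, 25  # population size, initial A count, number of steps

def step(dist):
    """Propagate a probability distribution over states 0..N one Moran step."""
    out = [0.0] * (N + 1)
    for i, pr in enumerate(dist):
        q = i * (N - i) / N**2  # P_{i,i-1} = P_{i,i+1}
        out[i] += pr * (1 - 2 * q)
        if i > 0:
            out[i - 1] += pr * q
        if i < N:
            out[i + 1] += pr * q
    return out

dist = [0.0] * (N + 1)
dist[i0] = 1.0
for _ in range(T):
    dist = step(dist)

mean = sum(i * pr for i, pr in enumerate(dist))
var = sum(i * i * pr for i, pr in enumerate(dist)) - mean**2
closed = (2 * i0 / N) * (1 - i0 / N) * (1 - (1 - 2 / N**2)**T) / (2 / N**2)
```

The propagated mean stays at i0 and the propagated variance agrees with the closed form up to floating-point error.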

For the expected value the calculation runs as follows. Writing p = i/N,

\[
\begin{aligned}
\operatorname{E}[X(t)\mid X(t-1)=i] &= (i-1)P_{i,i-1} + iP_{i,i} + (i+1)P_{i,i+1} \\
&= 2ip(1-p) + i\left(p^{2}+(1-p)^{2}\right) \\
&= i.
\end{aligned}
\]

Writing $Y = X(t)$ and $Z = X(t-1)$, and applying the law of total expectation, $\operatorname{E}[Y] = \operatorname{E}[\operatorname{E}[Y\mid Z]] = \operatorname{E}[Z]$. Applying the argument repeatedly gives $\operatorname{E}[X(t)] = \operatorname{E}[X(0)]$, or $\operatorname{E}[X(t)\mid X(0)=i] = i$.

For the variance the calculation runs as follows. Writing $V_{t} = \operatorname{Var}(X(t)\mid X(0)=i)$, we have

\[
\begin{aligned}
V_{1} &= \operatorname{E}\!\left[X(1)^{2}\mid X(0)=i\right] - \operatorname{E}[X(1)\mid X(0)=i]^{2} \\
&= (i-1)^{2}p(1-p) + i^{2}\left(p^{2}+(1-p)^{2}\right) + (i+1)^{2}p(1-p) - i^{2} \\
&= 2p(1-p)
\end{aligned}
\]

For all t, $(X(t)\mid X(t-1)=i)$ and $(X(1)\mid X(0)=i)$ are identically distributed, so their variances are equal. Writing as before $Y = X(t)$ and $Z = X(t-1)$, and applying the law of total variance,

\[
\begin{aligned}
\operatorname{Var}(Y) &= \operatorname{E}[\operatorname{Var}(Y\mid Z)] + \operatorname{Var}(\operatorname{E}[Y\mid Z]) \\
&= \operatorname{E}\!\left[\frac{2Z}{N}\left(1-\frac{Z}{N}\right)\right] + \operatorname{Var}(Z) \\
&= \frac{2\operatorname{E}[Z]}{N}\left(1-\frac{\operatorname{E}[Z]}{N}\right) + \left(1-\frac{2}{N^{2}}\right)\operatorname{Var}(Z).
\end{aligned}
\]

If $X(0) = i$, we obtain

\[
V_{t} = V_{1} + \left(1-\frac{2}{N^{2}}\right)V_{t-1}.
\]

Rewriting this equation as

\[
V_{t} - \frac{V_{1}}{\frac{2}{N^{2}}} = \left(1-\frac{2}{N^{2}}\right)\left(V_{t-1} - \frac{V_{1}}{\frac{2}{N^{2}}}\right) = \left(1-\frac{2}{N^{2}}\right)^{t-1}\left(V_{1} - \frac{V_{1}}{\frac{2}{N^{2}}}\right)
\]

yields

\[
V_{t} = V_{1}\,\frac{1-\left(1-\frac{2}{N^{2}}\right)^{t}}{\frac{2}{N^{2}}}
\]

as desired.


The probability that A reaches fixation is called the fixation probability. For the simple Moran process this probability is $x_{i} = i/N$.

Since all individuals have the same fitness, each has the same chance of becoming the ancestor of the whole population; this probability is 1/N, and thus the combined probability over all i A individuals is just i/N. The mean time to absorption starting in state i is given by

\[
k_{i} = N\left[\sum_{j=1}^{i}\frac{N-i}{N-j} + \sum_{j=i+1}^{N-1}\frac{i}{j}\right]
\]
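One way to check this formula is to verify that it satisfies the one-step recursion for mean hitting times, $k_{i} = 1 + P_{i,i-1}k_{i-1} + P_{i,i}k_{i} + P_{i,i+1}k_{i+1}$, with $k_{0} = k_{N} = 0$. A sketch in exact arithmetic (names are our own):

```python
from fractions import Fraction

def k(N, i):
    """Mean time to absorption of the neutral Moran process starting from i."""
    if i == 0 or i == N:
        return Fraction(0)  # absorbing states
    return N * (sum(Fraction(N - i, N - j) for j in range(1, i + 1))
                + sum(Fraction(i, j) for j in range(i + 1, N)))

# The formula satisfies the hitting-time recursion at every transient state
N = 12
for i in range(1, N):
    q = Fraction(i * (N - i), N * N)  # P_{i,i-1} = P_{i,i+1}
    assert k(N, i) == 1 + q * k(N, i - 1) + (1 - 2 * q) * k(N, i) + q * k(N, i + 1)
```

For example, for N = 2 the single transient state changes with probability 1/2, so the mean absorption time is k(2, 1) = 2.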

The mean time spent in state j when starting in state i is given by

\[
k_{i}^{j} = \delta_{ij} + P_{i,i-1}k_{i-1}^{j} + P_{i,i}k_{i}^{j} + P_{i,i+1}k_{i+1}^{j}
\]

Here $\delta_{ij}$ denotes the Kronecker delta. This recursive equation can be simplified by introducing a new variable $q_{i}$ with $P_{i,i-1} = P_{i,i+1} = q_{i}$, so that $P_{i,i} = 1 - 2q_{i}$, and rewriting it as

\[
k_{i+1}^{j} = 2k_{i}^{j} - k_{i-1}^{j} - \frac{\delta_{ij}}{q_{i}}
\]

Introducing the variable $y_{i}^{j} = k_{i}^{j} - k_{i-1}^{j}$, the equation becomes

\[
\begin{aligned}
y_{i+1}^{j} &= y_{i}^{j} - \frac{\delta_{ij}}{q_{i}} \\[4pt]
\sum_{i=1}^{m} y_{i}^{j} &= (k_{1}^{j}-k_{0}^{j}) + (k_{2}^{j}-k_{1}^{j}) + \cdots + (k_{m-1}^{j}-k_{m-2}^{j}) + (k_{m}^{j}-k_{m-1}^{j}) \\
&= k_{m}^{j} - k_{0}^{j} = k_{m}^{j} \\[4pt]
y_{1}^{j} &= k_{1}^{j} - k_{0}^{j} = k_{1}^{j} \\
y_{2}^{j} &= y_{1}^{j} - \frac{\delta_{1j}}{q_{1}} = k_{1}^{j} - \frac{\delta_{1j}}{q_{1}} \\
y_{3}^{j} &= k_{1}^{j} - \frac{\delta_{1j}}{q_{1}} - \frac{\delta_{2j}}{q_{2}} \\
&\ \ \vdots \\
y_{i}^{j} &= k_{1}^{j} - \sum_{r=1}^{i-1}\frac{\delta_{rj}}{q_{r}}
  = \begin{cases} k_{1}^{j} & j \geq i \\ k_{1}^{j} - \dfrac{1}{q_{j}} & j < i \end{cases} \\[4pt]
k_{i}^{j} &= \sum_{m=1}^{i} y_{m}^{j}
  = \begin{cases} i\,k_{1}^{j} & j \geq i \\ i\,k_{1}^{j} - \dfrac{i-j}{q_{j}} & j \leq i \end{cases}
\end{aligned}
\]

Knowing that $k_{N}^{j} = 0$ and

\[
q_{j} = P_{j,j+1} = \frac{j}{N}\cdot\frac{N-j}{N}
\]

we can calculate $k_{1}^{j}$:

\[
\begin{aligned}
k_{N}^{j} = \sum_{i=1}^{N} y_{i}^{j} = N\,k_{1}^{j} - \frac{N-j}{q_{j}} &= 0 \\
k_{1}^{j} &= \frac{N}{j}
\end{aligned}
\]

Therefore

\[
k_{i}^{j} = \begin{cases} \dfrac{i}{j}\,k_{j}^{j} & j \geq i \\[6pt] \dfrac{N-i}{N-j}\,k_{j}^{j} & j \leq i \end{cases}
\]

with $k_{j}^{j} = N$. Now $k_{i}$, the mean total time to absorption starting in state i, can be calculated:

\[
\begin{aligned}
k_{i} = \sum_{j=1}^{N-1} k_{i}^{j} &= \sum_{j=1}^{i} k_{i}^{j} + \sum_{j=i+1}^{N-1} k_{i}^{j} \\
&= \sum_{j=1}^{i} N\,\frac{N-i}{N-j} + \sum_{j=i+1}^{N-1} N\,\frac{i}{j}
\end{aligned}
\]

For large N the approximation

\[
k_{i} \approx -N^{2}\left[(1-x_{i})\ln(1-x_{i}) + x_{i}\ln(x_{i})\right]
\]

holds.
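A quick numerical comparison (our own sketch) shows that the approximation, which at i = N/2 equals N² ln 2, is already within about one percent of the exact sum for N = 100:

```python
import math

def k_exact(N, i):
    """Exact mean absorption time of the neutral Moran process."""
    return (sum(N * (N - i) / (N - j) for j in range(1, i + 1))
            + sum(N * i / j for j in range(i + 1, N)))

def k_approx(N, i):
    """Large-N approximation with x = i/N."""
    x = i / N
    return -N**2 * ((1 - x) * math.log(1 - x) + x * math.log(x))

ratio = k_exact(100, 50) / k_approx(100, 50)  # close to 1
```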

Selection

If one allele has a fitness advantage over the other, it is more likely to be chosen for reproduction. This can be incorporated into the model by giving individuals with allele A fitness $f_{i} > 0$ and individuals with allele B fitness $g_{i} > 0$, where $i$ is the number of individuals of type A; this describes a general birth-death process. The transition matrix of the stochastic process is again tri-diagonal. Let $r_{i} := f_{i}/g_{i}$; then the transition probabilities are

\[
\begin{aligned}
P_{i,i-1} &= \frac{g_{i}(N-i)}{f_{i}\,i + g_{i}(N-i)}\cdot\frac{i}{N}
  = \frac{1}{r_{i}\frac{i}{N} + \frac{N-i}{N}}\cdot\frac{N-i}{N}\cdot\frac{i}{N} \\
P_{i,i} &= 1 - P_{i,i-1} - P_{i,i+1} \\
P_{i,i+1} &= \frac{f_{i}\,i}{f_{i}\,i + g_{i}(N-i)}\cdot\frac{N-i}{N}
  = \frac{r_{i}}{r_{i}\frac{i}{N} + \frac{N-i}{N}}\cdot\frac{i}{N}\cdot\frac{N-i}{N}
\end{aligned}
\]

The entry $P_{i,j}$ denotes the probability of going from state i to state j. The difference from the neutral case above is that reproduction of an individual with allele A is now accepted with probability

\[
\frac{f_{i}/g_{i}}{\frac{f_{i}}{g_{i}}\cdot\frac{i}{N} + \frac{N-i}{N}},
\]

and reproduction of an individual with allele B is accepted with probability

\[
\frac{1}{\frac{f_{i}}{g_{i}}\cdot\frac{i}{N} + \frac{N-i}{N}},
\]

when the number of individuals with allele A is exactly i.
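The transition probabilities with selection can be transcribed directly; a sketch with the fitnesses passed as functions of i (all names here are our own):

```python
def selection_probs(N, i, f, g):
    """P_{i,i-1}, P_{i,i}, P_{i,i+1} for the Moran process with selection,
    where f(i), g(i) > 0 are the fitnesses of A and B individuals."""
    total = f(i) * i + g(i) * (N - i)  # total fitness of the population
    down = g(i) * (N - i) / total * (i / N)  # a B reproduces, an A dies
    up = f(i) * i / total * ((N - i) / N)    # an A reproduces, a B dies
    return down, 1.0 - down - up, up

# With equal fitnesses this reduces to the neutral transition probabilities
down, stay, up = selection_probs(10, 3, lambda i: 1.0, lambda i: 1.0)
```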

Also in this case, the fixation probability when starting in state i is defined by the recurrence

\[
x_{i} = \begin{cases} 0 & i = 0 \\ \beta_{i}x_{i-1} + (1-\alpha_{i}-\beta_{i})x_{i} + \alpha_{i}x_{i+1} & 1 \leq i \leq N-1 \\ 1 & i = N \end{cases}
\]

And the closed form is given by

\[
x_{i} = \frac{\displaystyle 1 + \sum_{j=1}^{i-1}\prod_{k=1}^{j}\gamma_{k}}{\displaystyle 1 + \sum_{j=1}^{N-1}\prod_{k=1}^{j}\gamma_{k}} \qquad \text{(1)}
\]

where $\gamma_{i} = P_{i,i-1}/P_{i,i+1}$ by definition; in the general case above this is just $g_{i}/f_{i}$.
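Equation (1) can be evaluated directly. The sketch below (names are our own) recovers $x_{i} = i/N$ when $\gamma_{k} \equiv 1$, and the constant-fitness result further below when $\gamma_{k} \equiv 1/r$:

```python
def fixation_probability(N, i, gamma):
    """Equation (1): fixation probability of A starting from i copies,
    with gamma(k) = P_{k,k-1} / P_{k,k+1}."""
    def series(m):
        # 1 + sum_{j=1}^{m-1} prod_{k=1}^{j} gamma(k)
        s, prod = 1.0, 1.0
        for k in range(1, m):
            prod *= gamma(k)
            s += prod
        return s
    return series(i) / series(N)

x_neutral = fixation_probability(10, 3, lambda k: 1.0)  # 3/10 in the neutral case
```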


Also in this case the fixation probabilities can be computed, but the transition probabilities are no longer symmetric. The notation $P_{i,i+1} = \alpha_{i}$, $P_{i,i-1} = \beta_{i}$, $P_{i,i} = 1-\alpha_{i}-\beta_{i}$ and $\gamma_{i} = \beta_{i}/\alpha_{i}$ is used. The fixation probability can be defined recursively, and a new variable $y_{i} = x_{i} - x_{i-1}$ is introduced.

\[
\begin{aligned}
x_{i} &= \beta_{i}x_{i-1} + (1-\alpha_{i}-\beta_{i})x_{i} + \alpha_{i}x_{i+1} \\
\beta_{i}(x_{i}-x_{i-1}) &= \alpha_{i}(x_{i+1}-x_{i}) \\
\gamma_{i}\,y_{i} &= y_{i+1}
\end{aligned}
\]

Now two properties from the definition of the variable yi can be used to find a closed form solution for the fixation probabilities:

\[
\begin{aligned}
\sum_{i=1}^{m} y_{i} &= x_{m} && (1) \\
y_{k} &= x_{1}\prod_{l=1}^{k-1}\gamma_{l} && (2) \\
\Rightarrow\ \sum_{m=1}^{i} y_{m} &= x_{1} + x_{1}\sum_{j=1}^{i-1}\prod_{k=1}^{j}\gamma_{k} = x_{i} && (3)
\end{aligned}
\]

Combining (3) and xN = 1:

\[
x_{1}\left(1 + \sum_{j=1}^{N-1}\prod_{k=1}^{j}\gamma_{k}\right) = x_{N} = 1,
\]

which implies:

\[
x_{1} = \frac{1}{\displaystyle 1 + \sum_{j=1}^{N-1}\prod_{k=1}^{j}\gamma_{k}}
\]

This in turn gives us:

\[
x_{i} = \frac{\displaystyle 1 + \sum_{j=1}^{i-1}\prod_{k=1}^{j}\gamma_{k}}{\displaystyle 1 + \sum_{j=1}^{N-1}\prod_{k=1}^{j}\gamma_{k}}
\]

This general case where the fitness of A and B depends on the abundance of each type is studied in evolutionary game theory.

Less complex results are obtained if a constant fitness ratio $r = 1/\gamma_{i}$ is assumed for all i. Individuals of type A reproduce with constant rate r, and individuals with allele B reproduce with rate 1. Thus if A has a fitness advantage over B, r will be larger than one; otherwise it will be smaller than one. The transition matrix of the stochastic process is again tri-diagonal, and the transition probabilities are

\[
\begin{aligned}
P_{0,0} &= 1 \\
P_{i,i-1} &= \frac{N-i}{r\,i + N-i}\cdot\frac{i}{N} = \frac{1}{r\frac{i}{N} + \frac{N-i}{N}}\cdot\frac{N-i}{N}\cdot\frac{i}{N} \\
P_{i,i} &= 1 - P_{i,i-1} - P_{i,i+1} \\
P_{i,i+1} &= \frac{r\,i}{r\,i + N-i}\cdot\frac{N-i}{N} = \frac{r}{r\frac{i}{N} + \frac{N-i}{N}}\cdot\frac{i}{N}\cdot\frac{N-i}{N} \\
P_{N,N} &= 1.
\end{aligned}
\]

In this case $\gamma_{i} = 1/r$ is a constant for every composition of the population, and the fixation probability from equation (1) simplifies to

\[
x_{i} = \frac{1-r^{-i}}{1-r^{-N}} \quad\Rightarrow\quad x_{1} = \rho = \frac{1-r^{-1}}{1-r^{-N}} \qquad \text{(2)}
\]

where the fixation probability of a single mutant A in a population otherwise consisting entirely of B individuals is often of interest and is denoted by ρ.
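Equation (2) can be checked by simulation. The sketch below (our own construction) uses the fact that, conditional on the state changing at all, the number of A individuals increases with probability r/(r + 1), which follows from dividing $P_{i,i+1}$ by $P_{i,i-1} + P_{i,i+1}$:

```python
import random

def rho_exact(N, r):
    """Equation (2): fixation probability of a single A mutant."""
    return (1 - r**-1) / (1 - r**-N)

def rho_simulated(N, r, trials, seed=0):
    rng = random.Random(seed)
    p_up = r / (r + 1)  # probability that a change in state is upward
    fixed = 0
    for _ in range(trials):
        i = 1  # a single A mutant
        while 0 < i < N:
            i += 1 if rng.random() < p_up else -1
        fixed += (i == N)
    return fixed / trials

est = rho_simulated(10, 2.0, 5000)  # should be close to rho_exact(10, 2.0)
```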

Also in the case of selection, the expected value and the variance of the number of A individuals may be computed

\[
\begin{aligned}
\operatorname{E}[X(t)\mid X(t-1)=i] &= \frac{ps(1-p)}{ps+1} + i \\
\operatorname{Var}(X(t+1)\mid X(t)=i) &= p(1-p)\,\frac{(s+1) + (ps+1)^{2}}{(ps+1)^{2}}
\end{aligned}
\]

where p = i/N, and r = 1 + s.
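These one-step moments can be verified by summing over the three possible transitions; a sketch with our own helper names:

```python
def step_moments(N, i, r):
    """Exact mean and variance of the one-step change Delta = X(t+1) - i."""
    total = r * i + N - i
    down = (N - i) / total * (i / N)    # probability that Delta = -1
    up = r * i / total * ((N - i) / N)  # probability that Delta = +1
    mean = up - down
    var = up + down - mean**2
    return mean, var

N, i, r = 20, 7, 1.3
p, s = i / N, r - 1
mean, var = step_moments(N, i, r)
mean_formula = p * s * (1 - p) / (p * s + 1)  # E[Delta], i.e. the drift term above
var_formula = p * (1 - p) * ((s + 1) + (p * s + 1)**2) / (p * s + 1)**2
```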


For the expected value the calculation runs as follows

\[
\begin{aligned}
\operatorname{E}[\Delta(1)\mid X(0)=i] &= (i-1-i)P_{i,i-1} + (i-i)P_{i,i} + (i+1-i)P_{i,i+1} \\
&= -\frac{N-i}{ri+N-i}\cdot\frac{i}{N} + \frac{ri}{ri+N-i}\cdot\frac{N-i}{N} \\
&= -\frac{(N-i)i}{(ri+N-i)N} + \frac{i(N-i)}{(ri+N-i)N} + \frac{si(N-i)}{(ri+N-i)N} \\
&= \frac{ps(1-p)}{ps+1} \\
\operatorname{E}[X(t)\mid X(t-1)=i] &= \frac{ps(1-p)}{ps+1} + i
\end{aligned}
\]

For the variance the calculation runs as follows, using the variance of a single step

\[
\begin{aligned}
\operatorname{Var}(X(t+1)\mid X(t)=i) &= \operatorname{Var}(X(t)\mid X(t)=i) + \operatorname{Var}(\Delta(t+1)\mid X(t)=i) \\
&= 0 + \operatorname{E}\!\left[\Delta(t+1)^{2}\mid X(t)=i\right] - \operatorname{E}[\Delta(t+1)\mid X(t)=i]^{2} \\
&= (i-1-i)^{2}P_{i,i-1} + (i-i)^{2}P_{i,i} + (i+1-i)^{2}P_{i,i+1} - \operatorname{E}[\Delta(t+1)\mid X(t)=i]^{2} \\
&= P_{i,i-1} + P_{i,i+1} - \operatorname{E}[\Delta(t+1)\mid X(t)=i]^{2} \\
&= \frac{(N-i)i}{(ri+N-i)N} + \frac{(N-i)i(1+s)}{(ri+N-i)N} - \operatorname{E}[\Delta(t+1)\mid X(t)=i]^{2} \\
&= i(N-i)\,\frac{2+s}{(ri+N-i)N} - \left(\frac{ps(1-p)}{ps+1}\right)^{2} \\
&= p(1-p)\,\frac{(2+s)(ps+1)}{(ps+1)^{2}} - p(1-p)\,\frac{ps^{2}(1-p)}{(ps+1)^{2}} \\
&= p(1-p)\,\frac{2+2ps+s+p^{2}s^{2}}{(ps+1)^{2}}
\end{aligned}
\]

Rate of evolution

In a population of all B individuals, a single mutant A will take over the whole population with the probability

\[
\rho = \frac{1-r^{-1}}{1-r^{-N}}. \qquad \text{(2)}
\]

If the mutation rate (to go from the B to the A allele) in the population is u, then the rate at which any one member of the population mutates to A is N·u, and the rate at which the whole population goes from all B to all A is the rate at which a single mutant A arises times the probability that it takes over the population (the fixation probability):

\[
R = N\,u\,\rho = u \quad\text{if}\quad \rho = \frac{1}{N}.
\]

Thus if the mutation is neutral (i.e. the fixation probability is just 1/N), the rate at which an allele arises and takes over the population is independent of the population size and is equal to the mutation rate. This important result is the basis of the neutral theory of evolution and suggests that the number of observed point mutations between the genomes of two different species would simply be given by the mutation rate multiplied by two times the time since divergence. The neutral theory of evolution thus provides a molecular clock, provided its assumptions are fulfilled, which may not be the case in reality.
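The point can be illustrated numerically (our own sketch): for a neutral mutation the substitution rate R = N·u·ρ equals the mutation rate u for every population size, while an advantageous mutation substitutes faster.

```python
def rho(N, r):
    """Fixation probability of a single mutant with relative fitness r."""
    return 1 / N if r == 1 else (1 - r**-1) / (1 - r**-N)

def substitution_rate(N, u, r):
    """Rate of appearance of new mutants times their fixation probability."""
    return N * u * rho(N, r)

u = 1e-8  # an assumed per-generation mutation rate, for illustration only
neutral_rates = [substitution_rate(N, u, 1) for N in (10, 100, 1000)]  # all equal u
beneficial_rate = substitution_rate(100, u, 1.1)                       # exceeds u
```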

References

  1. ^ Moran, P. A. P. (1958). "Random processes in genetics". Mathematical Proceedings of the Cambridge Philosophical Society. 54 (1): 60–71. doi:10.1017/S0305004100033193.

Further reading

  • Nowak, Martin A. (2006). Evolutionary Dynamics: Exploring the Equations of Life. Belknap Press. ISBN 978-0-674-02338-3.
  • Moran, Patrick Alfred Pierce (1962). The Statistical Processes of Evolutionary Theory. Oxford: Clarendon Press.

External links

  • "Evolutionary Dynamics on Graphs".