Open Access

Stochasticity and noise-induced transition of genetic toggle switch

Journal of Uncertainty Analysis and Applications 2014, 2:1

DOI: 10.1186/2195-5468-2-1

Received: 12 July 2013

Accepted: 27 November 2013

Published: 8 January 2014

Abstract

The ability to predict and analyze the function of genetic circuits will enhance the design of autonomous, programmable, complex regulatory genetic structures. An abundance of modeling techniques has recently been developed to delineate simple genetic structures in terms of their constituents. Simple systems with characteristics of feedback inhibition, multi-stability, switching, and oscillatory expression have often been the focus. The present work is an attempt to improve existing deterministic models, which fail to account for the crucial role of noise in genetic modeling.

The objective of this work is to analyze, model, and simulate the protein populations in gene expression mechanisms by resorting to stochastic algorithms. The system involves two types of genes; the protein produced from the expression of one gene is capable of turning off the expression of the other gene. Rates of degradation of these proteins are assumed to be proportional to their concentrations. The master equation of this ‘genetic toggle switch’ is formulated using the probabilistic population balance around a particular state and by considering five mutually exclusive events. The efficacy of the present methodology is mainly attributable to the ability to derive the governing equations for the means, variances, and covariance of the random variables by the method of system-size expansion of the nonlinear master equation. A less laborious approach based on Kurtz’s limit theorems for the derivation of the stochastic characteristics is also presented for comparison. Solving the resultant ordinary differential equations governing the means, variances, and covariance of the master equations simultaneously using the published data yields information concerning not only the means of the two populations of proteins but also the minimal uncertainties of the populations inherent in the expressions. It is demonstrated that systems with small populations are susceptible to large internal fluctuations (or uncertainties) in their population evolution. Large uncertainties are observed after the populations enter the proximity of the saddle node, which is likely to cause a transition of the system’s steady state from one to another. Independent Monte-Carlo simulation runs clearly demonstrate the occurrence of such internal noise-induced transitions.

Keywords

Stochastic nonlinear model; Genetic networks; Genetic circuits; Metastable; Noise-induced transitions

Introduction

One of the earliest examples of a bistable genetic switch is represented in the rightward operator of bacteriophage lambda [1, 2]. The essential elements of this type of genetic switch are a pair of promoters, each of which produces a repressor protein capable of inhibiting the production of the opposing repressor. Overlaid on these essential elements are several layers of regulatory nuance. To elucidate the impacts of these essential elements of a simplified regulatory circuit, a series of synthetic toggle switches were created.

Figure 1 shows the two-state genetic toggle switch consisting of two protein repressor genes and two promoters, which was investigated by Gardner et al. [3]. Each promoter enables the production of one repressor and is inhibited by the other. They elegantly designed experiments that demonstrated switching of a toggle circuit from one steady state to another by moving the system’s parameters across the bifurcation curve to a bistable region through either thermal inactivation of Repressor A or ligand binding-induced dissociation of the Repressor B-DNA complex. In the proximity of the bifurcation point, the final steady-state protein populations possess a bimodal distribution in their green fluorescent protein (GFP) fluorescence. The distribution does not exhibit a sharp jump from one fluorescence level to another, as the deterministic model predicts. The authors surmise that the stochastic nature of the dynamics blurs the bifurcation point.
Figure 1

Schematic diagram of the toggle switch in its two stable states. Both gene A and gene B produce a dimeric repressor protein that binds to the controlling element to prevent the expression of the opposing gene. At any given time step, the pool of each repressor monomer can increase, decrease due to degradation, or remain the same.

McAdams and Arkin’s [4, 5] Monte-Carlo simulations of gene expression revealed the importance of fluctuations, or noise or uncertainties, of small systems. In such small systems, proteins are produced from an activated promoter in short bursts of variable numbers of proteins that occur at random time intervals. As a result, there can be large differences in the time between successive events in regulatory cascades across a cell population, which, in turn, creates both spatial and temporal heterogeneity of cell populations in biological systems. Soon after the discovery of the potential impacts of the stochasticity of genetic regulatory systems, stochastic algorithms developed by chemical physicists were introduced into the analysis of gene expression (e.g., [6, 7]). The stochastic nature of a competitive expression mechanism can produce probabilistic outcomes in switching mechanisms that select between alternative regulatory paths, such as the toggle switch.

Stochastic algorithms have been developed for analyzing noise of different origins, both internal and external (e.g., [8–10]). External noises are the fluctuations created in an otherwise deterministic system by the application of an external random force, whose stochastic properties are supposed to be known. A Langevin equation is commonly adopted in the analysis of dynamics caused by external noises. Internal noise arises from discrete systems where only a limited number of variables affecting the populations of the discrete entities can be included in the analysis. Small discrete systems, such as genes of small populations, often exhibit notable internal fluctuations. A master equation, derived from a probabilistic population balance around a particular state of the system by taking into account all mutually exclusive events, has been adopted for this type of discrete-state, continuous-time stochastic process.

The stochasticity of gene expression is complicated by its nonlinearity. Multiple steady states, stability, and bifurcation in gene expression (e.g., [11]) can be intertwined with the analysis of noise, or fluctuations. The efficacy of the master equation algorithm in gene expression is mainly attributable to its powerful ability to solve the nonlinear master equations through the system size expansion [9, 12]. In this approach, a suitable expansion parameter must be identified in the master equation. The expansion parameter represents the size of the fluctuations, and therefore, the magnitude of the jumps, or transitions, of the system’s state. Since the internal noises are expected to be low when the system size is large, the system size has been proposed as an expansion parameter. Master equation formulation along with the system-size expansion has indeed been applied to the analysis of noise in gene expression. It should be mentioned that the limit theorems of Kurtz [13–15] have rendered the complex procedure of system size expansion simple and highly accessible. Kurtz’s proof demonstrated that the solution of a Langevin equation approaches that of van Kampen’s system-size expansion as the system size approaches infinity.

Kepler and Elston [16] examined the stochastic dynamics of the single-gene system with and without feedback and a switching system composed of two mutually repressed genes. Several assumptions were made in their simplified model: the two genes share the same operator and the same degradation rate, proteins bind to the operator as dimers, and the rate of dimerization is fast. Both master equation and Monte-Carlo simulation were adopted in their study. Scott et al. [17] adopted the master equation along with the system size expansion algorithm in the estimation of internal noise of the single-gene system that involves mRNA formation and degradation and protein formation and degradation.

The system size expansion has several limitations in modeling the gene regulatory process. It is a good approximation to the master equation only for small internal noise and large system size. Moreover, the noise should be well within the boundary of attraction [9]. Noise in oscillatory processes and noise far from the steady states have therefore been the focus of several studies. Tao et al. [18] studied the noise far from the steady states and revealed that during the approach to equilibrium, the noise is not always reduced by the strength of the feedback. This is contrary to results seen in the equilibrium limit, which show decreased noise with feedback strength. Ito and Uchida [19] found that the internal noise of a regulatory single-gene system grows without bound in oscillatory networks and developed an alternative method for estimating the evolution of internal noise in such systems.

Kepler and Elston’s simulation work [16] demonstrated that simple noisy genetic switches have rich bifurcation structures. Among them are bifurcations driven solely by changing the rate of operator fluctuations, even as the underlying deterministic system remains unchanged. They found stochastic bistability where the deterministic equations predict monostability and vice versa. Ochab-Marcinek [20] investigated the stationary behavior of a nonlinear system, a reduced, deterministic Yildirim and Mackey [21] model of the gene regulatory system, and discovered the transition of a steady state induced by noise. A Gaussian white noise perturbation was introduced into the deterministic model, followed by numerical simulations. Turcotte et al. [22] studied noise-induced stabilization of an unstable state of a genetic switch that undergoes a variety of bifurcations in response to parameter changes. Their Monte Carlo simulations showed that near one such bifurcation, noise induces oscillations around an unstable spiral point and thus effectively stabilizes this unstable fixed point.

In addition to the master equation algorithm, Monte Carlo simulation has been adopted to simulate the dynamic behavior of genetic regulatory systems under the influence of internal noise (e.g., [11, 23, 24]). The Monte Carlo simulation shares the same assumption, the Markov property, with the master equation, and the noise can be obtained directly by comparison with the master equation’s deterministic counterpart. Moreover, Monte Carlo simulation is capable of revealing the various characteristics of a nonlinear dynamic system, such as the number of steady states, bifurcation, and internal noise.

In this expositional work, the master equations are formulated by stochastic population balance. Van Kampen’s system size expansion of the resultant nonlinear master equation gives rise to the variances of the processes. We demonstrate that the implementation of Kurtz’s limit theorems can efficiently achieve the same goal. Simulations are conducted based on both the master equations and the Monte Carlo procedure for three systems: bistable, monostable, and on the bifurcation curve. Finally, we demonstrate the possibility of transition induced by internal noises for a bistable system.

Model formulation

A genetic toggle switch with negative feedback to the genes consists of two mutually coupled genes. The transcription products of these genes are two inhibitory repressor proteins competing to shut off the production of two constitutive promoters [1, 3, 25]; the protein transcribed by a gene of one type is capable of deactivating the transcription of the other gene. A toggle switch typically has more than one possible stable steady state depending on the reaction parameters under consideration [3]. There are a number of instances in nature where this switch-like behavior is utilized. The lysogeny/lysis switch of the bacteriophage λ virus infecting the bacterium Escherichia coli is a representative example and has been discussed in detail by Ptashne [1] and Ptashne and Gann [25].

Gardner [26] discussed results generated from a deterministic model of a negative feedback toggle switch. Each type of repressor protein is involved in two types of processes. The first process corresponds to the production of the protein. The rate of protein production is proportional to the concentration of mRNA, which, in turn, is proportional to the concentration of the un-repressed gene, G. The repressor binding on the un-repressed gene is commonly assumed to be in a quasi-steady state with the repressor, R, and the repressed gene, $GR_m$, i.e.,
$$G + mR \rightleftharpoons GR_m$$

Moreover, by assuming that the total number of un-repressed genes is much larger than that of R, so that G remains constant during the process, it can be shown that the rate of production of protein is proportional to $\frac{1}{1 + K R^m}$, where K is the equilibrium constant of the above reaction and R is the concentration of the repressor monomer [27–29]. The second process in the model of Gardner et al. is degradation of the protein, which is assumed to be first order.

Similar to the work of Gardner, we will assume in the current work that the genes are in equilibrium with their repressed genes.

The master equation describing the stochastic nature of the toggle switch is developed through the probabilistic population balance. The formulation of the master equation given below follows what Oppenheim et al. [8], Gardiner [10], and van Kampen [9] established. We have previously adopted this algorithm in the analysis of disease spread [30].

Mathematical assumptions

Let the random variables, N1(t) and N2(t), represent the populations of the repressor protein R1 and repressor protein R2 at time t, respectively. The random vector of the system is N(t) such that N(t) = [N1(t), N2(t)], and the realization of this random vector representing the state of the system at time t is given by n(t), where n(t) = [n1(t), n2(t)]. Moreover, the probability of the system to be in state n at time t is denoted by $P_{n_1,n_2}(t)$ or $P[n_1(t), n_2(t); t]$. The following assumptions are imposed in deriving the master equation governing the transition of the system among various states.
  1. The random vector, N(t), is Markovian, i.e., for any set of successive times, $t_1 < t_2 < \ldots < t_q$, we have $P[\mathbf{N}(t_q) \mid \mathbf{N}(t_1), \mathbf{N}(t_2), \ldots, \mathbf{N}(t_{q-1})] = P[\mathbf{N}(t_q) \mid \mathbf{N}(t_{q-1})]$.

  2. The number of increments or decrements in population numbers of the classes depends only on the time interval, Δt, but not on time, i.e., the process is temporally homogeneous, signifying that N(Δt) and [N(t + Δt) − N(t)] are identically distributed.

  3. The probability for an individual protein to be produced or degraded is proportional to the duration of the time interval, (t, t + Δt), if the value of Δt is sufficiently small.

  4. The probabilities that two or more transitions take place during the time interval, (t, t + Δt), are negligible, so that at most one transition occurs during this period.

  5. Individual proteins in the same class have the same probability of contacting the genes and, therefore, the same probability of repressing the genes. Similarly, the individual proteins in the same class have the same probability of being degraded.

Transition intensity functions

On the basis of the assumptions given in the preceding subsection, the transition probability of each event can be written in terms of the transition intensity functions, k1, k2, α2, and α4, as follows:

The first transition-intensity function, k1, is the production probability of a type-1 repressor protein from a particular active (not repressed) gene of type-1 per unit time. Based on the assumption of temporal homogeneity, we have
$$\Pr\{\text{a particular active gene of type 1 will produce a repressor protein of type 1 during the time interval } (t, t+\Delta t)\} = k_1\,\Delta t + o(\Delta t),$$
where $\lim_{\Delta t \to 0} \frac{o(\Delta t)}{\Delta t} = 0$. By considering all active type-1 genes in the system, the probability that the population of the type-1 protein will increase by one is $k_1 G_{a1}$, where $G_{a1}$ denotes the number of active genes of type-1, i.e., the genes that are not repressed. Mathematically,
$$\Pr\{\text{a type-1 protein will be produced during the time interval } (t, t+\Delta t)\} = k_1 G_{a1}\,\Delta t + o(\Delta t) = \alpha_1 f_1\,\Delta t + o(\Delta t)$$

where f1 is the ratio of the population of active genes to the total, active and repressed, genes of type-1. In writing the last line of the above statement, we assume that the total number of genes remains constant during the process of interest. Thus, the parameter, α1, is the probability per unit time that a particular active gene will transcribe and produce a type-1 protein, multiplied by the total number of genes.

For a negative feedback genetic circuit, Goodwin [27, 28] and Griffith [29] showed that
$$f_1 = \frac{1}{1 + K_a n_2^m},$$
where $K_a$ is the equilibrium constant of the combination reaction of the active gene of type-1 and its repressor, an m-mer, and m is the number of protein monomers of type-2 in the repressor. Combining the last two equations yields
$$\Pr\{\text{a type-1 protein will be produced in the time interval } (t, t+\Delta t)\} = \frac{\alpha_1}{1 + K_a n_2^m}\,\Delta t + o(\Delta t)$$
(1)
The second transition intensity function, α2, is the overall consumption probability of a particular protein of type-1 in the time interval, (t, t + Δt), including its role in repressing the type-2 gene. Mathematically,
$$\Pr\{\text{a particular repressor protein of type 1 will be consumed in the time interval } (t, t+\Delta t)\} = \alpha_2\,\Delta t + o(\Delta t)$$
By considering all repressor proteins of type-1 in the system, the probability that the population of the type-1 protein will decrease by one is $\alpha_2 n_1$, or,
$$\Pr\{\text{a type-1 repressor protein will be consumed in the time interval } (t, t+\Delta t)\} = \alpha_2 n_1\,\Delta t + o(\Delta t)$$
(2)
By analogy, the third transition intensity function, k2, is the production probability of a type-2 repressor protein from a particular active (not repressed) gene of type-2 per unit time, or,
$$\Pr\{\text{a particular active gene of type 2 will produce a repressor protein of type 2 during the time interval } (t, t+\Delta t)\} = k_2\,\Delta t + o(\Delta t)$$
This definition leads to the following transition probability:
$$\Pr\{\text{a type-2 protein will be produced during the time interval } (t, t+\Delta t)\} = k_2 G_{a2}\,\Delta t + o(\Delta t) = \alpha_3 f_2\,\Delta t + o(\Delta t) = \frac{\alpha_3}{1 + K_b n_1^M}\,\Delta t + o(\Delta t)$$
(3)
where $G_{a2}$ denotes the number of active genes of type-2 and $f_2$ the ratio of the population of active genes to the total, active and repressed, genes of type-2, or,
$$f_2 = \frac{1}{1 + K_b n_1^M},$$

$K_b$ is the equilibrium constant of the combination reaction of the active gene of type-2 and its repressor of type-1, an M-mer, and M is the number of protein monomers of type-1 in the repressor.

Also by analogy, the fourth transition-intensity function, α4, is the consumption probability of a particular protein of type-2 during the time interval, (t, t + Δt), or,
$$\Pr\{\text{a particular repressor protein of type 2 will be consumed in the time interval } (t, t+\Delta t)\} = \alpha_4\,\Delta t + o(\Delta t)$$
By considering all repressor proteins of type-2 in the system, we have
$$\Pr\{\text{a type-2 repressor protein will be consumed in the time interval } (t, t+\Delta t)\} = \alpha_4 n_2\,\Delta t + o(\Delta t)$$
(4)
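The four transition intensity functions above are exactly the propensities needed for a Gillespie-type Monte-Carlo realization of the toggle switch. The following minimal sketch illustrates this; all parameter values (ALPHA1 through ALPHA4, KA, KB, and the cooperativities) are illustrative placeholders, not the published data used later in the paper:

```python
import random

# Hypothetical parameters, chosen only for illustration.
ALPHA1, ALPHA2 = 50.0, 1.0   # type-1 production / degradation intensities
ALPHA3, ALPHA4 = 50.0, 1.0   # type-2 production / degradation intensities
KA, KB = 1e-3, 1e-3          # equilibrium constants K_a, K_b
m, M = 2, 2                  # repressor multimerization orders

def propensities(n1, n2):
    """The four transition intensities of Equations 1-4."""
    return [
        ALPHA1 / (1.0 + KA * n2 ** m),   # a type-1 protein is produced
        ALPHA2 * n1,                     # a type-1 protein is degraded
        ALPHA3 / (1.0 + KB * n1 ** M),   # a type-2 protein is produced
        ALPHA4 * n2,                     # a type-2 protein is degraded
    ]

JUMPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def gillespie(n1, n2, t_end, seed=0):
    """One Monte-Carlo realization of the toggle switch up to time t_end."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        a = propensities(n1, n2)
        a0 = sum(a)                       # total transition intensity
        t += rng.expovariate(a0)          # waiting time to the next event
        if t > t_end:
            return n1, n2
        # choose which of the four mutually exclusive events occurs
        u, acc, k = rng.random() * a0, 0.0, 0
        for k, ak in enumerate(a):
            acc += ak
            if u <= acc:
                break
        dn1, dn2 = JUMPS[k]
        n1, n2 = n1 + dn1, n2 + dn2
```

Because the production propensities are strictly positive, the total intensity never vanishes and the waiting-time draw is always well defined.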

It should be noted that the rates adopted in deterministic models, discussed at the outset of the ‘Model formulation’ section, are used in defining the transition intensity functions above. The transition intensity functions are of pivotal importance in master equation models and Monte Carlo simulations. More importantly, the adoption of deterministic rate constants in the master equation is a cornerstone in the interpretation of intrinsic (or internal) noise [9].

Master equations

Based on the transition intensity functions defined above, the master equation can be obtained by taking probability balance of the following five mutually exclusive events leading to the evolution of the state of the system:

  • an R1 is produced while R2 remains constant

  • an R1 is degraded while R2 remains constant

  • an R2 is produced while R1 remains constant

  • an R2 is degraded while R1 remains constant

  • both R1 and R2 remain the same.

As illustrated in Figure 2, the probabilities that these five exclusive events will lead the system to state n at arbitrary time (t + Δt) can be written as follows:
$$\Pr\{\text{the system will transfer from state } \mathbf{n}'(t) = (n_1-1, n_2) \text{ into state } \mathbf{n}(t+\Delta t) = (n_1, n_2) \text{ due to the production of one type-1 protein in the time interval } (t, t+\Delta t)\} = W[\mathbf{n}(t+\Delta t), \mathbf{n}'(t)]\, P_{n_1-1,n_2}(t)\,\Delta t = W[(n_1,n_2),(n_1-1,n_2)]\, P_{n_1-1,n_2}(t)\,\Delta t = \frac{\alpha_1}{1+K_a n_2^m}\, P_{n_1-1,n_2}(t)\,\Delta t + o(\Delta t)$$
(5)
where $W[\mathbf{n}, \mathbf{n}']$ is the conditional probability per unit time of the system’s transition from state $\mathbf{n}'(t)$ to state $\mathbf{n}(t+\Delta t)$.
$$\Pr\{\text{the system will transfer from state } \mathbf{n}'(t) = (n_1+1, n_2) \text{ into state } \mathbf{n}(t+\Delta t) = (n_1, n_2) \text{ due to the degradation of a type-1 protein in the time interval } (t, t+\Delta t)\} = W[\mathbf{n}(t+\Delta t), \mathbf{n}'(t)]\, P_{n_1+1,n_2}(t)\,\Delta t = W[(n_1,n_2),(n_1+1,n_2)]\, P_{n_1+1,n_2}(t)\,\Delta t = (n_1+1)\,\alpha_2\, P_{n_1+1,n_2}(t)\,\Delta t + o(\Delta t)$$
(6)
$$\Pr\{\text{the system will transfer from state } \mathbf{n}'(t) = (n_1, n_2-1) \text{ into state } \mathbf{n}(t+\Delta t) = (n_1, n_2) \text{ due to the production of one type-2 protein in the time interval } (t, t+\Delta t)\} = W[\mathbf{n}(t+\Delta t), \mathbf{n}'(t)]\, P_{n_1,n_2-1}(t)\,\Delta t = W[(n_1,n_2),(n_1,n_2-1)]\, P_{n_1,n_2-1}(t)\,\Delta t = \frac{\alpha_3}{1+K_b n_1^M}\, P_{n_1,n_2-1}(t)\,\Delta t + o(\Delta t)$$
(7)
$$\Pr\{\text{the system will transfer from state } \mathbf{n}'(t) = (n_1, n_2+1) \text{ into state } \mathbf{n}(t+\Delta t) = (n_1, n_2) \text{ due to the degradation of a type-2 protein in the time interval } (t, t+\Delta t)\} = W[\mathbf{n}(t+\Delta t), \mathbf{n}'(t)]\, P_{n_1,n_2+1}(t)\,\Delta t = W[(n_1,n_2),(n_1,n_2+1)]\, P_{n_1,n_2+1}(t)\,\Delta t = (n_2+1)\,\alpha_4\, P_{n_1,n_2+1}(t)\,\Delta t + o(\Delta t)$$
(8)
$$\Pr\{\text{the system will remain at state } \mathbf{n} = (n_1, n_2) \text{ during the time interval } (t, t+\Delta t)\} = W[\mathbf{n}(t+\Delta t), \mathbf{n}(t)]\, P_{n_1,n_2}(t)\,\Delta t = W[(n_1,n_2),(n_1,n_2)]\, P_{n_1,n_2}(t)\,\Delta t = \left[1 - \left(\frac{\alpha_1}{1+K_a n_2^m} + n_1\alpha_2 + \frac{\alpha_3}{1+K_b n_1^M} + n_2\alpha_4\right)\Delta t\right] P_{n_1,n_2}(t) + o(\Delta t)$$
(9)
Figure 2

Probabilistic population balance involving five mutually exclusive events in the time interval, (t, t + Δt).

Since these five events are mutually exclusive, we have
$$P_{n_1,n_2}(t+\Delta t) = W[(n_1,n_2),(n_1-1,n_2)]\, P_{n_1-1,n_2}(t)\,\Delta t + W[(n_1,n_2),(n_1+1,n_2)]\, P_{n_1+1,n_2}(t)\,\Delta t + W[(n_1,n_2),(n_1,n_2-1)]\, P_{n_1,n_2-1}(t)\,\Delta t + W[(n_1,n_2),(n_1,n_2+1)]\, P_{n_1,n_2+1}(t)\,\Delta t + W[(n_1,n_2),(n_1,n_2)]\, P_{n_1,n_2}(t)\,\Delta t$$
By substituting all the transition probabilities discussed in Equations 5 through 9 into the above expression, we obtain the probability of the system at state n at arbitrary time (t + Δt) as follows:
$$P_{n_1,n_2}(t+\Delta t) = \frac{\alpha_1}{1+K_a n_2^m}\, P_{n_1-1,n_2}(t)\,\Delta t + (n_1+1)\,\alpha_2\, P_{n_1+1,n_2}(t)\,\Delta t + \frac{\alpha_3}{1+K_b n_1^M}\, P_{n_1,n_2-1}(t)\,\Delta t + (n_2+1)\,\alpha_4\, P_{n_1,n_2+1}(t)\,\Delta t + \left[1 - \left(\frac{\alpha_1}{1+K_a n_2^m} + n_1\alpha_2 + \frac{\alpha_3}{1+K_b n_1^M} + n_2\alpha_4\right)\Delta t\right] P_{n_1,n_2}(t)$$
By rearranging the above equation and taking the limit as Δt → 0, we obtain the following master equation:
$$\frac{dP_{n_1,n_2}(t)}{dt} = \frac{\alpha_1}{1+K_a n_2^m}\, P_{n_1-1,n_2}(t) + (n_1+1)\,\alpha_2\, P_{n_1+1,n_2}(t) + \frac{\alpha_3}{1+K_b n_1^M}\, P_{n_1,n_2-1}(t) + (n_2+1)\,\alpha_4\, P_{n_1,n_2+1}(t) - \left[\frac{\alpha_1}{1+K_a n_2^m} + n_1\alpha_2 + \frac{\alpha_3}{1+K_b n_1^M} + n_2\alpha_4\right] P_{n_1,n_2}(t)$$
(10)
For convenience, the one-step operator, $\mathbb{E}$, is defined through its effect on an arbitrary function f(n), following van Kampen [9]:
$$\mathbb{E}\, f(n) = f(n+1) \quad \text{and} \quad \mathbb{E}^{-1} f(n) = f(n-1)$$
(11)
The master equation is rewritten compactly in terms of the one-step operator as follows:
$$\frac{dP_{n_1,n_2}(t)}{dt} = \frac{\alpha_1}{1+K_a n_2^m}\left(\mathbb{E}_{n_1}^{-1} - 1\right) P_{n_1,n_2}(t) + \left(\mathbb{E}_{n_1} - 1\right)\alpha_2 n_1\, P_{n_1,n_2}(t) + \frac{\alpha_3}{1+K_b n_1^M}\left(\mathbb{E}_{n_2}^{-1} - 1\right) P_{n_1,n_2}(t) + \left(\mathbb{E}_{n_2} - 1\right)\alpha_4 n_2\, P_{n_1,n_2}(t)$$
(12)

The solution to the equation with the step operator yields the time-dependent joint probability distribution of the populations of repressor proteins.
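Because each state (n1, n2) couples only to its four nearest neighbors, the master equation can be integrated numerically on a truncated state space. The sketch below (in Python with NumPy) uses illustrative parameters and a hypothetical truncation size, not the paper’s published data, and assumes the probability leaking past the truncation boundary is negligible:

```python
import numpy as np

# Illustrative parameters (not from the paper).
A1, A2, A3, A4 = 10.0, 1.0, 10.0, 1.0
KA, KB, m, M = 0.01, 0.01, 2, 2
N = 40   # truncation: states 0 <= n1, n2 < N

def rhs(P):
    """dP/dt for every state (n1, n2), from the gain/loss terms of Eq 10."""
    n1 = np.arange(N)[:, None]
    n2 = np.arange(N)[None, :]
    prod1 = A1 / (1.0 + KA * n2 ** m)    # rate of n1 -> n1 + 1
    deg1 = A2 * n1                       # rate of n1 -> n1 - 1
    prod2 = A3 / (1.0 + KB * n1 ** M)    # rate of n2 -> n2 + 1
    deg2 = A4 * n2                       # rate of n2 -> n2 - 1
    dP = -(prod1 + deg1 + prod2 + deg2) * P   # loss from state (n1, n2)
    dP[1:, :] += (prod1 * P)[:-1, :]     # gain from (n1 - 1, n2)
    dP[:-1, :] += (deg1 * P)[1:, :]      # gain from (n1 + 1, n2)
    dP[:, 1:] += (prod2 * P)[:, :-1]     # gain from (n1, n2 - 1)
    dP[:, :-1] += (deg2 * P)[:, 1:]      # gain from (n1, n2 + 1)
    return dP

P = np.zeros((N, N)); P[0, 0] = 1.0      # start with no proteins
dt = 1e-3
for _ in range(2000):                    # simple Euler stepping to t = 2
    P += dt * rhs(P)

# total probability is conserved up to truncation error
print(P.sum())
```

The step size must keep dt times the largest total transition rate well below one, or the explicit Euler update can drive probabilities negative.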

System-size expansion based on van Kampen’s procedure

The master equation, Equation 10 or 12, governs the evolution of the joint probability distribution of the populations of the two competing repressors, $P_{\mathbf{n}}(t)$. Equation 10 comprises a set of ordinary differential equations with the joint probability function, $P_{\mathbf{n}}(t)$, as its unknown. Each equation in the set represents a particular outcome of n; thus, solving Equation 12 for the joint probability distribution over the exceedingly large number of all possible values of n is extremely difficult, if not impossible. In practice, however, it often suffices to determine only the expressions that govern a limited number of moments, especially the first and second moments, of the resultant population distribution. These expressions yield the means, variances, and covariances that can be correlated or compared with the experimental data.

Moreover, Equation 12 is nonlinear, which prevents the moments from being evaluated by averaging techniques or joint probability generating function techniques [9]. This difficulty is circumvented by resorting to the system-size expansion, a rational approximation technique based on the power series expansion [9, 12, 31]. The technique gives rise to the deterministic macroscopic equations as well as the equations of fluctuations for the master equation.

To apply the system-size expansion, a suitable expansion parameter must be identified in the master equation, specifically in the transition intensity functions. The expansion parameter must govern the size of the fluctuations, and therefore, the magnitude of the jumps, or transitions. The macroscopic features are determined by the average behavior of all particles, while internal fluctuations are caused by the discrete nature of matter. Hence, we expect the fluctuations to be relatively small when the system size is large. The system size, Ω, has been proposed as an expansion parameter because it measures the relative importance of the fluctuations [9, 12, 31]. In the current genetic regulatory network, the total initial promoter population, or the total number of initial reactants, is chosen as Ω, so that the noises estimated based on both the master equation and the Monte Carlo simulations discussed below represent the standard deviations from the means.

For a linear system, fluctuations are of the order of $\Omega^{1/2}$ in a collection of Ω entities. As a result, their effect on the macroscopic properties is of the order of $\Omega^{-1/2}$ [9, 12]. In the system under consideration, therefore, we expect that the joint probability, $P_{\mathbf{n}}(t)$, will have a sharp maximum around the macroscopic value, $\mathbf{n}(t) = \Omega\Theta(t)$, with a width of the order of $\Omega^{1/2}$. Here, Θ(t) is a vector whose elements are the mean numbers of the two protein populations, ϕ(t) and θ(t), obtained through the solution of the macroscopic equations, as will be elaborated later. To exploit these characteristics of the system, a new random vector Y(t) is defined as follows:
$$N_1(t) = \Omega\,\phi(t) + \Omega^{1/2}\, Y_1(t)$$
(13)
$$N_2(t) = \Omega\,\theta(t) + \Omega^{1/2}\, Y_2(t)$$
(14)
The equations of realizations of these expressions are given, respectively, by
$$n_1(t) = \Omega\,\phi(t) + \Omega^{1/2}\, y_1(t)$$
(15)
$$n_2(t) = \Omega\,\theta(t) + \Omega^{1/2}\, y_2(t)$$
(16)

Accordingly, the joint probability of n1 and n2, i.e., $P_{\mathbf{n}}(t)$, is now transformed into that of y1 and y2, i.e., $\Psi_{\mathbf{y}}(t)$. Subsequently, the new random vector, Y, the new joint probability distribution, $\Psi_{\mathbf{y}}(t)$, and the definition of the one-step operator, Equation 11, are substituted into Equation 12. By expanding the right-hand side of the resultant expression into a Taylor series, the master equation in terms of the new variables is obtained; see Appendix 1. All appendices to this paper can be found in the supporting materials for this journal.

Collecting the terms of order $\Omega^{1/2}$ on the right-hand side of the expanded equation gives rise to the following expressions governing the evolution of the macroscopic equation of the system:
$$\frac{d\phi}{dt} = \frac{\alpha_1'}{1 + K_a'\,\theta(t)^m} - \alpha_2\,\phi(t)$$
(17)
$$\frac{d\theta}{dt} = \frac{\alpha_3'}{1 + K_b'\,\phi(t)^M} - \alpha_4\,\theta(t)$$
(18)
where the constants, $\alpha_1'$, $K_a'$, $\alpha_3'$, and $K_b'$, correspond respectively to the parameters $\alpha_1$, $K_a$, $\alpha_3$, and $K_b$, normalized with Ω or a specific power of Ω so that the collected terms in the system size expansion have the same order of magnitude, i.e.,
$$\alpha_1' = \frac{\alpha_1}{\Omega}, \qquad K_a' = K_a\,\Omega^m$$
(19)
$$\alpha_3' = \frac{\alpha_3}{\Omega}, \qquad K_b' = K_b\,\Omega^M$$
(20)

Equations 17 and 18 are of the same form as the macroscopic equations of Gardner [26].
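As a quick illustration, the macroscopic equations 17 and 18 can be integrated with a standard fourth-order Runge-Kutta step. The parameter values below are hypothetical, chosen only to exhibit bistability; they are not taken from the paper:

```python
# Illustrative parameters: alpha1', alpha2, alpha3', alpha4, Ka', Kb', m, M.
A1, A2, A3, A4 = 10.0, 1.0, 10.0, 1.0
KA, KB, m, M = 2.0, 2.0, 2, 2

def f(phi, theta):
    """Right-hand sides of Equations 17 and 18."""
    dphi = A1 / (1.0 + KA * theta ** m) - A2 * phi
    dtheta = A3 / (1.0 + KB * phi ** M) - A4 * theta
    return dphi, dtheta

def rk4(phi, theta, dt, steps):
    """Classical fourth-order Runge-Kutta integration."""
    for _ in range(steps):
        k1 = f(phi, theta)
        k2 = f(phi + 0.5 * dt * k1[0], theta + 0.5 * dt * k1[1])
        k3 = f(phi + 0.5 * dt * k2[0], theta + 0.5 * dt * k2[1])
        k4 = f(phi + dt * k3[0], theta + dt * k3[1])
        phi += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        theta += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return phi, theta

# Two different initial conditions relax to the two different stable states.
hi_lo = rk4(5.0, 0.0, 0.01, 5000)
lo_hi = rk4(0.0, 5.0, 0.01, 5000)
print(hi_lo, lo_hi)
```

With these symmetric parameters, the deterministic system is bistable: whichever repressor starts ahead ends up dominant, the hallmark of the toggle switch.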

Similarly, collecting the terms of order Ω0 gives rise to the following linear Fokker-Planck equation [9] (see Appendix 1), which governs the first and second moments associated with the fluctuations of the system:
$$\frac{\partial\Psi}{\partial t} = -\sum_{i,j} A_{ij}\,\frac{\partial}{\partial y_i}\left(y_j\,\Psi\right) + \frac{1}{2}\sum_{i,j} B_{ij}\,\frac{\partial^2\Psi}{\partial y_i\,\partial y_j}$$
(21)
where the two matrices A and B are
$$\mathbf{A} = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} = \begin{pmatrix} -\alpha_2 & -\dfrac{\alpha_1' K_a'\, m\,\theta(t)^{m-1}}{\left[1+K_a'\,\theta(t)^m\right]^2} \\ -\dfrac{\alpha_3' K_b'\, M\,\phi(t)^{M-1}}{\left[1+K_b'\,\phi(t)^M\right]^2} & -\alpha_4 \end{pmatrix}$$
(22)
$$\mathbf{B} = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix} = \begin{pmatrix} \dfrac{\alpha_1'}{1+K_a'\,\theta(t)^m} + \alpha_2\,\phi(t) & 0 \\ 0 & \dfrac{\alpha_3'}{1+K_b'\,\phi(t)^M} + \alpha_4\,\theta(t) \end{pmatrix}$$
(23)

A Fokker-Planck equation is considered linear if the coefficient matrix A, the drift term, is a linear function of Y and the coefficient matrix B, the diffusion term, is constant [9]. Note that the macroscopic trajectories, ϕ and θ, are functions of t only, and they can be obtained by integrating Equations 17 and 18. Thus, the coefficients of the equation governing the fluctuations, A and B in Equations 22 and 23, are independent of the fluctuations, Y. For a linear Fokker-Planck equation, the ordinary differential equations governing the means and variances of the fluctuations, Y, can be derived by taking the first and second moments of Equation 21.

Taking the first moment of Equation 21 yields the expression governing the mean of the fluctuations, Y:
$$\frac{d}{dt}E[Y_k] = \sum_{j=1}^{2} A_{kj}\, E[Y_j], \quad k = 1, 2$$
Substituting Equations 22 and 23 into the above expression gives
$$\frac{d}{dt}E[Y_1] = A_{11}E[Y_1] + A_{12}E[Y_2] = -\alpha_2\, E[Y_1] - \frac{\alpha_1' K_a'\, m\,\theta(t)^{m-1}}{\left[1+K_a'\,\theta(t)^m\right]^2}\, E[Y_2]$$
(24)
$$\frac{d}{dt}E[Y_2] = A_{21}E[Y_1] + A_{22}E[Y_2] = -\frac{\alpha_3' K_b'\, M\,\phi(t)^{M-1}}{\left[1+K_b'\,\phi(t)^M\right]^2}\, E[Y_1] - \alpha_4\, E[Y_2]$$
(25)
Similarly, taking the second moment of Equation 21 yields the expression governing the second moment of the fluctuations, Y:
$$\frac{d}{dt}E[Y_i Y_j] = \sum_{k=1}^{2} A_{ik}\, E[Y_k Y_j] + \sum_{k=1}^{2} A_{jk}\, E[Y_i Y_k] + B_{ij}$$
Substituting Equations 22 and 23 into the above expression gives
$$\frac{d}{dt}E[Y_1^2] = 2A_{11}E[Y_1^2] + 2A_{12}E[Y_1Y_2] + B_{11} = -2\alpha_2\, E[Y_1^2] - \frac{2\,\alpha_1' K_a'\, m\,\theta(t)^{m-1}}{\left[1+K_a'\,\theta(t)^m\right]^2}\, E[Y_1Y_2] + \frac{\alpha_1'}{1+K_a'\,\theta(t)^m} + \alpha_2\,\phi(t)$$
(26)
$$\frac{d}{dt}E[Y_2^2] = 2A_{22}E[Y_2^2] + 2A_{21}E[Y_1Y_2] + B_{22} = -2\alpha_4\, E[Y_2^2] - \frac{2\,\alpha_3' K_b'\, M\,\phi(t)^{M-1}}{\left[1+K_b'\,\phi(t)^M\right]^2}\, E[Y_1Y_2] + \frac{\alpha_3'}{1+K_b'\,\phi(t)^M} + \alpha_4\,\theta(t)$$
(27)
$$\frac{d}{dt}E[Y_1Y_2] = A_{11}E[Y_1Y_2] + A_{12}E[Y_2^2] + A_{21}E[Y_1^2] + A_{22}E[Y_1Y_2] = -\alpha_2\, E[Y_1Y_2] - \frac{\alpha_1' K_a'\, m\,\theta(t)^{m-1}}{\left[1+K_a'\,\theta(t)^m\right]^2}\, E[Y_2^2] - \frac{\alpha_3' K_b'\, M\,\phi(t)^{M-1}}{\left[1+K_b'\,\phi(t)^M\right]^2}\, E[Y_1^2] - \alpha_4\, E[Y_1Y_2]$$
(28)
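Equations 17-18 and 26-28 form a closed set of ordinary differential equations. The following minimal Euler sketch (illustrative parameters, not the published data) propagates the macroscopic trajectory together with the variances and covariance of the fluctuations; for a deterministic initial condition, the means of Equations 24-25 start at zero and remain zero, so they are omitted:

```python
# Illustrative parameters only.
A1, A2, A3, A4 = 10.0, 1.0, 10.0, 1.0
KA, KB, m, M = 2.0, 2.0, 2, 2

def a_matrix(phi, theta):
    """Drift matrix A of Equation 22, evaluated on the macroscopic trajectory."""
    a12 = -A1 * KA * m * theta ** (m - 1) / (1.0 + KA * theta ** m) ** 2
    a21 = -A3 * KB * M * phi ** (M - 1) / (1.0 + KB * phi ** M) ** 2
    return -A2, a12, a21, -A4

def b_matrix(phi, theta):
    """Diagonal diffusion matrix B of Equation 23."""
    return (A1 / (1.0 + KA * theta ** m) + A2 * phi,
            A3 / (1.0 + KB * phi ** M) + A4 * theta)

phi, theta = 5.0, 0.0
v11 = v22 = v12 = 0.0            # E[Y1^2], E[Y2^2], E[Y1 Y2]
dt = 1e-3
for _ in range(10_000):          # Euler stepping to t = 10
    a11, a12, a21, a22 = a_matrix(phi, theta)
    b11, b22 = b_matrix(phi, theta)
    dphi = A1 / (1.0 + KA * theta ** m) - A2 * phi        # Equation 17
    dtheta = A3 / (1.0 + KB * phi ** M) - A4 * theta      # Equation 18
    dv11 = 2 * a11 * v11 + 2 * a12 * v12 + b11            # Equation 26
    dv22 = 2 * a22 * v22 + 2 * a21 * v12 + b22            # Equation 27
    dv12 = (a11 + a22) * v12 + a12 * v22 + a21 * v11      # Equation 28
    phi += dt * dphi; theta += dt * dtheta
    v11 += dt * dv11; v22 += dt * dv22; v12 += dt * dv12

print(phi, theta, v11, v22, v12)
```

Away from the bifurcation, the variances settle to finite stationary values; near the saddle node they grow large, which is the signature of the noise-induced transitions discussed later.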

System size expansion based on Kurtz’s limit theorems

The approximation of the master equation discussed in the preceding section, i.e., the system size expansion method, can be derived and stated compactly in a general form based on Kurtz’s limit theorems [13–15] under the condition Ω → ∞. First, the master equation, Equation 10, can be written in the following continuous-state, gain-loss form [9]:
$$\frac{\partial P(\mathbf{n},t)}{\partial t} = \int \left[ W(\mathbf{n}-\mathbf{r};\,\mathbf{n})\, P(\mathbf{n}-\mathbf{r},t) - W(\mathbf{n};\,\mathbf{n}+\mathbf{r})\, P(\mathbf{n},t) \right] d\mathbf{r}$$
(29)
where W(n; n + r) is the transition probability from state n to state n + r per unit time. Both n and r in Equation 29 are now treated as continuously varying vectors. The convergence of the system size expansion procedure relies on two criteria for the transition probability rate: small jumps and slow variation [9]. Mathematically, the small-jump criterion implies that there is a small δ such that
$$W(\mathbf{n};\,\mathbf{n}+\mathbf{r}) \approx 0 \quad \text{for } |\mathbf{r}| > \delta$$
(30)
and the slow-variation assumption means that there is a small δ such that
$$W(\mathbf{n}+\Delta\mathbf{n};\,\mathbf{r}) \approx W(\mathbf{n};\,\mathbf{r}) \quad \text{for } |\Delta\mathbf{n}| < \delta$$
(31)
To satisfy these criteria, the unit jumps associated with the mutually exclusive events in the formulation of the master equation are replaced by jumps of size $\Omega^{-1}$, where Ω is the system size, or largeness parameter. Thus, the random vector $\mathbf{N}(t) = (N_1(t), N_2(t))$ is replaced by $\tilde{\mathbf{N}}(t) = \mathbf{N}(t)/\Omega$, and time is rescaled as $\tilde{t} = t/\Omega$. The resultant master equation, corresponding to Equation 29, becomes
$$\frac{\partial P(\tilde{\mathbf{n}},\tilde{t}\,)}{\partial \tilde{t}} = \int \left[ \tilde{W}(\tilde{\mathbf{n}}-\mathbf{r};\tilde{\mathbf{n}})\,P(\tilde{\mathbf{n}}-\mathbf{r},\tilde{t}\,) - \tilde{W}(\tilde{\mathbf{n}};\tilde{\mathbf{n}}+\mathbf{r})\,P(\tilde{\mathbf{n}},\tilde{t}\,) \right] d\mathbf{r}$$
(32)
Comparing Equations 10 and 29 shows that the transition probability per unit time for the current problem can be stated in the following form:
$$\tilde{W}(\tilde{\mathbf{n}};\tilde{\mathbf{n}}+\mathbf{r}) = \Omega\,W(\mathbf{n};\mathbf{n}+\mathbf{r}\Omega) = \Omega\left[ \frac{\alpha_1}{1+K_a\tilde{n}_2^m}\,\delta_{r_1,\,1/\Omega}\,\delta(r_2) + \alpha_2\tilde{n}_1\,\delta_{r_1,\,-1/\Omega}\,\delta(r_2) + \frac{\alpha_3}{1+K_b\tilde{n}_1^M}\,\delta(r_1)\,\delta_{r_2,\,1/\Omega} + \alpha_4\tilde{n}_2\,\delta(r_1)\,\delta_{r_2,\,-1/\Omega} \right]$$
(33)

where δ(·) and δ_{i,j} denote the Dirac and Kronecker delta functions, respectively. The four terms on the right-hand side of Equation 33 are obtained from the definitions of the transition intensity functions.

Kurtz’s limit theorems state that, as Ω → ∞ with an error of O(lnΩ/Ω), the statistical properties of the master equation, Equation 32, can be approximated by the following Fokker-Planck equation:
$$\frac{\partial P(\tilde{\mathbf{n}},\tilde{t}\,)}{\partial \tilde{t}} = -\sum_i \frac{\partial}{\partial \tilde{n}_i}\left[ K_i^{(1)}(\tilde{\mathbf{n}})\,P(\tilde{\mathbf{n}},\tilde{t}\,) \right] + \frac{1}{2}\sum_{i,j} \frac{\partial^2}{\partial \tilde{n}_i\,\partial \tilde{n}_j}\left[ K_{ij}^{(2)}(\tilde{\mathbf{n}})\,P(\tilde{\mathbf{n}},\tilde{t}\,) \right],$$
(34)
where the deterministic drift coefficients, K_i^(1)(ñ), and the diffusion coefficients, K_ij^(2)(ñ), are
$$K_i^{(1)}(\tilde{\mathbf{n}}) = \int r_i\,\tilde{W}(\tilde{\mathbf{n}};\tilde{\mathbf{n}}+\mathbf{r})\,d\mathbf{r}$$
(35)
$$K_{ij}^{(2)}(\tilde{\mathbf{n}}) = \int r_i r_j\,\tilde{W}(\tilde{\mathbf{n}};\tilde{\mathbf{n}}+\mathbf{r})\,d\mathbf{r}$$
(36)
Substituting Equation 33 into Equation 35 yields the first moments of W ˜ :
$$K_1^{(1)}(\tilde{\mathbf{n}}) = \Omega\left[ \frac{\alpha_1}{1+K_a\tilde{n}_2^m} - \alpha_2\tilde{n}_1 \right] = \frac{\alpha_1}{1+K_a n_2^m} - \alpha_2 n_1$$
(37)
$$K_2^{(1)}(\tilde{\mathbf{n}}) = \Omega\left[ \frac{\alpha_3}{1+K_b\tilde{n}_1^M} - \alpha_4\tilde{n}_2 \right] = \frac{\alpha_3}{1+K_b n_1^M} - \alpha_4 n_2$$
(38)
Similarly, substituting Equation 33 into Equation 36 yields the following second moments of W ˜ :
$$K_{11}^{(2)}(\tilde{\mathbf{n}}) = \Omega\left[ \frac{\alpha_1}{1+K_a\tilde{n}_2^m} + \alpha_2\tilde{n}_1 \right] = \frac{\alpha_1}{1+K_a n_2^m} + \alpha_2 n_1$$
(39)
$$K_{22}^{(2)}(\tilde{\mathbf{n}}) = \Omega\left[ \frac{\alpha_3}{1+K_b\tilde{n}_1^M} + \alpha_4\tilde{n}_2 \right] = \frac{\alpha_3}{1+K_b n_1^M} + \alpha_4 n_2$$
(40)
$$K_{12}^{(2)}(\tilde{\mathbf{n}}) = 0$$
(41)
$$K_{21}^{(2)}(\tilde{\mathbf{n}}) = 0$$
(42)
The approximation of the master equation, Equation 12, can be found based on the fact that the Fokker-Planck equation, Equation 34, can be obtained by integrating the following nonlinear Langevin equation in Ito’s interpretation [9]:
$$\frac{d\tilde{n}_i}{d\tilde{t}} = K_i^{(1)}(\tilde{\mathbf{n}}) + \eta_i(\tilde{\mathbf{n}},\tilde{t}\,) = K_i^{(1)}(\tilde{\mathbf{n}}) + C_i(\tilde{\mathbf{n}})\,L(\tilde{t}\,),$$
(43)
where the first term on the right-hand side of the above equation represents the deterministic, or macroscopic, characteristic of the process, and η_i(ñ, t̃) denotes a Gaussian white noise with the following means and covariance matrix:
$$E\!\left[\eta_i(\tilde{\mathbf{n}},\tilde{t}\,)\right] = 0$$
(44)
$$E\!\left[\eta_i(\tilde{\mathbf{n}},\tilde{t}\,)\,\eta_j(\tilde{\mathbf{n}},\tilde{t}')\right] = K_{ij}^{(2)}(\tilde{\mathbf{n}})\,\delta(\tilde{t}-\tilde{t}')$$
(45)
L(t̃) denotes a Gaussian white noise with a unit strength, and C_i(ñ) denotes the effects of the interactions of the noise and the system on the random variable. The discontinuity of Gaussian white noise has given rise to several algorithms for interpreting C_i(ñ) during the process, and thus for converting a Langevin equation to its Fokker-Planck counterpart. In Ito’s algorithm, the value of C_i(ñ) just before the arrival of the white noise is used in averaging. In Stratonovich’s algorithm, the value of C_i(ñ) averaged over the duration of the noise is used, which yields an extra term in the macroscopic part of the Fokker-Planck equation. Since L(t̃) is never infinitely sharp and lasts a finite time, the Ito and Stratonovich calculi are more appropriate for modeling internal and external noises, respectively [9].

With this Langevin representation in hand, the equations derived in the last section, i.e., Equations 17, 18, 22, and 23, can be readily recovered. Specifically, substituting Equations 37 and 38 into Equation 43 and ignoring the noise term yields Equations 17 and 18. Since the drift coefficient matrix in a Fokker-Planck equation, matrix A in Equation 21, is the Jacobian matrix of the functions on the right-hand side of Equations 17 and 18 [9], Equation 22 can be obtained by taking derivatives. Finally, it is obvious that the elements of the covariance matrix, Equations 39 through 42, are identical to those shown in Equation 23.
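The Langevin representation of Equation 43 also lends itself to direct numerical experiments. The following is a minimal Python sketch of an Euler-Maruyama integration in Ito’s sense; the rate constants (α1 = α3 = 15.6, α2 = α4 = 1, Ka = Kb = 1, m = M = 2, chosen so that α1″ = α2″ = 15.6, i.e., the bistable Case A discussed later) are illustrative assumptions, not values from the original experiments, and clamping at zero is an added numerical safeguard.

```python
import math
import random

# Illustrative (assumed) rate constants: alpha1 = alpha3 = 15.6,
# alpha2 = alpha4 = 1, Ka = Kb = 1, m = M = 2.
A1, A2, A3, A4 = 15.6, 1.0, 15.6, 1.0
KA, KB, m, M = 1.0, 1.0, 2, 2

def drift(n1, n2):
    """Deterministic drift K^(1), the forms of Equations 37 and 38."""
    return (A1 / (1.0 + KA * n2**m) - A2 * n1,
            A3 / (1.0 + KB * n1**M) - A4 * n2)

def diffusion(n1, n2):
    """Diagonal diffusion coefficients K^(2), the forms of Equations 39 and 40."""
    return (A1 / (1.0 + KA * n2**m) + A2 * n1,
            A3 / (1.0 + KB * n1**M) + A4 * n2)

def euler_maruyama(n1, n2, t_end=15.0, dt=1e-3, seed=1):
    """Ito integration of the Langevin equation, Equation 43."""
    rng = random.Random(seed)
    for _ in range(int(t_end / dt)):
        f1, f2 = drift(n1, n2)
        d1, d2 = diffusion(n1, n2)
        n1 += f1 * dt + math.sqrt(d1 * dt) * rng.gauss(0.0, 1.0)
        n2 += f2 * dt + math.sqrt(d2 * dt) * rng.gauss(0.0, 1.0)
        n1, n2 = max(n1, 0.0), max(n2, 0.0)  # populations cannot be negative
        
    return n1, n2

n1f, n2f = euler_maruyama(155.0, 154.0)
```

A single trajectory relaxes toward one of the stable steady states, with one population high and the other nearly extinguished.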

System size expansion based on Kurtz’s theorems is substantially simpler than the original procedure proposed by van Kampen [9]. This efficiency was previously utilized by Aparicio and Solari [32] and Chua et al. [33] in their studies of stochastic population dynamics of disease transmission and chemical vapor deposition, respectively.

It should be mentioned that the system size expansion method discussed in this and the last sections suffers from several limitations. A simulation based on the system-size expansion converges to the steady state within its basin of attraction, just like its deterministic counterpart, and it cannot generate noise-induced transitions, as will be discussed later in the simulation section [9]. Near the boundary of the basin of attraction (i.e., away from the steady states), the system size expansion yields noise estimates that are not comparable to those generated near the steady states [18].

Simulations

The genetic toggle switch model presented in the preceding section has been simulated by two approaches. The first approach relies on the solution of the governing equations for the first and second moments of the random variables derived from the master equations. The second approach resorts to the event-driven Monte Carlo algorithm.

Simulation based on the master equations

To effectively analyze the impact of system parameters, the equations governing the first and second moments are converted to dimensionless forms. Following Gardner’s procedure [3], we introduce the following variables, with the assumption α2 = α4:
$$u = \phi(t)\,K_b^{1/M}$$
(46)
$$v = \theta(t)\,K_a^{1/m}$$
(47)
$$\bar{t} = \alpha_2 t = \alpha_4 t$$
(48)
Substituting these three variables into Equations 17 and 18 yields the following compact set of equations:
$$\frac{du}{d\bar{t}} = \frac{\alpha_1''}{1+v^m} - u$$
(49)
$$\frac{dv}{d\bar{t}} = \frac{\alpha_2''}{1+u^M} - v$$
(50)
where
$$\alpha_1'' = \frac{\alpha_1 K_b^{1/M}}{\alpha_2}$$
(51)
$$\alpha_2'' = \frac{\alpha_3 K_a^{1/m}}{\alpha_4}$$
(52)
When the effective rates of synthesis of the two proteins are comparable, we have
$$K_a^{1/m} \approx K_b^{1/M}$$
(53)
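As a quick consistency check (a sketch, using the drift of Equation 37 as the right-hand side of Equation 17), substituting Equations 46 through 48 into Equation 17 recovers Equation 49:

```latex
\frac{du}{d\bar{t}}
  = \frac{K_b^{1/M}}{\alpha_2}\,\frac{d\phi}{dt}
  = \frac{K_b^{1/M}}{\alpha_2}\left[\frac{\alpha_1}{1+K_a\theta^m}-\alpha_2\phi\right]
  = \frac{\alpha_1 K_b^{1/M}/\alpha_2}{1+\bigl(K_a^{1/m}\theta\bigr)^{m}}-\phi K_b^{1/M}
  = \frac{\alpha_1''}{1+v^m}-u
```

The reduction of Equation 18 to Equation 50 follows in the same manner.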
Then Equations 24 through 28 can be transformed into the following compact forms
$$\frac{d}{d\bar{t}}E\!\left[Y_1\right] = -E\!\left[Y_1\right] - \frac{\alpha_1'' m\,v^{m-1}}{\left(1+v^m\right)^2}\,E\!\left[Y_2\right]$$
(54)
$$\frac{d}{d\bar{t}}E\!\left[Y_2\right] = -\frac{\alpha_2'' M\,u^{M-1}}{\left(1+u^M\right)^2}\,E\!\left[Y_1\right] - E\!\left[Y_2\right]$$
(55)
$$\frac{d}{d\bar{t}}E\!\left[Y_1^2\right] = -2E\!\left[Y_1^2\right] - \frac{2\alpha_1'' m\,v^{m-1}}{\left(1+v^m\right)^2}\,E\!\left[Y_1Y_2\right] + \frac{\alpha_1''}{1+v^m} + u$$
(56)
$$\frac{d}{d\bar{t}}E\!\left[Y_2^2\right] = -2E\!\left[Y_2^2\right] - \frac{2\alpha_2'' M\,u^{M-1}}{\left(1+u^M\right)^2}\,E\!\left[Y_1Y_2\right] + \frac{\alpha_2''}{1+u^M} + v$$
(57)
$$\frac{d}{d\bar{t}}E\!\left[Y_1Y_2\right] = -E\!\left[Y_1Y_2\right] - \frac{\alpha_1'' m\,v^{m-1}}{\left(1+v^m\right)^2}\,E\!\left[Y_2^2\right] - \frac{\alpha_2'' M\,u^{M-1}}{\left(1+u^M\right)^2}\,E\!\left[Y_1^2\right] - E\!\left[Y_1Y_2\right]$$
(58)

Equations 49, 50, and 54 through 58 can be integrated simultaneously to obtain the statistical characteristics of the dynamical process. Equations 49 and 50 yield the means of the populations while Equations 54 and 55 yield the means of the fluctuations, which are essentially zero due to the assumption of symmetric noises around the means, i.e., Equations 13 and 14. Equations 56 through 58 generate the variances and covariance of the two constituent populations. The integration was conducted in Matlab with ode45, a subroutine based on an explicit Runge-Kutta (Dormand-Prince) formula.
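For reference, this integration can be sketched in self-contained Python (the original computations used Matlab’s ode45); a fixed-step fourth-order Runge-Kutta integrator substitutes for ode45 here, with the Case A values α1″ = α2″ = 15.6 and the initial conditions u(0) = 155, v(0) = 154 taken from the text:

```python
A1PP = A2PP = 15.6  # alpha1'' = alpha2'' (Case A, bistable region)
m = M = 2

def moments(y):
    """Right-hand sides of Equations 49, 50, and 54 through 58."""
    u, v, ey1, ey2, ey11, ey22, ey12 = y
    c12 = A1PP * m * v**(m - 1) / (1.0 + v**m) ** 2  # magnitude of A12
    c21 = A2PP * M * u**(M - 1) / (1.0 + u**M) ** 2  # magnitude of A21
    return [A1PP / (1.0 + v**m) - u,                            # Eq 49
            A2PP / (1.0 + u**M) - v,                            # Eq 50
            -ey1 - c12 * ey2,                                   # Eq 54
            -c21 * ey1 - ey2,                                   # Eq 55
            -2*ey11 - 2*c12*ey12 + A1PP / (1.0 + v**m) + u,     # Eq 56
            -2*ey22 - 2*c21*ey12 + A2PP / (1.0 + u**M) + v,     # Eq 57
            -ey12 - c12*ey22 - c21*ey11 - ey12]                 # Eq 58

def rk4(f, y, t_end, h):
    """Classical fourth-order Runge-Kutta with fixed step h."""
    for _ in range(int(t_end / h)):
        k1 = f(y)
        k2 = f([a + 0.5*h*b for a, b in zip(y, k1)])
        k3 = f([a + 0.5*h*b for a, b in zip(y, k2)])
        k4 = f([a + h*b for a, b in zip(y, k3)])
        y = [a + h/6.0*(p + 2*q + 2*r + s)
             for a, p, q, r, s in zip(y, k1, k2, k3, k4)]
    return y

# deterministic start u(0) = 155, v(0) = 154 with zero initial fluctuations
u_end, v_end, *_, ey11_end, ey22_end, ey12_end = rk4(
    moments, [155.0, 154.0, 0, 0, 0, 0, 0], t_end=50.0, h=0.005)
```

At t̄ = 50 the means have settled on the high-u steady state, and the variances have relaxed to their small stationary values.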

As we will demonstrate later, some of the simulation results, including noise-induced transitions, depend on the parameter values and initial conditions, which, in turn, are closely related to the properties of the deterministic system, i.e., Equations 49 and 50. For a nonlinear system governed by Equations 49 and 50, the location of the parameters α1″ and α2″ in the bifurcation diagram and of the initial population in the phase diagram have significant effects on the evolution of the system’s state. In order to analyze the process under selected conditions, the values of the four parameters used for simulation, α1″, α2″, m, and M, are taken from published experimental results [3, 5, 34, 35] as well as from the inferences that can be drawn from the phase and bifurcation diagrams. A thorough review of the protein and mRNA reaction rates involved in the control mechanism can be found in Santillán and Mackey [36]. The values of several of these variables can also be found in other regulatory modeling literature [7, 37–39]. As shown in Figure 3, for m = M = 2 and α1″ = α2″ = 15.6, the traces of (u, v) obtained by setting the right-hand sides of Equations 49 and 50 to zero yield three intersections. Lyapunov stability analysis reveals that two of these steady states are stable, and the one in the middle is unstable, i.e., a saddle node. The bifurcation analysis, for m = M = 2, illustrates that the system has one or two stable steady states depending on the values of α1″ and α2″ (see Figure 4 and [3]).
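The three intersections and their stability can be verified numerically. The Python sketch below (an illustration, not the code used for the figures) locates the steady states of Equations 49 and 50 for α1″ = α2″ = 15.6 and m = M = 2, and classifies each one via the eigenvalues −1 ± √(c12·c21) of the dimensionless Jacobian:

```python
ALPHA = 15.6  # alpha1'' = alpha2'' (Case A)
m = M = 2

def g(u):
    """A zero of g locates a steady state: u = alpha''/(1 + v^m) with
    v = alpha''/(1 + u^M), from Equations 49 and 50 at steady state."""
    v = ALPHA / (1.0 + u**M)
    return ALPHA / (1.0 + v**m) - u

def bisect(a, b, tol=1e-10):
    """Refine a root of g bracketed in [a, b] by bisection."""
    fa = g(a)
    while b - a > tol:
        c = 0.5 * (a + b)
        if fa * g(c) <= 0.0:
            b = c
        else:
            a, fa = c, g(c)
    return 0.5 * (a + b)

# scan [0, 16] for sign changes of g
roots, step, x = [], 0.01, 0.0
while x < 16.0:
    if g(x) * g(x + step) <= 0.0:
        roots.append(bisect(x, x + step))
    x += step

def stable(u):
    """Stable iff c12*c21 < 1, i.e., both eigenvalues -1 +/- sqrt(c12*c21)
    of the Jacobian of Equations 49 and 50 are negative."""
    v = ALPHA / (1.0 + u**M)
    c12 = ALPHA * m * v**(m - 1) / (1.0 + v**m) ** 2
    c21 = ALPHA * M * u**(M - 1) / (1.0 + u**M) ** 2
    return c12 * c21 < 1.0

stability = [stable(u) for u in roots]
```

The scan recovers the two stable states and the saddle node between them, in agreement with the phase plane of Figure 3.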
Figure 3

Phase plane drawn by setting the right-hand sides of Equations 49 and 50 to zero for m = M = 2, α1″ = 15.6, and α2″ = 15.6. The diagram shows three steady states. Stability analysis reveals that two of them are stable and one unstable.

Figure 4

Bifurcation diagram determined for large values of α1″ and α2″, constructed by taking the slopes of the lines as m and 1/M with m = M = 2. The following three representative sets of parameters are chosen for the detailed simulations: Case A, in the bistable region (α1″ = 15.6 and α2″ = 15.6); Case B, on the bifurcation curve (α1″ = 15.6 and α2″ = 4.0); Case C, in the monostable region (α1″ = 15.6 and α2″ = 1.2).

As marked in Figure 4, three sets of α1″ and α2″ are sufficient to characterize the different cases of population dynamics: bistable, on the bifurcation curve, and monostable. Thus, the following three sets of parameter values are chosen in our simulations for characterizing the dynamics in the different regions:

Case A, in the bistable region: α1″ = 15.6 and α2″ = 15.6;

Case B, on the bifurcation curve: α1″ = 15.6 and α2″ = 4.0;

Case C, in the monostable region: α1″ = 15.6 and α2″ = 1.2.

We assume m = 2 and M = 2 for all the simulations presented herein.

Initial protein populations are also important to the evolution of the dynamics in several respects. It is established in nonlinear dynamics that different initial conditions can lead to different steady states, and that the evolution of the dynamics may be altered significantly by small variations of the initial conditions. In this work, we will demonstrate that noise can induce a transition of the system from one steady state to another when the populations pass through the neighborhood of an unstable steady state (or the saddle node) of a bistable system. Moreover, for very small initial populations, the numerical equations become invalid as the protein values tend to become so small that they drive the bifurcation lines beyond the domain of application. Thus, the initial populations should be chosen between the tens and the hundreds. In the present work, the initial populations are chosen as u(0) = 155 and v(0) = 154 for the three cases discussed above. To further illustrate the effects of the initial populations, a simulation is conducted with u(0) = 15, v(0) = 155, α1″ = 15.6, and α2″ = 15.6 for the bistable system, or Case A. The population trajectories do not pass through the neighborhood of the saddle node in this simulation, which therefore poses no risk of noise-induced transition.

It should be mentioned that the process of interest is characterized by the transition intensity functions, k1, k2, α2, and α4, which define the probabilities of transition of each type of population per unit time. If the fraction of the population converted per unit time is taken to represent the intensity function, its significance is equivalent to that of the deterministic rate constant of the specific rate. In other words, from the change in the population of a particular protein type i due to the conversion of type i during the time interval (t, t + Δt), we have
$$-R_i = \lim_{\Delta t \to 0} \frac{n_i - \left[\, n_i - n_i\lambda_i\,\Delta t + o(\Delta t) \right]}{\Omega\,\Delta t} = \frac{\lambda_i n_i}{\Omega},$$
(59)

where Ω stands for the system size, i.e., the total initial population; λ_i is the intensity function of transition type i; and −R_i is the population converted per unit time attributable to transitions of the type-i protein. A detailed discussion of the relationship between the deterministic rate constant and the intensity function can be found in [9].

Simulation based on the Monte Carlo procedure

Linear or nonlinear dynamic processes have been simulated either deterministically or stochastically by Monte Carlo procedures. It is worth noting that a well-developed class of Monte Carlo simulation procedures essentially shares an identical computational basis with the master-equation algorithm presented in the preceding sections. Specifically, the assumptions of the Markov property and temporal homogeneity of the random variables lead to the definitions of the transition intensity functions [33, 40, 41]. As discussed in the “Model Formulation” section, probability balances of various events on the basis of these intensity functions give rise to the master equations. In the Monte Carlo simulation, the system’s state is simulated by a step-wise, random-walk scheme based on the same intensity functions.

Process systems or phenomena can be simulated by time-driven and event-driven Monte Carlo procedures [42]. The difference between these two procedures is in the manner of updating the time clock of the evolution of the system. The time-driven procedure advances the simulation clock by a pre-specified time increment, Δt, which is sufficiently small that at most one event will occur in this interval. The probability of an event occurring is determined by the nature and magnitudes of the transition intensity functions. In contrast, the event-driven procedure updates the simulation clock by randomly generating the waiting time, τ_w, which has an exponential distribution [43, 44]; this distribution signifies that a population transition takes place completely randomly. At the end of each waiting interval, one event will occur, and the state to which the system will transfer is also determined by the nature of the transition intensity functions.

The process of interest here, i.e., the genetic toggle switch, has been simulated by the event-driven procedure, which is usually computationally faster than the time-driven procedure. The simulation starts with a given initial distribution of the population; the essential task is to obtain the probability distributions of the protein numbers at all subsequent times. To determine the system transition in each time step, two random numbers are generated for two different purposes. The first random number in (0, 1), i.e., r_1, is for estimating the waiting time during which a possible transition of the system’s state will take place. The second random number in (0, 1), i.e., r_2, is for identifying the transition type.

Waiting time

Let T_n be the random variable representing the waiting time of the population of the system of interest at state n prior to its transition due to a protein production or consumption event, and let τ_w be a realization of T_n. Moreover, let G_n(τ_w) be the probability that no transition takes place during τ_w. Thus,
$$G_{\mathbf{n}}(\tau_w) = \Pr\left\{ \tau_w \le T_{\mathbf{n}} \right\}.$$
(60)
This can be expressed as (see derivation in Appendix 2)
$$G_{\mathbf{n}}(\tau_w) = \exp\left[ -\left( \frac{n_1\alpha_1}{1+K_a n_2^m} + n_1\alpha_2 + \frac{n_2\alpha_3}{1+K_b n_1^M} + n_2\alpha_4 \right)\tau_w \right]$$
(61)
The complement of G_n(τ_w),
$$H_{\mathbf{n}}(\tau_w) = 1 - G_{\mathbf{n}}(\tau_w),$$
(62)
expresses the cumulative probability distribution of T_n up to τ_w. The probability density function of T_n is therefore
$$h_{\mathbf{n}}(\tau_w) \equiv \frac{dH_{\mathbf{n}}(\tau_w)}{d\tau_w}.$$
(63)
Accordingly, h_n(τ_w) has the following exponential form (see Appendix 2):
$$h_{\mathbf{n}}(\tau_w) = \left( \frac{n_1\alpha_1}{1+K_a n_2^m} + n_1\alpha_2 + \frac{n_2\alpha_3}{1+K_b n_1^M} + n_2\alpha_4 \right) \exp\left[ -\left( \frac{n_1\alpha_1}{1+K_a n_2^m} + n_1\alpha_2 + \frac{n_2\alpha_3}{1+K_b n_1^M} + n_2\alpha_4 \right)\tau_w \right]$$
(64)

Note that H_n(τ_w) is the probability distribution function of T_n.

Equation 64 indicates that, to estimate the waiting time of a protein-regulated gene expression event, τ_w, a sequence of exponentially distributed random numbers must be generated. The sequences of computer-generated random numbers, however, are usually uniformly distributed in the interval [0, 1]. This uniform distribution, therefore, needs to be transformed into the exponential distribution, which can be accomplished by defining a new random variable, denoted by U, whose realization, denoted by u, assumes the value of H_n(τ_w) at τ_w [43, 44], i.e.,
$$u = H_{\mathbf{n}}(\tau_w) = 1 - \exp\left[ -\left( \frac{n_1\alpha_1}{1+K_a n_2^m} + n_1\alpha_2 + \frac{n_2\alpha_3}{1+K_b n_1^M} + n_2\alpha_4 \right)\tau_w \right]$$
(65)
or, inversely,
$$\tau_w = -\frac{1}{\dfrac{n_1\alpha_1}{1+K_a n_2^m} + n_1\alpha_2 + \dfrac{n_2\alpha_3}{1+K_b n_1^M} + n_2\alpha_4}\,\ln(1-u)$$
(66)

It can be verified that if the waiting time, T_n, whose realization is τ_w, is exponentially distributed, then the random variable, U, whose realization is u, is uniformly distributed over the interval [0, 1]; see Appendix 3.
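The inverse transform of Equation 66 is easily exercised numerically. In the Python sketch below, the rate constants (α1 = α3 = 15.6, α2 = α4 = 1, Ka = Kb = 1, m = M = 2) are illustrative assumptions; drawing many waiting times at a fixed state should reproduce a mean equal to the reciprocal of the total transition intensity:

```python
import math
import random

# Illustrative (assumed) rate constants
A1, A2, A3, A4 = 15.6, 1.0, 15.6, 1.0
KA, KB, m, M = 1.0, 1.0, 2, 2

def total_intensity(n1, n2):
    """Sum of the four transition intensities appearing in Equation 61."""
    return (n1 * A1 / (1.0 + KA * n2**m) + n1 * A2
            + n2 * A3 / (1.0 + KB * n1**M) + n2 * A4)

def waiting_time(n1, n2, rng):
    """Equation 66: invert H_n at a uniform random number r1."""
    r1 = rng.random()
    return -math.log(1.0 - r1) / total_intensity(n1, n2)

rng = random.Random(42)
n1, n2 = 10, 10
samples = [waiting_time(n1, n2, rng) for _ in range(50_000)]
mean_tau = sum(samples) / len(samples)
# the mean of an exponential distribution with rate S is 1/S
```

The sample mean converges to 1/S as the number of draws grows, confirming the exponential form of Equation 64.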

Probabilities of four possible transitions

After residing in state n = (n1, n2) for a waiting time of τ_w, the system transfers to one of its adjacent states. During the process, the transition intensity functions governing the four possible transitions of the protein populations from state (n1, n2) to states (n1 − 1, n2), (n1 + 1, n2), (n1, n2 − 1), and (n1, n2 + 1) are α2, k1, α4, and k2, respectively. These transitions are the exact counterparts of the transitions from states (n1 + 1, n2), (n1 − 1, n2), (n1, n2 + 1), and (n1, n2 − 1) to state (n1, n2) shown in Figure 2. The four possible transitions are mutually exclusive events. Moreover, as discussed in the last section, one and only one of them takes place at the end of the waiting time determined by the random number r_1. Thus, the probability of the system transferring from (n1, n2) to (n1 − 1, n2) is
$$Q_1 = \frac{n_1\alpha_2}{\dfrac{n_1\alpha_1}{1+K_a n_2^m} + n_1\alpha_2 + \dfrac{n_2\alpha_3}{1+K_b n_1^M} + n_2\alpha_4}$$
(67)
The probability of the system transferring from state (n1, n2) to (n1 + 1, n2) is
$$Q_2 = \frac{\dfrac{n_1\alpha_1}{1+K_a n_2^m}}{\dfrac{n_1\alpha_1}{1+K_a n_2^m} + n_1\alpha_2 + \dfrac{n_2\alpha_3}{1+K_b n_1^M} + n_2\alpha_4}$$
(68)
The probability of the system transferring from state (n1, n2) to (n1, n2 − 1) is
$$Q_3 = \frac{n_2\alpha_4}{\dfrac{n_1\alpha_1}{1+K_a n_2^m} + n_1\alpha_2 + \dfrac{n_2\alpha_3}{1+K_b n_1^M} + n_2\alpha_4}$$
(69)
Similarly, the probability of the system transferring from state (n1, n2) to (n1, n2 + 1) is
$$Q_4 = \frac{\dfrac{n_2\alpha_3}{1+K_b n_1^M}}{\dfrac{n_1\alpha_1}{1+K_a n_2^m} + n_1\alpha_2 + \dfrac{n_2\alpha_3}{1+K_b n_1^M} + n_2\alpha_4}$$
(70)
Since the sum of Q1 through Q4 is 1, the transition type can be identified by the randomly generated number, r_2. Denoting the common denominator of Equations 67 through 70 by
$$D \equiv \frac{n_1\alpha_1}{1+K_a n_2^m} + n_1\alpha_2 + \frac{n_2\alpha_3}{1+K_b n_1^M} + n_2\alpha_4,$$
r_2 falling within the interval
$$\left[\,0,\; \frac{n_1\alpha_2}{D}\right)$$
(71)
implies that the population of the type-1 protein decreases by 1, see Equation 67; r_2 falling within the interval
$$\left[\,\frac{n_1\alpha_2}{D},\; \frac{1}{D}\left(\frac{n_1\alpha_1}{1+K_a n_2^m} + n_1\alpha_2\right)\right)$$
(72)
implies that the population of the type-1 protein increases by 1; r_2 falling within the interval
$$\left[\,\frac{1}{D}\left(\frac{n_1\alpha_1}{1+K_a n_2^m} + n_1\alpha_2\right),\; \frac{1}{D}\left(\frac{n_1\alpha_1}{1+K_a n_2^m} + n_1\alpha_2 + n_2\alpha_4\right)\right)$$
(73)
implies that the population of the type-2 protein decreases by 1; and r_2 falling within the interval
$$\left[\,\frac{1}{D}\left(\frac{n_1\alpha_1}{1+K_a n_2^m} + n_1\alpha_2 + n_2\alpha_4\right),\; 1\right)$$
(74)

implies that the population of the type-2 protein increases by 1.
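The interval test of Equations 71 through 74 amounts to comparing r_2 against the cumulative sums of Q1 through Q4. A Python sketch (the rate constants α1 = α3 = 15.6, α2 = α4 = 1, Ka = Kb = 1, m = M = 2 are illustrative assumptions):

```python
# Illustrative (assumed) rate constants
A1, A2, A3, A4 = 15.6, 1.0, 15.6, 1.0
KA, KB, m, M = 1.0, 1.0, 2, 2

def transition_type(n1, n2, r2):
    """Return 0, 1, 2, or 3 for the transition selected by r2 according to
    the intervals of Equations 71 through 74: type-1 decrease, type-1
    increase, type-2 decrease, and type-2 increase, respectively."""
    q = [n1 * A2,                       # numerator of Q1, Equation 67
         n1 * A1 / (1.0 + KA * n2**m),  # numerator of Q2, Equation 68
         n2 * A4,                       # numerator of Q3, Equation 69
         n2 * A3 / (1.0 + KB * n1**M)]  # numerator of Q4, Equation 70
    total = sum(q)
    cum = 0.0
    for k, qk in enumerate(q):
        cum += qk / total
        if r2 < cum:
            return k
    return 3  # guard against floating-point round-off
```

For example, at state (10, 10) the degradation events dominate the cumulative intervals, so small values of r_2 select a degradation and values near 1 select a production.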

Simulation algorithm

The event-driven Monte Carlo procedure is conducted according to Rajamani [40]. A step-wise description of the procedure is given below.
  1. Define the initial populations of the two types of proteins, and let the system size, Ω, be the sum of the two protein populations. This Ω will also be the total number of independent simulations to be conducted before taking their statistics. Start the random walk from this point.

  2. Select the total length of time of each simulation, T_f. For the current work, T_f was chosen to be either 15 or 50 s.

  3. Determine the length of the waiting time, τ_w. First, generate a random number, r_1, from a uniform distribution in [0, 1]; then, calculate τ_w for the system’s state n(t) = (n_1(t), n_2(t)) according to Equation 66.

  4. Update the computer clock by letting t = t + τ_w.

  5. Calculate the probabilities, Q_i, that the system will transfer from state n to each of the other states by Equations 67 through 70. Then, generate another random number, r_2, from a uniform distribution in [0, 1], and determine the transition type by examining in which of the intervals given by Equations 71 through 74 r_2 is located.

  6. Repeat steps 3 to 5 until the total time exceeds T_f; this terminates one replication of the simulation.

  7. Repeat steps 2 to 6 Ω times, and store the resultant number of proteins of type i during the j-th replication at time t, n_ij(t). This yields the mean number of proteins of type i at time t as
$$E\!\left[N_i(t)\right] = \frac{\sum_{j=1}^{\Omega} n_{ij}(t)}{\Omega}$$
(75)
The variance of the population of type i at time t can be calculated from its definition, i.e.,
$$\mathrm{Var}\!\left[N_i(t)\right] = \frac{\sum_{j=1}^{\Omega} \left[ n_{ij}(t) - E\!\left[N_i(t)\right] \right]^2}{\Omega - 1}$$
(76)
The covariance around the means between the two types of populations, i and j, at time t can be calculated from its definition, i.e.,
$$\mathrm{Cov}\!\left[N_i(t), N_j(t)\right] = \frac{\sum_{h=1}^{\Omega} \left[ n_{ih}(t) - E\!\left[N_i(t)\right] \right]\left[ n_{jh}(t) - E\!\left[N_j(t)\right] \right]}{\Omega - 1}$$
(77)

As mentioned at the outset of this section, both the Monte Carlo simulation and the simulation based on the master equations adopted in the current work are rooted in the identical set of transition intensity functions derived from the same set of assumptions. Thus, integrating the equations for the first and second moments of the master equations, Equations 49, 50, and 54 through 58, is expected to generate results nearly identical to the statistics of the Monte Carlo simulations, i.e., Equations 75 through 77.
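Steps 1 through 7 can be sketched compactly in Python. This is an illustration rather than the code behind the reported figures: the rate constants (α1 = α3 = 15.6, α2 = α4 = 1, Ka = Kb = 1, m = M = 2, so that α1″ = α2″ = 15.6, Case A) are assumptions, the production propensities follow the α/(1 + Kn^m) intensity form of Equation 33, and only 20 replications are run instead of Ω for brevity:

```python
import math
import random

# Illustrative (assumed) rate constants giving alpha1'' = alpha2'' = 15.6
A1, A2, A3, A4 = 15.6, 1.0, 15.6, 1.0
KA, KB, m, M = 1.0, 1.0, 2, 2

def propensities(n1, n2):
    """Intensities of the four mutually exclusive transitions; the
    production terms follow the alpha/(1 + K n^m) form of Equation 33."""
    return [n1 * A2,                   # (n1, n2) -> (n1 - 1, n2)
            A1 / (1.0 + KA * n2**m),   # (n1, n2) -> (n1 + 1, n2)
            n2 * A4,                   # (n1, n2) -> (n1, n2 - 1)
            A3 / (1.0 + KB * n1**M)]   # (n1, n2) -> (n1, n2 + 1)

JUMPS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def replicate(n1, n2, t_final, rng):
    """One replication: steps 3 through 6 of the algorithm."""
    t = 0.0
    while True:
        q = propensities(n1, n2)
        total = sum(q)
        t += -math.log(1.0 - rng.random()) / total  # waiting time, Eq 66
        if t > t_final:
            return n1, n2
        r2, cum = rng.random(), 0.0
        for k, qk in enumerate(q):                  # intervals of Eqs 71-74
            cum += qk / total
            if r2 < cum:
                break
        n1, n2 = n1 + JUMPS[k][0], n2 + JUMPS[k][1]

rng = random.Random(7)
runs = [replicate(155, 154, 15.0, rng) for _ in range(20)]
mean_n1 = sum(r[0] for r in runs) / len(runs)                        # Eq 75
var_n1 = sum((r[0] - mean_n1) ** 2 for r in runs) / (len(runs) - 1)  # Eq 76
```

With many replications, the statistics of Equations 75 through 77 converge toward the integrated moment equations; individual replications, however, can settle in different steady states, which is precisely the noise-induced transition discussed below.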

Results and discussion

The present stochastic analysis of the genetic toggle switch yields the transition probabilities of mutually exclusive events through the definitions of the transition intensity functions of protein production as well as degradation. This analysis renders it possible to formulate the nonlinear master equations of the process as well as to derive the event-driven Monte Carlo simulation. Even though the two approaches were implemented separately, they exhibit close parallels.

The stochastic algorithms developed here allow us to analyze the stochastic nature of the two-state toggle switch quantitatively. The master equations governing the numbers of the two types of protein are formulated from a stochastic population balance. The stochastic pathways of the two proteins, i.e., their means and the fluctuations around these means, have been numerically simulated independently by the algorithm derived from the master equations, as well as by an event-driven Monte Carlo algorithm. Both algorithms have given rise to essentially identical results. Moreover, these analyses render it possible to assess the possibility of noise-induced transitions.

Simulation based on the master equations

Figures 5, 6, and 7 present the temporal profiles of Cases A through C discussed earlier. The left-hand parts of these figures are expanded views of the early portions of the complete simulations shown on the right. These simulations were conducted with m = M = 2, α1″ = 15.6, and the same set of initial conditions, u(0) = 155 and v(0) = 154. These initial conditions correspond to a point below the separatrix in the phase diagram, see Figure 3. The value of α2″ varies to illustrate the characteristics of three different cases of dynamics: in the bistable region, on the bifurcation curve, and in the monostable region. The standard deviation envelopes are plotted around the macroscopic trajectories.
Figure 5

Temporal evolution of the protein populations with their standard deviation envelopes. These are obtained by the master equation with initial protein concentrations u(0) = 155, v(0) = 154, m = M = 2, α1″ = 15.6, and α2″ = 15.6 (bistable region). The left-hand part of the figure is an expanded view of the early portion of the complete simulation on the right.

Figure 6

Temporal evolution of the protein populations with their standard deviation envelopes. These are obtained by the master equation with initial protein concentrations u(0) = 155, v(0) = 154, m = M = 2, α1″ = 15.6, and α2″ = 4.0 (on the bifurcation curve). The left-hand part of the figure is an expanded view of the early portion of the complete simulation on the right.

Figure 7

Temporal evolution of the protein populations with their standard deviation envelopes. These are obtained by the master equation with initial protein concentrations u(0) = 155, v(0) = 154, m = M = 2, α1″ = 15.6, and α2″ = 1.2 (in the monostable region). The left-hand part of the figure is an expanded view of the early portion of the complete simulation on the right.

Case A, α2″ = 15.6, represents a bistable system, as marked in the bifurcation diagram in Figure 4. Figure 5 presents the simulated results of this system based on the master equations. As expected, the populations eventually reach the stable steady state #2 marked in Figure 3, since the initial conditions constitute a point below the separatrix, and our analysis of the vector field depicting the flow of the dynamics [45] suggests this outcome. The protein populations decrease rapidly and stay in the proximity of the saddle node for a while before they depart for their steady states, an observation consistent with the classical dynamics. During this period, the populations of the two proteins are very similar to each other. The fluctuations around the mean trajectories increase initially from zero and then decrease when the trajectories approach the steady states. In a stable system, the standard deviation of the number of either type-1 or type-2 proteins attains a maximum in between: the state of the system is usually well defined at the outset of the process, and the uncertainties eventually decline until they vanish upon stabilization [46]. The uncertainty in the population of the type-2 protein in Figure 5 appears to remain constant; a special computer experiment was conducted with a long simulation time to ensure that it indeed decreases over time.

The formulator is often confronted with a myriad of interacting factors related to a gene’s expression mechanisms before settling on a strategy to assess their impact. A mathematical description of this complex process usually relies on a manageable number of system variables. This lumping procedure inevitably leaves a large number of degrees of freedom unresolved and thus gives rise to fluctuations, or uncertainties, in the predictions of the populations of discrete systems [9]. The behavior of an individual protein molecule in a discrete system with such a high degree of freedom is thus difficult to predict even when the system is monitored experimentally. The parameters in the equations, e.g., the transition intensity functions of the master equation algorithm adopted here, are presumed to depend only on the major variables of the system and to be independent of the variables of secondary importance. Neglecting these secondary variables is, in essence, the source of the internal, or system, or minimal noises that should be appropriately analyzed stochastically. Thus, the internal noises caused by the discrete nature of a system are inherent in the system, and they govern the minimum scattering expected of the random variable of interest. The experimentally observed scattering should always be larger than the predicted scattering induced by internal noises because of the inevitable external noises attributable to experimental errors and the imprecision of measuring devices. This implies that it is worth cautioning ourselves not to replicate the experiments excessively in an attempt to reduce the scattering far beyond what is predicted. It is interesting to note that the fluctuations reported by Gardner et al. [3] are significantly higher than what the master equations predict. The number of cultures used in their fluorescence analysis was 40,000, and the actual number of cultures in the sample is much larger than this number. Therefore, the noise levels reported by Gardner et al. [3] certainly involve not only internal but also external noises. External noises are the fluctuations created in an otherwise deterministic system by the application of a random force, whose stochastic properties are supposed to be known [9].

The two proteins do not have the well-defined states that a deterministic model depicts when they pass the saddle node. Instead, their populations are probabilistically distributed. The two proteins have not only similar populations but also similar uncertainties in their populations. In fact, as shown in the left-hand side of Figure 5, the uncertainties in their populations are of the same order of magnitude. These characteristics imply that there is a high probability that the relative sizes of the two protein populations are switched when the system approaches the unstable steady state. Such a switch brings the populations to the region above the separatrix in the phase diagram in Figure 3, and the vector field in that region eventually leads the process to the steady state #1 marked in the same figure. Noise-induced phase transition has been examined in detail by Nicolis and Turner [47], Malek Mansour et al. [48], and Horsthemke and Lefever [49]. Nicolis and Turner have shown that the fluctuations are enhanced at a ‘critical point’ (populations closest to the unstable steady state); the variances are of the order of Ω^(−1/2), a result consistent with that derived by van Kampen from expanding the master equation by system size. Thus, systems with low populations are more susceptible to noise-induced transitions. The noise enhancement near the instability is illustrated in Figure 5. Once the system moves away from the instability, the noises decrease and noise-induced transition becomes more difficult. Internal fluctuations do not change the local stability of the system, and the position of the transition points is in no way modified by the presence of these fluctuations.

It should be mentioned that the parameter values for our simulation are carefully chosen to illustrate the possibility of noise-induced transition. Gardner et al. [3] did not observe this possibility probably because the populations of their system are very large and the difference between the two protein populations at the critical point is large, as discussed earlier.

Case B, α2″ = 4.0, represents a system on the bifurcation curve, as marked in the bifurcation diagram in Figure 4. Gardner [3] gives a good exposition of the dependence of the bifurcation diagram and the phase diagram on the parameters. There are two steady states on the phase diagram, similar to Figure 3; one is stable and the other unstable. Figure 6 presents the simulated results of this system based on the master equations. Similar to Case A, the populations eventually reach the stable steady state. The populations do not stay in the vicinity of the saddle node as long as they do in Case A. Although the two protein populations are very close to each other, and the fluctuations are of the same order as the populations during this time period, the existence of a single stable steady state guarantees the system’s final destination.

Case C, α2″ = 1.2, is a monostable system, as marked in the bifurcation diagram in Figure 4. Figure 7 presents the simulated results of this system based on the master equations. Similar to Case B, the populations eventually reach the stable steady state. Although the two protein populations exhibit maximal fluctuations during the evolution of the dynamics, these fluctuations eventually vanish.

Figure 8 presents the results from a simulation very similar to Case A. It uses the identical set of parameters of Case A with a slightly different set of initial conditions: u(0) = 14 and v(0) = 154. It is a bistable system, and the initial conditions represent a point above the separatrix in Figure 3. As expected, the process eventually reaches the steady state #1 shown in Figure 3. Unlike Case A, however, the dynamics does not pass through the proximity of the saddle node, and the fluctuations around the means do not permit easy switching between the two populations, see Figure 8.
Figure 8

Temporal evolution of the protein populations with their standard deviation envelopes. These are obtained by the master equation with initial protein concentrations u(0) = 14, v(0) = 154, m = M = 2, α1″ = 15.6, and α2″ = 15.6 (bistable region). Comparison of the results here with those in Figure 5 suggests that the initial conditions determine the ultimate steady state of a bistable system. The left-hand part of the figure is an expanded view of the early portion of the complete simulation on the right.

Simulation based on Monte Carlo procedure

Monte Carlo simulations have yielded results essentially indistinguishable from those generated from the master equations. This is expected since the algorithms based on the event-driven Monte Carlo procedure and master equations derived in the present work are rooted in identical assumptions, i.e., the Markov property and temporal homogeneity of the random variables. These assumptions lead to the definitions of transition intensity functions that are the cornerstones of the formulation of the master equations and of the Monte Carlo procedure.

The fact that the two algorithms have yielded essentially the same results implies that both indeed define the evolution of the dynamic process in a precisely equivalent way. The master-equation algorithm generates the equations governing the statistical moments of the process, which can be readily integrated over a wide range of initial conditions, whereas the Monte Carlo procedure requires far more computational time and storage space under such circumstances.

Internal noise-induced transition was clearly observed in the Monte Carlo simulations for Case A. Figure 9 shows the traces from two independent Monte Carlo simulations with parameters and initial conditions identical to those of Case A. These two independent simulations reach two different steady states, a consequence of internal noise-induced transition. It should be mentioned that the results based on the master equation, as shown in Figure 5, represent an outcome averaged over Ω independent Monte Carlo simulations (Ω = 155 + 154 = 309 for this case), which is indeed observed in our simulation experiments. As mentioned in the last section, systems of small populations are susceptible to large internal fluctuations (or uncertainties) in the evolution of their dynamics. The evolution of the protein statistics shown in Figure 5 also illustrates the large uncertainties after the populations enter the proximity of the saddle node; in fact, the uncertainty is of the same magnitude as the mean number of particles. Internal fluctuations are inherent characteristics of discrete systems that are beyond regulation by external means. The results on the right-hand side of Figure 9 show a clear transition in protein numbers in one particular Monte Carlo simulation. The transition takes place soon after the populations enter the proximity of the saddle node; it occurs because the populations of both proteins are low and, therefore, susceptible to large internal fluctuations and noise-induced transitions. Noise-induced transition has been discussed by Nicolis and Turner [47], Malek Mansour et al. [48], and Horsthemke and Lefever [49].
Figure 9

Temporal evolution of protein populations from two different simulations of the Monte Carlo procedure. Initial protein concentrations: u(0) = 155, v(0) = 154, m = M = 2, α 1 = 15.6, and α 2 = 15.6 (in the bistable region). Results show that the noise-induced transition from one steady state to another takes place when the process passes through the metastable region.
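The event-driven Monte Carlo procedure used above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code; the rate constants, scalings, and random seed below are illustrative assumptions, with the propensities taken directly from the transition intensity functions of the master equation:

```python
import math
import random

def gillespie_toggle(n1, n2, t_end, a1=15.6, a2=1.0, a3=15.6, a4=1.0,
                     Ka=1.0, Kb=1.0, m=2, M=2, seed=1):
    """Event-driven Monte Carlo run of the toggle switch.

    The four propensities mirror the transition intensity functions of
    the master equation: repressible synthesis and first-order
    degradation for each protein.
    """
    rng = random.Random(seed)
    t = 0.0
    traj = [(t, n1, n2)]
    while t < t_end:
        w = (a1 / (1.0 + Ka * n2 ** m),   # synthesis of protein 1
             a2 * n1,                      # degradation of protein 1
             a3 / (1.0 + Kb * n1 ** M),   # synthesis of protein 2
             a4 * n2)                      # degradation of protein 2
        w_tot = sum(w)
        if w_tot == 0.0:
            break                          # nothing can fire
        # exponentially distributed waiting time (Appendix 2)
        t += -math.log(1.0 - rng.random()) / w_tot
        # pick the event with probability proportional to its intensity
        r = rng.random() * w_tot
        if r < w[0]:
            n1 += 1
        elif r < w[0] + w[1]:
            n1 -= 1
        elif r < w[0] + w[1] + w[2]:
            n2 += 1
        else:
            n2 -= 1
        traj.append((t, n1, n2))
    return traj
```

Each iteration draws an exponentially distributed waiting time from the total intensity and then selects one of the four events with probability proportional to its intensity, i.e., the two random draws described in Appendix 3.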

As mentioned in the ‘Introduction’ section, the master equation and its system-size expansion suffer from a few limitations. One such limitation is that the algorithm is valid only for dynamics well within the boundary of attraction [9]. For a bistable dynamics starting in a region outside this boundary, such as Case A, the Monte Carlo simulation converges to two possible steady states, whereas the master equation algorithm converges to only one.

Some comparisons of the three algorithms are worth mentioning. The governing equations of the system-size expansion can be derived in a straightforward manner, though the detailed derivations may be cumbersome and time consuming. It requires only a minor transformation of variables for some unstable stochastic processes, such as the diffusion process, well beyond the initial transient period [9]. Unlike the Monte Carlo simulation, the derived moment equations can be repeatedly integrated for different sets of parameters and initial conditions. Consequently, the system-size expansion has been widely adopted in deriving the governing equations of stochastic processes driven by internal noises.

Kurtz’s algorithm is highly compact and convenient. The implementation of the rigorous Kurtz algorithm requires knowledge about the relations among master, Langevin, and Fokker-Planck equations. It allows direct derivation of the equations governing the moments. However, the algorithm merely describes the dynamics in the initial transient period of unstable systems for selected processes, such as the diffusion process [9].

The Monte Carlo method is easy to implement because it bypasses the derivation of moment equations. It is most efficient when the number of random variables is large and the master equation is difficult to derive. However, repeated simulations have to be carried out for different sets of parameters and initial conditions, and the required computational time and storage space are usually high.

Conclusions

The current model adopts the essential concepts of a nonlinear toggle switch model for analyzing a protein-regulated system. The master equation algorithm, along with its system-size expansion, involves the stochastic probability balance of the two types of populations. The resultant master equation yields not only the deterministic evolution of protein populations during gene expression, but also the fluctuations, or uncertainties, inherent in the prediction or measurement. Kurtz's limit theorems significantly reduce the complex and laborious exercise of the system-size expansion; in fact, they should be indispensable tools for the analysis of truly complex genetic networks.

The validity of the model is amply demonstrated by numerically calculating the evolution of population of both types and their fluctuations over time through two simulation algorithms, one based on the master equations and the other based on the event-driven Monte Carlo procedure. These two algorithms are implemented totally independently of each other but with the same set of system parameters, i.e., the transition intensity functions. Hence, it is indeed remarkable that the two algorithms have yielded essentially identical results.

Both simulation results demonstrate the possibility of noise-induced transition when the dynamics passes through the proximity of the saddle node. The transition occurs when the protein populations are low and the noise is of the same order of magnitude as the populations. This property may have practical applications in developing gene therapy, cell cycle control, and protein sensors.

Nomenclature

E, one-step operator; N 1 , random variable representing the population of repressor 1; N 2 , random variable representing the population of repressor 2; n 1 , realization of the random variable N 1 (t); n 2 , realization of the random variable N 2 (t); N, random vector, i.e., [N 1 (t), N 2 (t)]; n, realization of the random vector N(t); P n , probability that the system is at state n at time t; t, time; Y, random vector denoting the fluctuations about the macroscopic behavior; y, realization of the random vector Y; Q, the transition probability; u, Gardner's concentration of repressor 1; v, Gardner's concentration of repressor 2; K1, effective reaction rate for repressor 1 formation; K2, effective reaction rate for repressor 2 formation.

Greek letters

α1, the rate of production of repressor 1; α2, the rate of degradation of repressor 1; α3, the rate of production of repressor 2; α4, the rate of degradation of repressor 2; α1′, the effective rate of synthesis of repressor 1, independent of system size; α3′, the effective rate of synthesis of repressor 2, independent of system size; ϕ, macroscopic number of repressor 1; θ, macroscopic number of repressor 2; τ w , the waiting time; λ, transition intensity function; Ψ, joint probability distribution in terms of the random vector Y; Ω, total number of repressors, or system size; β, the multimerization constant of repressor 1; γ, the multimerization constant of repressor 2; Θ, the vector of the two mean numbers of proteins in the system-size expansion.

Subscripts

1, repressor 1; 2, repressor 2

Appendices

Appendix 1: system-size expansion

The constituent populations at any time in the genetic toggle switch system can be represented by the random vector N(t) = [N1(t), N2(t)]; their mean values can be taken as the deterministic vector Θ(t) = [ϕ(t), θ(t)], and their fluctuations as another random vector, Y(t) = [Y1(t), Y2(t)]. As stated in Equations 13 and 14 in the text:
$$N_1(t) = \Omega\,\phi(t) + \Omega^{1/2}\,Y_1(t)$$
(78)
$$N_2(t) = \Omega\,\theta(t) + \Omega^{1/2}\,Y_2(t)$$
(79)
The realizations of these expressions are given, respectively, by
$$n_1(t) = \Omega\,\phi(t) + \Omega^{1/2}\,y_1(t)$$
(80)
$$n_2(t) = \Omega\,\theta(t) + \Omega^{1/2}\,y_2(t)$$
(81)

Accordingly, the joint probability of n1 and n2, P n (t), is now transformed into that of y1 and y2, i.e., Ψ y (t).

Recall that in the context of deriving the master equation, the state or dependent variable of interest is the joint probability of the population distribution, P n (t), and the realization of random variables at time t, i.e., n1 and n2, are invariant with respect to time. Consequently, the time derivatives of Equations 80 and 81 are, respectively,
$$\frac{dy_1}{dt} = -\Omega^{1/2}\,\frac{d\phi}{dt}$$
(82)
$$\frac{dy_2}{dt} = -\Omega^{1/2}\,\frac{d\theta}{dt}$$
(83)
For the convenience of the subsequent expansion of the master equation, Eq. 11 in the text is restated below
$$\frac{dP_{n_1,n_2}(t)}{dt} = \frac{\alpha_1}{1+K_a n_2^{\,m}}\left(\mathrm{E}_{n_1}^{-1}-1\right)P_{n_1,n_2}(t) + \left(\mathrm{E}_{n_1}-1\right)\alpha_2 n_1 P_{n_1,n_2}(t) + \frac{\alpha_3}{1+K_b n_1^{\,M}}\left(\mathrm{E}_{n_2}^{-1}-1\right)P_{n_1,n_2}(t) + \left(\mathrm{E}_{n_2}-1\right)\alpha_4 n_2 P_{n_1,n_2}(t)$$
(84)
Substituting Equations 82 and 83 into the left-hand side of the above expression yields
$$\frac{dP_n}{dt} = \frac{d\Psi_y}{dt} = \frac{\partial \Psi_y}{\partial t} + \frac{\partial \Psi_y}{\partial y_1}\frac{dy_1}{dt} + \frac{\partial \Psi_y}{\partial y_2}\frac{dy_2}{dt} = \frac{\partial \Psi_y}{\partial t} - \Omega^{1/2}\frac{d\phi}{dt}\frac{\partial \Psi_y}{\partial y_1} - \Omega^{1/2}\frac{d\theta}{dt}\frac{\partial \Psi_y}{\partial y_2}$$
(85)
Without causing confusion, the subscript y of Ψ y (t) is omitted in the subsequent discussion. The step operators, $\mathrm{E}_{n_1}$ and $\mathrm{E}_{n_1}^{-1}$, convert n1 to n1 + 1 and n1 − 1, respectively. Similarly, Equation 80 suggests that $\mathrm{E}_{n_1}$ shifts y1 to y1 + Ω− 1/2. Therefore, the operations of the step operators in Equation 84 are equivalent to evaluating the target functions at shifted points through the following Taylor series expansions, i.e.,
$$\mathrm{E}_{n_1} = 1 + \Omega^{-1/2}\frac{\partial}{\partial y_1} + \frac{\Omega^{-1}}{2}\frac{\partial^2}{\partial y_1^2} + \cdots$$
(86)
$$\mathrm{E}_{n_1}^{-1} = 1 - \Omega^{-1/2}\frac{\partial}{\partial y_1} + \frac{\Omega^{-1}}{2}\frac{\partial^2}{\partial y_1^2} - \cdots$$
(87)
$$\mathrm{E}_{n_2} = 1 + \Omega^{-1/2}\frac{\partial}{\partial y_2} + \frac{\Omega^{-1}}{2}\frac{\partial^2}{\partial y_2^2} + \cdots$$
(88)
$$\mathrm{E}_{n_2}^{-1} = 1 - \Omega^{-1/2}\frac{\partial}{\partial y_2} + \frac{\Omega^{-1}}{2}\frac{\partial^2}{\partial y_2^2} - \cdots$$
(89)
Substituting Equations 85 through 89 into 84 yields
$$\frac{\partial \Psi}{\partial t} - \Omega^{1/2}\frac{d\phi}{dt}\frac{\partial \Psi}{\partial y_1} - \Omega^{1/2}\frac{d\theta}{dt}\frac{\partial \Psi}{\partial y_2} = \frac{\alpha_1}{1+K_a\left[\Omega\theta(t)+\Omega^{1/2}y_2(t)\right]^m}\left(-\Omega^{-1/2}\frac{\partial}{\partial y_1} + \frac{\Omega^{-1}}{2}\frac{\partial^2}{\partial y_1^2}\right)\Psi + \alpha_2\left(\Omega^{-1/2}\frac{\partial}{\partial y_1} + \frac{\Omega^{-1}}{2}\frac{\partial^2}{\partial y_1^2}\right)\left[\Omega\phi(t)+\Omega^{1/2}y_1(t)\right]\Psi + \frac{\alpha_3}{1+K_b\left[\Omega\phi(t)+\Omega^{1/2}y_1(t)\right]^M}\left(-\Omega^{-1/2}\frac{\partial}{\partial y_2} + \frac{\Omega^{-1}}{2}\frac{\partial^2}{\partial y_2^2}\right)\Psi + \alpha_4\left(\Omega^{-1/2}\frac{\partial}{\partial y_2} + \frac{\Omega^{-1}}{2}\frac{\partial^2}{\partial y_2^2}\right)\left[\Omega\theta(t)+\Omega^{1/2}y_2(t)\right]\Psi$$
(90)
To collect the terms of the same power of Ω in the subsequent expansion, the Ω dependence of the parameters in the above equation has to be examined, and the parameters converted to their Ω-independent counterparts. The definitions of α1 and α3 in the ‘Model formulation’ section suggest that they are proportional to the system size, Ω, i.e.,
$$\alpha_1 = \alpha_1'\,\Omega, \qquad \alpha_3 = \alpha_3'\,\Omega,$$
(91)
where α1′ and α3′ are independent of the system size, Ω. Moreover, the definitions of the equilibrium constants, K a and K b , for the gene repression, G + mR ⇌ GR m , given at the beginning of the ‘Model formulation’ section suggest
$$K_a = K_a'\,\Omega^{-m}, \qquad K_b = K_b'\,\Omega^{-M},$$
(92)

where K a ′ and K b ′ are independent of the system size, Ω.

Substituting Equations 91 and 92 into Equation 90 gives
$$\frac{\partial \Psi}{\partial t} - \Omega^{1/2}\frac{d\phi}{dt}\frac{\partial \Psi}{\partial y_1} - \Omega^{1/2}\frac{d\theta}{dt}\frac{\partial \Psi}{\partial y_2} = \frac{\alpha_1'\,\Omega}{1+K_a'\Omega^{-m}\left[\Omega\theta(t)+\Omega^{1/2}y_2(t)\right]^m}\left(-\Omega^{-1/2}\frac{\partial}{\partial y_1} + \frac{\Omega^{-1}}{2}\frac{\partial^2}{\partial y_1^2}\right)\Psi + \alpha_2\left(\Omega^{-1/2}\frac{\partial}{\partial y_1} + \frac{\Omega^{-1}}{2}\frac{\partial^2}{\partial y_1^2}\right)\left[\Omega\phi(t)+\Omega^{1/2}y_1(t)\right]\Psi + \frac{\alpha_3'\,\Omega}{1+K_b'\Omega^{-M}\left[\Omega\phi(t)+\Omega^{1/2}y_1(t)\right]^M}\left(-\Omega^{-1/2}\frac{\partial}{\partial y_2} + \frac{\Omega^{-1}}{2}\frac{\partial^2}{\partial y_2^2}\right)\Psi + \alpha_4\left(\Omega^{-1/2}\frac{\partial}{\partial y_2} + \frac{\Omega^{-1}}{2}\frac{\partial^2}{\partial y_2^2}\right)\left[\Omega\theta(t)+\Omega^{1/2}y_2(t)\right]\Psi$$
(93)
The first and third terms on the right-hand side of the above expression can be expanded in powers of Ω through the known power-series and binomial expansions. Specifically, for small $K_a'\Omega^{-m}\left[\Omega\theta(t)+\Omega^{1/2}y_2(t)\right]^m$, we have
$$\frac{1}{1+K_a'\Omega^{-m}\left[\Omega\theta(t)+\Omega^{1/2}y_2(t)\right]^m} = 1 - K_a'\Omega^{-m}\left[\Omega\theta(t)+\Omega^{1/2}y_2(t)\right]^m + \left\{K_a'\Omega^{-m}\left[\Omega\theta(t)+\Omega^{1/2}y_2(t)\right]^m\right\}^2 - \cdots = 1 - K_a'\Omega^{-m}\sum_{i=0}^{m}\binom{m}{i}\left[\Omega\theta(t)\right]^{m-i}\left[\Omega^{1/2}y_2(t)\right]^i + K_a'^{\,2}\Omega^{-2m}\sum_{i=0}^{2m}\binom{2m}{i}\left[\Omega\theta(t)\right]^{2m-i}\left[\Omega^{1/2}y_2(t)\right]^i - K_a'^{\,3}\Omega^{-3m}\sum_{i=0}^{3m}\binom{3m}{i}\left[\Omega\theta(t)\right]^{3m-i}\left[\Omega^{1/2}y_2(t)\right]^i + \cdots$$
(94)
where $\binom{m}{i}$ denotes a binomial coefficient. Lumping the terms of the same power of Ω in the above expansion gives
$$\frac{1}{1+K_a'\Omega^{-m}\left[\Omega\theta(t)+\Omega^{1/2}y_2(t)\right]^m} = 1 - K_a'\Omega^{-m}\left[\Omega^m\theta^m + m\,\Omega^{m-1}\theta^{m-1}y_2\,\Omega^{1/2} + \cdots\right] + K_a'^{\,2}\Omega^{-2m}\left[\Omega^{2m}\theta^{2m} + 2m\,\Omega^{2m-1}\theta^{2m-1}y_2\,\Omega^{1/2} + \cdots\right] - K_a'^{\,3}\Omega^{-3m}\left[\Omega^{3m}\theta^{3m} + 3m\,\Omega^{3m-1}\theta^{3m-1}y_2\,\Omega^{1/2} + \cdots\right] + \cdots = \left(1 - K_a'\theta^m + K_a'^{\,2}\theta^{2m} - K_a'^{\,3}\theta^{3m} + \cdots\right)\Omega^0 - \left(K_a'\theta^{m-1} - 2K_a'^{\,2}\theta^{2m-1} + 3K_a'^{\,3}\theta^{3m-1} - \cdots\right)m\,y_2\,\Omega^{-1/2} + \cdots = \frac{1}{1+K_a'\theta^m}\,\Omega^0 - \frac{K_a'\theta^{m-1}}{\left(1+K_a'\theta^m\right)^2}\,m\,y_2\,\Omega^{-1/2} + \cdots$$
(95)
Substituting the above expression into the first term on the right-hand side of Equation 93 yields
$$\frac{\alpha_1'\,\Omega}{1+K_a'\Omega^{-m}\left[\Omega\theta(t)+\Omega^{1/2}y_2(t)\right]^m}\left(-\Omega^{-1/2}\frac{\partial}{\partial y_1} + \frac{\Omega^{-1}}{2}\frac{\partial^2}{\partial y_1^2}\right)\Psi = \alpha_1'\,\Omega\left[\frac{1}{1+K_a'\theta^m} - \frac{K_a'\theta^{m-1}m\,y_2}{\left(1+K_a'\theta^m\right)^2}\,\Omega^{-1/2} + \cdots\right]\left(-\Omega^{-1/2}\frac{\partial \Psi}{\partial y_1} + \frac{\Omega^{-1}}{2}\frac{\partial^2 \Psi}{\partial y_1^2} - \cdots\right) = -\frac{\alpha_1'}{1+K_a'\theta^m}\frac{\partial \Psi}{\partial y_1}\,\Omega^{1/2} + \left[\frac{\alpha_1'K_a'\theta^{m-1}m\,y_2}{\left(1+K_a'\theta^m\right)^2}\frac{\partial \Psi}{\partial y_1} + \frac{\alpha_1'}{2}\,\frac{1}{1+K_a'\theta^m}\frac{\partial^2 \Psi}{\partial y_1^2}\right]\Omega^0 + \cdots$$
(96)
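The leading terms of the series expansion in Equations 94 through 96 can be checked numerically. The Python sketch below uses arbitrary illustrative values of K a ′, θ, and y 2 (not values from the paper) and confirms that the truncated series approaches the exact repression factor with an error of higher order in Ω− 1/2:

```python
def repression_exact(Omega, theta, y2, Kap=0.8, m=2):
    """Exact repression factor 1 / (1 + Ka' Omega^-m (Omega theta + Omega^(1/2) y2)^m)."""
    return 1.0 / (1.0 + Kap * Omega ** (-m) * (Omega * theta + Omega ** 0.5 * y2) ** m)

def repression_expansion(Omega, theta, y2, Kap=0.8, m=2):
    """Two-term expansion of the repression factor, Eq. 95:
    leading Omega^0 term minus the Omega^(-1/2) correction."""
    lead = 1.0 / (1.0 + Kap * theta ** m)
    corr = Kap * theta ** (m - 1) * m * y2 / (1.0 + Kap * theta ** m) ** 2
    return lead - corr * Omega ** (-0.5)
```

For a large system size the two-term expansion is far closer to the exact factor than the leading term alone, as the Ω0 and Ω− 1/2 ordering in Equation 95 predicts.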
The expansion of the second term on the right-hand side of Equation 93 gives
$$\alpha_2\left(\Omega^{-1/2}\frac{\partial}{\partial y_1} + \frac{\Omega^{-1}}{2}\frac{\partial^2}{\partial y_1^2}\right)\left[\Omega\phi(t)+\Omega^{1/2}y_1(t)\right]\Psi = \alpha_2\,\phi\,\frac{\partial \Psi}{\partial y_1}\,\Omega^{1/2} + \alpha_2\left[\Psi + \frac{\phi}{2}\frac{\partial^2 \Psi}{\partial y_1^2} + y_1\frac{\partial \Psi}{\partial y_1}\right]\Omega^0 + \cdots$$
(97)
Following the same procedure, the third and fourth terms on the right-hand side of Equation 93 can be expanded into the following power series in Ω1/2, respectively:
$$\frac{\alpha_3'\,\Omega}{1+K_b'\Omega^{-M}\left[\Omega\phi(t)+\Omega^{1/2}y_1(t)\right]^M}\left(-\Omega^{-1/2}\frac{\partial}{\partial y_2} + \frac{\Omega^{-1}}{2}\frac{\partial^2}{\partial y_2^2}\right)\Psi = -\frac{\alpha_3'}{1+K_b'\phi^M}\frac{\partial \Psi}{\partial y_2}\,\Omega^{1/2} + \left[\frac{\alpha_3'K_b'\phi^{M-1}M\,y_1}{\left(1+K_b'\phi^M\right)^2}\frac{\partial \Psi}{\partial y_2} + \frac{\alpha_3'}{2}\,\frac{1}{1+K_b'\phi^M}\frac{\partial^2 \Psi}{\partial y_2^2}\right]\Omega^0 + \cdots$$
(98)
and
$$\alpha_4\left(\Omega^{-1/2}\frac{\partial}{\partial y_2} + \frac{\Omega^{-1}}{2}\frac{\partial^2}{\partial y_2^2}\right)\left[\Omega\theta(t)+\Omega^{1/2}y_2(t)\right]\Psi = \alpha_4\,\theta\,\frac{\partial \Psi}{\partial y_2}\,\Omega^{1/2} + \alpha_4\left[\Psi + \frac{\theta}{2}\frac{\partial^2 \Psi}{\partial y_2^2} + y_2\frac{\partial \Psi}{\partial y_2}\right]\Omega^0 + \cdots$$
(99)
Substituting Equations 96 through 99 into Equation 93 and collecting the terms of order Ω1/2 on both sides yields the macroscopic, or deterministic, equations:
$$\frac{d\phi}{dt} = \frac{\alpha_1'}{1+K_a'\,\theta(t)^m} - \alpha_2\,\phi(t)$$
(100)
$$\frac{d\theta}{dt} = \frac{\alpha_3'}{1+K_b'\,\phi(t)^M} - \alpha_4\,\theta(t)$$
(101)
These are Equations 17 and 18 in the text. Collecting the terms of order Ω0 yields
$$\frac{\partial \Psi}{\partial t} = \left[\frac{\alpha_1'}{1+K_a'\,\theta(t)^m} + \alpha_2\,\phi(t)\right]\frac{1}{2}\frac{\partial^2 \Psi}{\partial y_1^2} + \left[\frac{\alpha_3'}{1+K_b'\,\phi(t)^M} + \alpha_4\,\theta(t)\right]\frac{1}{2}\frac{\partial^2 \Psi}{\partial y_2^2} + \frac{\partial}{\partial y_1}\left[\alpha_2\,y_1\Psi + \frac{\alpha_1'K_a'\,m\,\theta(t)^{m-1}\,y_2\Psi}{\left(1+K_a'\,\theta(t)^m\right)^2}\right] + \frac{\partial}{\partial y_2}\left[\alpha_4\,y_2\Psi + \frac{\alpha_3'K_b'\,M\,\phi(t)^{M-1}\,y_1\Psi}{\left(1+K_b'\,\phi(t)^M\right)^2}\right]$$
(102)
This equation can be rearranged into the form of a linear Fokker-Planck equation [9] as follows:
$$\frac{\partial \Psi}{\partial t} = -\sum_{i,j} A_{ij}\,\frac{\partial}{\partial y_i}\left(y_j\Psi\right) + \frac{1}{2}\sum_{i,j} B_{ij}\,\frac{\partial^2 \Psi}{\partial y_i\,\partial y_j}$$
(103)
or
$$\frac{\partial \Psi}{\partial t} = -A_{11}\frac{\partial}{\partial y_1}\left(y_1\Psi\right) - A_{12}\frac{\partial}{\partial y_1}\left(y_2\Psi\right) - A_{21}\frac{\partial}{\partial y_2}\left(y_1\Psi\right) - A_{22}\frac{\partial}{\partial y_2}\left(y_2\Psi\right) + \frac{1}{2}B_{11}\frac{\partial^2 \Psi}{\partial y_1^2} + \frac{1}{2}B_{22}\frac{\partial^2 \Psi}{\partial y_2^2},$$
(104)
where
$$\mathbf{A} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \begin{bmatrix} -\alpha_2 & -\dfrac{\alpha_1'K_a'\,m\,\theta(t)^{m-1}}{\left(1+K_a'\,\theta(t)^m\right)^2} \\ -\dfrac{\alpha_3'K_b'\,M\,\phi(t)^{M-1}}{\left(1+K_b'\,\phi(t)^M\right)^2} & -\alpha_4 \end{bmatrix}$$
(105)
$$\mathbf{B} = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} \dfrac{\alpha_1'}{1+K_a'\,\theta(t)^m} + \alpha_2\,\phi(t) & 0 \\ 0 & \dfrac{\alpha_3'}{1+K_b'\,\phi(t)^M} + \alpha_4\,\theta(t) \end{bmatrix}$$
(106)

Equations 103, 104, and 105 are Equations 21, 22, and 23 in the text, respectively.
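The moment equations derived above can be integrated directly. The Python sketch below assumes the standard consequence of a linear Fokker-Planck equation such as Equation 103: the covariance matrix Ξ of the fluctuations obeys dΞ/dt = AΞ + ΞAᵀ + B, with A and B from Equations 105 and 106. The parameter values, initial point, and forward-Euler step are illustrative choices, not the paper's:

```python
import numpy as np

def lna_moments(phi0, theta0, a1p=15.6, a3p=15.6, a2=1.0, a4=1.0,
                Kap=1.0, Kbp=1.0, m=2, M=2, t_end=10.0, dt=1e-3):
    """Integrate the macroscopic means (Eqs. 100-101) together with the
    fluctuation covariance Xi driven by the linear Fokker-Planck
    equation (Eq. 103): dXi/dt = A Xi + Xi A^T + B."""
    phi, theta = phi0, theta0
    Xi = np.zeros((2, 2))
    for _ in range(int(t_end / dt)):
        # macroscopic drift (Eqs. 100-101) and its Jacobian A (Eq. 105)
        f1 = a1p / (1 + Kap * theta ** m) - a2 * phi
        f2 = a3p / (1 + Kbp * phi ** M) - a4 * theta
        A = np.array([
            [-a2, -a1p * Kap * m * theta ** (m - 1) / (1 + Kap * theta ** m) ** 2],
            [-a3p * Kbp * M * phi ** (M - 1) / (1 + Kbp * phi ** M) ** 2, -a4]])
        # diffusion matrix B (Eq. 106)
        B = np.diag([a1p / (1 + Kap * theta ** m) + a2 * phi,
                     a3p / (1 + Kbp * phi ** M) + a4 * theta])
        # forward-Euler step for means and covariance
        phi, theta = phi + dt * f1, theta + dt * f2
        Xi = Xi + dt * (A @ Xi + Xi @ A.T + B)
    return phi, theta, Xi
```

The diagonal entries of Ξ give the minimal uncertainties of the two populations; unlike a Monte Carlo run, this integration can be repeated cheaply over many parameter sets.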

Appendix 2: distribution functions of waiting time

Equation 9 in the text indicates that the probability of no transition in the small time interval, (τ w ,  τ w  + Δτ w ), is
$$G_{\mathbf n}(\Delta\tau_w) = 1 - \left[\frac{\alpha_1}{1+K_a n_2^{\,m}} + \alpha_2 n_1 + \frac{\alpha_3}{1+K_b n_1^{\,M}} + \alpha_4 n_2\right]\Delta\tau_w + o(\Delta\tau_w)$$
(107)
The Markov property implies that the succeeding time intervals, (0, τ w ) and (τ w , τ w  + Δτ w ), are independent of each other [30]; thus
$$G_{\mathbf n}(\tau_w + \Delta\tau_w) = G_{\mathbf n}(\tau_w)\,G_{\mathbf n}(\Delta\tau_w) + o(\Delta\tau_w) = G_{\mathbf n}(\tau_w)\left\{1 - \left[\frac{\alpha_1}{1+K_a n_2^{\,m}} + \alpha_2 n_1 + \frac{\alpha_3}{1+K_b n_1^{\,M}} + \alpha_4 n_2\right]\Delta\tau_w\right\} + o(\Delta\tau_w)$$
(108)
Rearranging the above equation yields
$$G_{\mathbf n}(\tau_w + \Delta\tau_w) - G_{\mathbf n}(\tau_w) = -G_{\mathbf n}(\tau_w)\left[\frac{\alpha_1}{1+K_a n_2^{\,m}} + \alpha_2 n_1 + \frac{\alpha_3}{1+K_b n_1^{\,M}} + \alpha_4 n_2\right]\Delta\tau_w + o(\Delta\tau_w)$$
(109)
Dividing the expression by Δτ w and taking the limit as Δτ w  → 0 gives
$$\frac{dG_{\mathbf n}(\tau_w)}{d\tau_w} = -G_{\mathbf n}(\tau_w)\left[\frac{\alpha_1}{1+K_a n_2^{\,m}} + \alpha_2 n_1 + \frac{\alpha_3}{1+K_b n_1^{\,M}} + \alpha_4 n_2\right]$$
(110)
By integrating this equation, with constant n, subject to the initial condition
$$G_{\mathbf n}(0) = 1$$
(111)
we obtain
$$G_{\mathbf n}(\tau_w) = \exp\left\{-\left[\frac{\alpha_1}{1+K_a n_2^{\,m}} + \alpha_2 n_1 + \frac{\alpha_3}{1+K_b n_1^{\,M}} + \alpha_4 n_2\right]\tau_w\right\}$$
(112)

Chapter 14 of the book by Karlin and Taylor [43, 44] contains a more rigorous proof of Equation 112.

From the definition of G n (τ w ), i.e., Equation 60 in the text, one can obtain a cumulative distribution of the form
$$G_{\mathbf n}(\tau_w) = \Pr\{T_{\mathbf n} > \tau_w\} = 1 - H_{\mathbf n}(\tau_w) = 1 - \int_0^{\tau_w} h_{\mathbf n}(\tau_w')\,d\tau_w'$$
(113)
Since H n (τ w ) is the cumulative probability distribution from 0 to τ w , i.e., the probability function of T n at τ w , its derivative with respect to τ w , h n (τ w ), is the probability-density function of T n . Hence,
$$-\frac{dG_{\mathbf n}(\tau_w)}{d\tau_w} = h_{\mathbf n}(\tau_w) = \frac{dH_{\mathbf n}(\tau_w)}{d\tau_w}$$
(114)
In light of Equations 112 through 114, we obtain
$$h_{\mathbf n}(\tau_w) = \left[\frac{\alpha_1}{1+K_a n_2^{\,m}} + \alpha_2 n_1 + \frac{\alpha_3}{1+K_b n_1^{\,M}} + \alpha_4 n_2\right]\exp\left\{-\left[\frac{\alpha_1}{1+K_a n_2^{\,m}} + \alpha_2 n_1 + \frac{\alpha_3}{1+K_b n_1^{\,M}} + \alpha_4 n_2\right]\tau_w\right\}$$
(115)

Equations 112 and 115 are Equations 61 and 64 in the text, respectively. The latter signifies that the probability density function of the waiting time to the next transition is exponential.
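A quick numerical check of Equations 112 and 115 can be written as follows; the populations and rate constants below are illustrative values, not the paper's, and the check verifies that h n is indeed the negative derivative of G n :

```python
import math

def total_intensity(n1, n2, a1=15.6, a2=1.0, a3=15.6, a4=1.0,
                    Ka=1.0, Kb=1.0, m=2, M=2):
    """Sum of the four transition intensities appearing in Eq. 112."""
    return (a1 / (1.0 + Ka * n2 ** m) + a2 * n1
            + a3 / (1.0 + Kb * n1 ** M) + a4 * n2)

def survival(tau, lam):
    """G_n(tau) = exp(-lam * tau), Eq. 112."""
    return math.exp(-lam * tau)

def density(tau, lam):
    """h_n(tau) = lam * exp(-lam * tau), Eq. 115."""
    return lam * math.exp(-lam * tau)
```

A finite-difference approximation of −dG n /dτ w agrees with h n to within the truncation error, as Equation 114 requires.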

Appendix 3: random number transformation

For convenience, Equation 65 in the text is reiterated below
$$u = H_{\mathbf n}(\tau_w) = 1 - \exp\left\{-\left[\frac{\alpha_1}{1+K_a n_2^{\,m}} + \alpha_2 n_1 + \frac{\alpha_3}{1+K_b n_1^{\,M}} + \alpha_4 n_2\right]\tau_w\right\}$$
(116)

Obviously, u = 0 at τ w  = 0, and u → 1 as τ w  → ∞. We are to prove that the random variable U, whose realization is u, is uniformly distributed in [0, 1].

Since u increases with τ w monotonically according to Equation 116, there is a one-to-one correspondence between u and τ w , thereby leading to the expression that can be visualized as signifying that the probabilities of the same event in two different domains are identical; this expression is
$$h_{\mathbf n}(\tau_w)\,d\tau_w = f(u)\,du$$
(117)
where h n (τ w ) and f(u) are the probability-density functions of T n and U respectively. For transforming a random variable, an equivalent form of Equation 117 is often given to account for a random variation in either positive or negative direction as follows [41]:
$$f(u) = h_{\mathbf n}(\tau_w)\left|\frac{d\tau_w}{du}\right|$$
(118)

Equations 117 and 118 signify that the probability-density function of T n , i.e., h n (τ w ), is transformed into that of U, i.e., f(u), such that the probability represented by h n (τ w )|dτ w | and that represented by f(u)du are identical.

The function h n (τ w ) derived in Appendix 2 and given in the text as Equation 64 is
$$h_{\mathbf n}(\tau_w) = \left[\frac{\alpha_1}{1+K_a n_2^{\,m}} + \alpha_2 n_1 + \frac{\alpha_3}{1+K_b n_1^{\,M}} + \alpha_4 n_2\right]\exp\left\{-\left[\frac{\alpha_1}{1+K_a n_2^{\,m}} + \alpha_2 n_1 + \frac{\alpha_3}{1+K_b n_1^{\,M}} + \alpha_4 n_2\right]\tau_w\right\}$$
(119)
Solving Equation 116 for τ w gives
$$\tau_w = -\left[\frac{\alpha_1}{1+K_a n_2^{\,m}} + \alpha_2 n_1 + \frac{\alpha_3}{1+K_b n_1^{\,M}} + \alpha_4 n_2\right]^{-1}\ln(1-u)$$
(120)
This is Equation 66 in the text. Differentiating Equation 120 with respect to u gives
$$\frac{d\tau_w}{du} = \left[\frac{\alpha_1}{1+K_a n_2^{\,m}} + \alpha_2 n_1 + \frac{\alpha_3}{1+K_b n_1^{\,M}} + \alpha_4 n_2\right]^{-1}\frac{1}{1-u}$$
(121)
Substituting Equations 119 through 121 into Equation 118 leads to
$$f(u) = \exp\left\{-\left[\frac{\alpha_1}{1+K_a n_2^{\,m}} + \alpha_2 n_1 + \frac{\alpha_3}{1+K_b n_1^{\,M}} + \alpha_4 n_2\right]\tau_w\right\}\frac{1}{1-u} = \exp\left[\ln(1-u)\right]\frac{1}{1-u} = 1$$
(122)
The value of u lies in the interval [0, 1], as mentioned at the outset; consequently, the probability-density function of U, i.e., f(u), remains 1 throughout the interval [0, 1], as expressed above. In other words, U, whose realization is u, is a random variable uniformly distributed over this interval; for convenience, the probability function, or cumulative probability distribution, of U is denoted F(u). The relationships among H(τ w ), h(τ w ), F(u), and f(u) are illustrated in Figure 10.
Figure 10

The relations among H(τ w ), h(τ w ), F(u), and f(u).
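The inverse transform of Equation 120 is easily exercised numerically. In this Python sketch the total intensity λ is an arbitrary illustrative constant; the sample mean of the generated waiting times should approach 1/λ, as expected for the exponential density of Equation 115:

```python
import math
import random

def sample_waiting_time(lam, rng):
    """Inverse-transform sampling, Eq. 120: tau_w = -ln(1 - u) / lam."""
    u = rng.random()                 # U uniform on [0, 1)
    return -math.log(1.0 - u) / lam

rng = random.Random(0)
lam = 2.5                            # illustrative total intensity
samples = [sample_waiting_time(lam, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)   # should be close to 1/lam
```

The uniformity of U, proved above, is exactly what makes a standard pseudo-random number generator usable for drawing the waiting times in the Monte Carlo procedure.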

Declarations

Acknowledgements

Professor Michael Mossing of the Department of Chemistry and Biochemistry of the University of Mississippi provided valuable advice during this study. Assad Mohammed and Oluseye Adeyemi provided valuable technical support for the completion of this work.

Authors’ Affiliations

(1)
Department of Chemical Engineering, University of Mississippi

References

  1. Ptashne MA: A genetic switch: phage [lambda] and higher organisms. Cambridge, MA: Cell Press and Blackwell Scientific Publications; 1992.
  2. Oppenheim A, Kobiler O, Stavans J, Adhya S: Switches in bacteriophage lambda development. Annu. Rev. Genet. 2005, 39: 409–429. doi:10.1146/annurev.genet.39.073003.113656
  3. Gardner TS, Cantor CR, Collins JJ: Construction of a genetic toggle switch in Escherichia coli. Nature 2000, 403: 339–342. doi:10.1038/35002131
  4. McAdams HH, Arkin A: Stochastic mechanisms in gene expression. Proc. Natl. Acad. Sci. 1997, 94: 814–819. doi:10.1073/pnas.94.3.814
  5. Arkin A, Ross J, McAdams HH: Stochastic kinetic analysis of developmental pathway bifurcation in phage λ-infected Escherichia coli cells. Genetics 1998, 149: 1633–1648.
  6. Bower JM, Bolouri H: Computational modeling of genetic and biochemical networks. Cambridge, MA: The MIT Press; 2000.
  7. Hasty J, Pradines J, Dolnik M, Collins JJ: Noise-based switches and amplifiers for gene expression. Proc. Natl. Acad. Sci. 2000, 97: 2075–2080. doi:10.1073/pnas.040411297
  8. Oppenheim I, Shuler KE, Weiss GH: Stochastic processes in chemical physics: the master equation. Cambridge, MA: The MIT Press; 1977.
  9. Van Kampen NG: Stochastic processes in physics and chemistry. 2nd edition. Amsterdam: Elsevier; 1992.
  10. Gardiner CW: Handbook of stochastic methods for physics, chemistry, and the natural sciences. 2nd edition. Berlin: Springer-Verlag; 1998.
  11. Widder S, Schicho J, Schuster P: Dynamic patterns of gene regulation I: simple two-gene systems. J. Theor. Biol. 2007, 246: 395–419. doi:10.1016/j.jtbi.2007.01.004
  12. Van Kampen NG: The expansion of the master equation. Adv. Chem. Phys. 1976, 34: 245–309.
  13. Kurtz TG: Limit theorems and diffusion approximations for density dependent Markov chains. Math. Program. Study 1976, 5: 67–78. doi:10.1007/BFb0120765
  14. Kurtz TG: Strong approximation theorems for density dependent Markov chains. Stoch. Process. Appl. 1978, 6: 223–240. doi:10.1016/0304-4149(78)90020-0
  15. Fox RF, Keizer J: Amplification of intrinsic fluctuations by chaotic dynamics in physical systems. Phys. Rev. A 1991, 43: 1709–1720. doi:10.1103/PhysRevA.43.1709
  16. Kepler TB, Elston TC: Stochasticity in transcriptional regulation: origins, consequences, and mathematical representations. Biophys. J. 2001, 81: 3116–3136. doi:10.1016/S0006-3495(01)75949-8
  17. Scott M, Ingalls B, Kærn M: Estimations of intrinsic and extrinsic noise in models of nonlinear genetic networks. Chaos 2006, 16(026107): 1–15.
  18. Tao Y, Jia Y, Dewey TG: Stochastic fluctuations in gene expression far from equilibrium: omega expansion and linear noise approximation. J. Chem. Phys. 2005, 122: 124108. doi:10.1063/1.1870874
  19. Ito Y, Uchida K: Formulas for intrinsic noise evaluation in oscillatory genetic networks. J. Theor. Biol. 2010, 267: 223–234. doi:10.1016/j.jtbi.2010.08.025
  20. Ochab-Marcinek A: Predicting the asymmetric response of a genetic switch to noise. J. Theor. Biol. 2008, 254: 37–44. doi:10.1016/j.jtbi.2008.04.032
  21. Yildirim N, Mackey M: Feedback regulation in the lactose operon: a mathematical modeling study and comparison with experimental data. Biophys. J. 2003, 84: 2841–2851. doi:10.1016/S0006-3495(03)70013-7
  22. Turcotte M, Garcia-Ojalvo J, Süel GM: A genetic timer through noise-induced stabilization of an unstable state. Proc. Natl. Acad. Sci. U. S. A. 2008, 105: 15732–15737. doi:10.1073/pnas.0806349105
  23. Schultz D, Onuchic JN, Wolynes PG: Understanding stochastic simulations of the smallest genetic networks. J. Chem. Phys. 2007, 126: 245102. doi:10.1063/1.2741544
  24. Bruggeman FJ, Blüthgen N, Westerhoff HV: Noise management by molecular networks. PLoS Comput. Biol. 2009, 5: e1000506. doi:10.1371/journal.pcbi.1000506