
Improvement of the memory function of a mutual repression network in a stochastic environment by negative autoregulation

Abstract

Background

Cellular memory is a ubiquitous function of biological systems. By generating a sustained response to a transient inductive stimulus, often due to bistability, memory is central to the robust control of many important biological processes. However, our understanding of the origins of cellular memory remains incomplete. Stochastic fluctuations that are inherent to most biological systems have been shown to hamper memory function. Yet, how stochasticity changes the behavior of genetic circuits is generally not clear from a deterministic analysis of the network alone. Here, we apply deterministic rate equations, stochastic simulations, and theoretical analyses of Fokker-Planck equations to investigate how intrinsic noise affects the memory function in a mutual repression network.

Results

We find that the addition of negative autoregulation improves the persistence of memory in a small gene regulatory network by reducing stochastic fluctuations. Our theoretical analyses reveal that this improved memory function stems from an increased stability of the steady states of the system. Moreover, we show how the tuning of critical network parameters can further enhance memory.

Conclusions

Our work illuminates the power of stochastic and theoretical approaches to understanding biological circuits, and the importance of considering stochasticity when designing synthetic circuits with memory function.

Background

Memory is ubiquitous in biological systems [1,2,3,4]. Characterized by a continued response to a transient stimulus [5], cellular memory has been found to aid the robust control of diverse biological functions such as synaptic plasticity [6], differentiation [7, 8], cell cycle transitions [9], and gene regulation [10]. Memory in cellular circuitry is tightly linked to bistability, i.e. the presence of two stable steady states [11, 12]. Specifically, memory is achieved if an input signal evokes a switch to an alternative steady state where the system remains over time, even after the input signal has disappeared [11, 12].

Our general understanding of the design principles of biological networks has improved dramatically in recent years [13]. Feedback loops, as elementary components, provide common control mechanisms of cellular networks [14]. For instance, negative feedback can facilitate adaptation and oscillation [15], while positive feedback loops, which play pivotal roles in cellular signaling [16], often promote signal amplification, bistable switches [17] and memory [11]. Indeed, a common circuit architecture that is known to give rise to memory is based on interlinked positive feedback loops [18, 19]. Several experimental and theoretical analyses of such network architectures that can give rise to memory have been reported [7, 12, 20,21,22,23,24,25]. These studies have revealed many general properties underlying successful memory, for instance that ultrasensitivity may be sufficient to generate two stable states [26], and that the transition periods between the bistable states constitute an important characteristic of a cellular memory module [19]. In all cases, bistability and memory can convert a transient input signal into a continued cellular response [27, 28]. This property is directly visible from the characteristic hysteresis behavior observed for bistable memory systems. Experimental examples confirm these findings, such as the presence of hysteresis and bistability in a synthetic genetic circuit in Escherichia coli [18]. Based on these general principles that link network architecture to systems behavior, the blossoming field of synthetic biology has contributed many useful designs of genetic circuits based on a quantitative understanding of biological networks [29,30,31], with myriad implications for biotechnology, biocomputing and gene therapy [13, 26, 32]. However, a robust memory function can be implemented via many alternative mechanisms, and the principles guiding their rational design remain incompletely understood [21, 29].

Importantly, the vast majority of genes in a cell are normally expressed at only very low levels. Due to the inherent randomness of transcription and translation, this gives rise to substantial stochastic fluctuations in the cellular mRNA and protein copy numbers of lowly expressed genes [33,34,35,36,37,38,39,40]. Accordingly, the analysis and design of biological circuits cannot be based solely on deterministic properties of the network topology. In response, much progress has been made in understanding the contribution of stochasticity to network function [41,42,43,44,45]. For instance, careful work has delineated the sources of intrinsic noise in small transcriptional networks [46] and how noise may propagate through network architectures [39, 41], as well as how feedback loops regulate bistability [42, 47] and intrinsic noise [48]. Similarly, extensive work has established general principles underlying the design of genetic circuits that exhibit bistability and memory [11, 42,43,44,45].

Recent examples of opposing behavior further highlight the importance of considering the contribution of stochasticity to cellular circuitry. On the one hand, it has been shown that noise can induce multimodality and even stochastic memory in a system that, according to a deterministic description, lacks bistability [49, 50]. On the other hand, stochastic fluctuations in gene expression levels often reduce or disrupt the memory function of biological networks [21, 51, 52]. Equally importantly, seminal technical and conceptual advances have enabled efficient and accurate approximation of multivariate nonlinear stochastic systems [53, 54].

Here, we investigate how a negative autoregulation network architecture can improve the sustained memory function of a mutual repression network in a stochastic environment. Our work extends the previously published regulated mutual repression network (MRN) [12] to a regulated mutual repression network with negative autoregulation (MRN-NA) to investigate the effect of negative autoregulation loops on memory function. The network architecture is characterized by a mutual repression cycle that can give rise to bistability and memory, adopted from two well-characterized mutual repression networks: the system consisting of LacI and TetR in E. coli that displays a bistable gene expression memory module [24], and the mutual repression of the two repressors cI and Cro that yields a bistable memory module in bacteriophage λ [21].

To explore the memory behavior of the MRN-NA model and how it compares to the MRN model, deterministic rate equations, stochastic simulations and theoretical analyses of Fokker-Planck equations were employed to identify principles of the robustness of the memory function. We demonstrate how negative autoregulation can reduce intrinsic noise and thus improve the memory function by increasing the stability of the steady states. Negative autoregulation can, for instance, be achieved by the ability of proteins to bind and sequester their own mRNA. Systematic analyses of successful memory as a function of central model parameters that describe the mutual repression cycle highlight principles and limits of the memory function in these mutual repression networks. Moreover, our work emphasizes the importance of considering stochasticity when designing synthetic circuits with memory function.

Results

To investigate the effect of negative autoregulation and stochasticity on the memory function of the network, we constructed a model network following a previously established graphical notation [55, 56] (Fig. 1). Specifically, we modelled a mutual repression network with negative autoregulation (MRN-NA) that extends our previously published mutual repression network (MRN) [12]. The network consists of proteins y(1), y(2) and y(3). An input signal S induces the synthesis of y(1), which activates and represses the syntheses of y(2) and y(3), respectively. The syntheses of y(2) and y(3) are mutually repressed with cooperativity. Moreover, negative autoregulation controls the synthesis reactions of y(2) and y(3) (Fig. 1). The addition of negative autoregulation allows us to investigate how negative feedback loops may affect memory. While modulation of several parameter values should tune the presence of memory, the present analysis focuses on a direct back-to-back comparison of two network architectures that differ only in the addition of the negative autoregulation circuits. To compare the memory regions of the MRN-NA and MRN networks across our deterministic, stochastic and theoretical analyses, the values of the corresponding kinetic parameters as well as the steady state levels of y(2) and y(3) were conserved as much as possible in discrete parameter space [12] (Table 1; Additional file 1: Texts S1, S2) (see Methods).

Fig. 1

Schematic of the mutual repression network model with negative autoregulation. Wiring diagram of the mutual repression network with negative autoregulation (MRN-NA) model. The added negative autoregulation reactions that regulate the protein synthesis of y(2) and y(3) are highlighted in red. All reaction rate constants k and dissociation constants K are listed in Tables 1 and 2. In this work, the MRN-NA is compared to the previously published MRN [12] that is identical with the exception of the negative autoregulation loops. In addition, a simplified wiring diagram is shown

Table 1 List of kinetic parameters

MRN-NA model exhibits robust memory

Similar to our previously reported results on the memory function of the MRN [12], the MRN-NA network exhibits robust memory functionality that depends on strong cooperativity in the mutual repression cycle, as expressed by high Hill coefficients n. As expected, the deterministic solution to the time evolution of the MRN-NA model showed persistent memory after the end of the period during which the signal S was applied, for both Hill coefficients n = 7 and n = 8 (Fig. 2a). Strong hysteresis behavior was observed for both y(2) and y(3) levels as a function of increasing and decreasing S (Fig. 2b, c; Additional file 1: Figure S1). In the deterministic case, bistability is sufficient to generate memory. Stochastic simulations of the MRN-NA model, as described by the birth and death processes derived from Eqs. 1–3, revealed strong and persistent memory for both Hill coefficients n = 7 (Fig. 2d) and n = 8 (Fig. 2e). Importantly, the MRN alone could not sustain memory for either y(2) or y(3) at n = 7 in a stochastic context [12], whereas the MRN-NA yielded strong and sustained memory due to the addition of negative feedback loops (Fig. 2d; Additional file 1: Figures S2, S3).

Fig. 2

Deterministic and stochastic simulations of the MRN-NA model. a Deterministic simulation of the time evolution of the concentrations of proteins y(1), y(2) and y(3) for different Hill coefficients. The input signal S is applied from simulation steps 250 to 500. The dissociation constants are set to K(2) = K(4) = 43; all other corresponding parameter values are the same as in the previously published MRN [12] (Table 2). The simulated time evolution is shown for y(2) and y(3) at n = 7 (red and magenta lines) and at n = 8 (black and cyan lines). b Hysteresis curves of y(2) at Hill coefficients n = 7 and n = 8, consistent with the numerical integration of the rate equations. c Hysteresis curves of y(3) at Hill coefficients n = 7 and n = 8, consistent with the numerical integration of the rate equations. d Trajectories of the stochastic simulation of y(1), y(2) and y(3) at Hill coefficient n = 7. e Trajectories of the stochastic simulation of y(1), y(2) and y(3) at Hill coefficient n = 8. f, g Coefficients of variation (CVs) of the simulated stochastic trajectories of y(2) and y(3) in the MRN and MRN-NA models as a function of the dissociation constants K(2) = K(4) at Hill coefficient n = 8, f during the signal period from simulation step 270 to 500, and g during the period from simulation steps 500 to 750 after the end of the signal. For better comparability between the two models, the parameters associated with the negative autoregulation reactions were tuned to conserve the high steady state levels between both models. Shown are the CVs of y(2) (MRN: red line; MRN-NA: black line) and y(3) (MRN: magenta line; MRN-NA: cyan line)

To further understand how the addition of negative autoregulation increases the robustness of the memory function, we sought to quantify and compare the intrinsic noise in the protein levels of y(2) and y(3) in the MRN and MRN-NA models. It is well established that noise or a stochastic perturbation can flip gene expression levels from one steady state to the other [40, 46]. To obtain a reliable and comparable estimate, we computed the coefficients of variation (CVs) of y(2) and y(3) during the active signal S period; the interval from simulation step 270 to 500 was chosen to allow the system 20 simulation steps to respond to the stimulus. Our analysis of noise in the protein levels y(2) and y(3) as a function of the dissociation constants K(2) = K(4) confirmed a systematic reduction of stochasticity in y(2) and y(3) levels upon the addition of negative autoregulation (Fig. 2f). While the same steady state levels were maintained, noise was dramatically reduced for y(3) (Fig. 2f). Of note, the lower steady state of y(3) is more susceptible to noise. Similarly, intrinsic fluctuations in y(2) during the signal period were reduced in the MRN-NA compared to the MRN model in our stochastic simulations, irrespective of the dissociation constants tested (Fig. 2f). Even more important to the stability of the steady states are the intrinsic fluctuations after the end of the signal period. The levels of intrinsic noise are overall slightly higher in this period, likely because the signal no longer stabilizes gene expression levels. However, the corresponding analysis reveals that, as expected, negative autoregulation also reduces variability in the system after the end of the signal (Fig. 2g; Additional file 1: Figure S4). Taken together, our results suggest that negative autoregulation plays a vital role in increasing the persistence of stochastic memory by reducing intrinsic noise.

Negative autoregulation enhances the memory region

Our analyses show that the MRN-NA model can exhibit memory, which however critically depends on persistent mutual repression with strong cooperativity. We next set out to systematically map its memory region as a function of the Hill coefficient and dissociation constants K(2) = K(4), in comparison to the previously reported MRN [12] (Fig. 3). Successful memory was defined as y(2) and y(3) remaining at or near their signal-period levels from the end of the input signal S at simulation step 500 until the end of the simulation at step 1000. Moreover, for direct comparison, the kinetic parameter k(7) was adjusted to conserve the steady-state levels between the two models [12] (Additional file 1: Text S1). Indeed, the high steady-state levels of both models were set to the same expression values, while the low steady-state levels were tuned to be as similar as possible. Memory was assessed in both the deterministic and stochastic context (see Methods). Accordingly, two memory regions were defined: deterministic memory and stochastic memory (Fig. 3).

Fig. 3

Phase diagram of the memory regions for the MRN and the MRN-NA models. Comparison of the memory regions of the MRN and MRN-NA models. The deterministic (solid line) and stochastic (dotted line) memory regions of the MRN are shown together with the deterministic (green area) and stochastic (red area) memory regions of the MRN-NA as a function of the Hill coefficient and dissociation constants

In both the MRN and MRN-NA models, deterministic memory could be observed for Hill coefficients of n = 4 or greater (Fig. 3). In contrast, stochastic memory not only required higher Hill coefficients, but also posed different limits on the MRN and MRN-NA models. Without negative autoregulation that suppresses noise, stochastic memory in the MRN model required a Hill coefficient of n = 8 or greater (Fig. 3). In the presence of negative autoregulation that reduces noise in protein levels, stochastic memory in the MRN-NA model could be observed from n = 7 (Fig. 3). Thus, the negative autoregulation relaxes the need for strong cooperativity to generate stochastic memory. Furthermore, the stochastic memory region for the MRN-NA model was found to be larger than for the MRN model (Fig. 3, red area), indicating that negative autoregulation of the MRN-NA enhances memory function by reducing intrinsic noise (Fig. 2f, g). While the deterministic memory regions in both networks are essentially the same (Fig. 3, green area), it was clearly more difficult to maintain a memory effect under noise in the MRN than in the MRN-NA (Fig. 3, red area). The large discrepancy between the deterministic and stochastic memory areas indicates the general challenge of maintaining memory function in the context of stochasticity.

Stochastic potential of the MRN-NA

A prerequisite for memory is normally the presence of bistability in the system. Thus, a limited bistable regime could explain why such high levels of cooperativity in the mutual repression between y(2) and y(3) were required to yield memory. To map out the bistable regimes of the MRN-NA in comparison to the MRN [12], theoretical analyses based on the chemical master equation were employed to support the numerical simulation results.

First, the corresponding Fokker-Planck equations (Eqs. 9–13) describing the time evolution of the probability densities of the y(2) and y(3) levels were derived from the three rate equations (Eqs. 1–3) (see Methods). We made use of the quasi-steady-state assumption to derive a one-variable equation for each system [12] (Additional file 1: Text S2). While it has inherent limitations [51], the quasi-steady-state assumption is widely used [46]. Since we focus on the steady state at t → ∞, applying the quasi-steady-state assumption to y(2) or y(3) is mathematically reasonable. Using the Fokker-Planck equations, we next estimated the two-dimensional stochastic bistable regions, characterized by the presence of a double-well potential, for the MRN and MRN-NA models as a function of the Hill coefficient and dissociation constants [12] (Fig. 4; Additional file 1: Figure S5; Texts S1–3).

Fig. 4

Phase diagram of the stochastic bistable regions. Phase diagrams of the stochastic bistable regions as a function of Hill coefficient and dissociation constant in the mutual repression cycle, a for y(2) in the MRN (dotted line area) and MRN-NA (red area), and b for y(3) in the MRN (dotted line area) and MRN-NA (cyan area) models

In these theoretical analyses, both the MRN and the MRN-NA models permitted bistability starting from a Hill coefficient of n = 3 for both y(2) and y(3) (Fig. 4). However, the bistable regions of the MRN-NA were larger than those of the MRN model (Fig. 4). To display bistability in a stochastic context, the MRN model required dissociation constants K(2) = K(4) within a more constrained parameter space (Fig. 4). These analyses established that the MRN-NA more readily exhibits stochastic bistability than the MRN. Conversely, the limited bistable regime of the MRN in a stochastic context (Fig. 4) helps explain why this network had a smaller stochastic memory region (Fig. 3).

Mean first-passage time of the MRN-NA model

Having established that the presence of memory in the MRN-NA model depended on both bistability and the robustness of the steady states to stochasticity, we next sought to further explore the origins and limits of memory functionality in the context of fluctuating protein levels. The stability of a steady state of a stochastic system can be estimated by the mean first-passage time (MFPT). The MFPT quantifies the average number of simulation steps it takes for a system to leave a favorable steady state due to random events. As such, the MFPT provides a useful description of the time scale on which a phase transition is likely to happen [57,58,59]. Thus, because the presence of a stochastic bistable region only indicates the bimodality of gene expression as a prerequisite for memory, but not the presence of memory itself, an MFPT analysis can be helpful for identifying the precise conditions under which memory can be attained in a stochastic context.

Here, MFPT analyses were performed between the two stable steady states under the quasi-steady-state assumption [12] (Additional file 1: Figure S5) (see Methods). Specifically, we calculated the MFPTs of y(2) and y(3) for the MRN-NA model in comparison to the MRN [12] (Fig. 5); the lower and upper steady states of y(2) and y(3) are denoted as \( y(2)_l^{st} \), \( y(2)_u^{st} \), \( y(3)_l^{st} \) and \( y(3)_u^{st} \), respectively. Similarly, the MFPTs of leaving the lower (\( T_L \)) and upper (\( T_U \)) steady states were calculated as a function of the Hill coefficient and dissociation constants in the mutual repression cycle for y(2) and y(3).

Fig. 5

Mean first passage time (MFPT) analysis. a The logarithmic MFPTs of the lower and upper steady states of y(2) (TL and TU) and y(3) (TL and TU) are shown for the MRN and MRN-NA models as a function of the Hill coefficient n at dissociation constants K(2) = K(4) = 15. b The logarithmic MFPTs of the lower and upper steady states of y(2) and y(3) denoted as above are shown for the MRN and MRN-NA as a function of the dissociation constant K(2) = K(4) at the Hill coefficient n = 3. c The logarithmic MFPTs of the lower and upper steady states of y(2) and y(3) denoted as above are shown for the MRN-NA as a function of the negative autoregulation constants k(9) = k(10) at the dissociation constants K(2) = K(4) = 15 and Hill coefficient n = 3

Due to the asymmetry in both the MRN and MRN-NA models (Table 2), the MFPTs for y(2) and y(3) showed opposing behavior, as expected (Fig. 5). While the MFPT for y(2) to leave the lower steady state, \( T_L(y(2)_l^{st}) \), is consistently longer than the corresponding time to leave the upper steady state, \( T_U(y(2)_u^{st}) \), the system is more likely to leave the lower steady state of y(3), as indicated by \( T_L(y(3)_l^{st}) \) being much smaller than \( T_U(y(3)_u^{st}) \) (Fig. 5).

Table 2 Kinetic parameter values for the MRN and MRN-NA models

An increase in the Hill coefficient, as a measure of cooperativity, exacerbated this opposing trend in the MFPTs of y(2) and y(3) (Fig. 5a). The prolonged MFPTs at increasing Hill coefficients explain why the robustness of sustaining stochastic memory improved with increasing cooperativity. For the parameter space explored, the lower steady state of y(2) was more persistent in both models at all Hill coefficients, as evident from \( T_L(y(2)_l^{st}) \) > \( T_U(y(2)_u^{st}) \) (Fig. 5a). Moreover, all MFPTs for the MRN-NA, \( T_L(y(2)_l^{st}) \), \( T_U(y(2)_u^{st}) \), \( T_L(y(3)_l^{st}) \) and \( T_U(y(3)_u^{st}) \), were longer than their equivalents for the MRN model at any Hill coefficient (Fig. 5a). These results confirm an increased persistence of the steady states upon introduction of negative autoregulation in the MRN-NA compared to the MRN, which explains the improved memory function of the MRN-NA system. Specifically, the residence times at the upper and lower steady states of y(2) and y(3) were extended, enabling persistent memory after the end of the signal.

Next, we investigated the effect of the dissociation constants in the mutual repression cycle on the MFPTs of y(2) and y(3) (Fig. 5b). Consistently, the MFPTs for all steady states, \( T_L(y(2)_l^{st}) \), \( T_L(y(3)_l^{st}) \), \( T_U(y(2)_u^{st}) \) and \( T_U(y(3)_u^{st}) \), were found to be longer in the MRN-NA than in the MRN for any dissociation constants (Fig. 5b). For both models, all MFPTs gradually decreased with an increase in the dissociation constant, suggesting that strong binding in the mutual repression cycle is necessary for persistent memory. We also studied the MFPTs for the MRN-NA as a function of the negative autoregulation constants k(9) = k(10) (Fig. 5c). The MFPTs for the MRN-NA at all steady states were found to increase with increasing strength of the negative autoregulation. In summary, these MFPT analyses illuminated in detail how memory in the mutual repression networks improved with increasing stability of the steady states as a function of cooperativity or binding strength.

Probability densities of the steady-state levels in the MRN-NA

The previous analyses revealed a strong dependency of sustained memory on the robustness of the steady-states. To formally establish the probability densities associated with populating the upper and lower steady states of y(2) and y(3), we calculated the probability densities from the Fokker-Planck Equations (Eq. (15)) as a function of the Hill coefficient and dissociation constants (Fig. 6). The probability density of the upper steady state of y(2) increased while that of the corresponding lower steady state decreased with an increase in the Hill coefficient for both the MRN and the MRN-NA models (Fig. 6a). The addition of negative autoregulation in the MRN-NA system visibly made the upper steady state level of y(2) more dominant (Fig. 6a).

Fig. 6

Probability density of the steady-state levels. a Probability density of y(2) in the MRN and the MRN-NA as a function of the Hill coefficient, and b probability density of y(3) in the MRN and the MRN-NA as a function of the Hill coefficient at the dissociation constant K(2) = K(4) = 12. c Probability density of y(2) in the MRN and the MRN-NA as a function of the dissociation constant. d Probability density of y(3) in the MRN and the MRN-NA as a function of the dissociation constant at the Hill coefficient n = 3

The probability density of y(3) clearly illustrated the evolution of bistability and the subsequently increased probability densities of the lower steady state with increasing Hill coefficient (Fig. 6b). At a low Hill coefficient (n = 3), the MRN lacked a pronounced probability density of a lower steady state (Fig. 6b, solid blue line), while in the MRN-NA the probability density did not clearly separate the peaks at the lower and upper levels of y(3) (Fig. 6b, dashed blue line). This weak bistability was consistent with our previous analysis of the bistability regions (Fig. 4). In turn, higher Hill coefficients of n = 8 and n = 12 promoted first the emergence and then the separation of two clearly populated steady states (Fig. 6b). While the upper steady state of y(3) was dominant, for n = 12 the system switched to a dominant lower steady state of y(3) (Fig. 6b). Moreover, the addition of negative autoregulation strongly promoted the population of the lower steady state at all Hill coefficients analyzed (Fig. 6b, dashed lines). This analysis thus confirmed that, by buffering intrinsic noise, the negative feedback loops particularly stabilized the low expression states of y(3), i.e. those most susceptible to fluctuations.

Similarly, the probability densities of y(2) and y(3) were found to depend quantitatively and qualitatively on the dissociation constants K(2) and K(4) in the mutual repression cycle (Fig. 6c, d). At low K(2) = K(4) = 15, the upper steady state of y(2) was dominant in the MRN and MRN-NA models, with a slightly higher probability density for the MRN-NA model (Fig. 6c). In general, with decreasing values of K(2) and K(4), the upper steady state became more pronounced in both models, even more so in the MRN-NA (Fig. 6c). In turn, the probability density of the lower steady state of y(3) became dominant with decreasing dissociation constants in both models (Fig. 6d). Moreover, lower values of K(2) and K(4), as well as negative autoregulation, favored more pronounced bistability (Fig. 6d). The MRN-NA model increased the probability densities of the upper steady state of y(2) and the lower steady state of y(3) more than the MRN model did, which supports the persistent memory of the upper steady state of y(2) and the lower steady state of y(3) after the end of the signal (Fig. 6d). Taken together, these results provide a theoretical basis for our observation that the MRN-NA model displays an improved memory function in a stochastic context.

Consistency between stochastic simulation and the Fokker-Planck equations

To verify the quasi-steady-state assumption for our case, we validated that the solutions to the Fokker-Planck equations were consistent with the probability densities obtained by stochastic simulation of the full system (Figs. 7 and 8). The Fokker-Planck equations yielded almost the same probability density as the Gillespie stochastic simulation for initial molecule numbers of y(1) = y(2) = y(3) = 1 (Figs. 7 and 8). The main limitation of the quasi-steady-state assumption is an over-estimation of the separation of the steady states of y(2). However, in the MRN-NA, y(3) is more susceptible to fluctuations and is the driver of switching between steady states. Consequently, the results obtained under the quasi-steady-state assumption are meaningful for investigating the memory function in these mutual repression networks.

Fig. 7

Probability density profile of the MRN-NA model. Probability densities are computed from the Fokker-Planck equations of the MRN-NA model. a Probability density of y(2), and b probability density of y(3). The parameters are S = 0, k(1) = 100, k(2) = 1, K(1) = K(3) = 9, K(2) = K(4) = 43, K(5) = K(6) = 9, k(3) = k(6) = 18.1, k(9) = k(10) = 4.1, k(4) = 61.04 > k(7) = 43.1, k(5) = k(8) = 0.8, n = 8 for the MRN-NA model

Fig. 8

Probability density profile of the MRN and MRN-NA models in log space. Probability densities are computed from the Fokker-Planck equations of the MRN and MRN-NA models. a, b Probability density of y(2) in a the MRN and b the MRN-NA models. c, d Probability density of y(3) in c the MRN and d the MRN-NA models. The parameters for the MRN model are S = 0, k(1) = 100, k(2) = 1, K(1) = K(3) = 9, K(2) = K(4) = 43, k(3) = k(6) = 18.1, k(4) = 61.23 > k(7) = 43.1, k(5) = k(8) = 0.8, n = 8. The parameters for the MRN-NA model are S = 0, k(1) = 100, k(2) = 1, K(1) = K(3) = 9, K(2) = K(4) = 43, K(5) = K(6) = 9, k(3) = k(6) = 18.1, k(9) = k(10) = 4.1, k(4) = 61.04 > k(7) = 43.1, k(5) = k(8) = 0.8, n = 8. For stochastic simulations, three independent trajectories are shown

Discussion

By applying numerical integration of the rate equations, stochastic simulations and theoretical analysis of the Fokker-Planck equations, we investigated whether negative autoregulation can improve the memory function of a mutual repression network. Previous work had established the regulated mutual activation network (MAN) and mutual repression network (MRN) as good model systems to study fundamental properties of cellular memory [12]. Here, we have systematically dissected the contribution of additional negative autoregulation (the MRN-NA model) to the persistence of the memory function. In general and as expected, stochasticity decreased the memory function in all models. The addition of negative autoregulation in the MRN-NA, however, extended robust and persistent memory in both the deterministic and stochastic approaches to lower levels of cooperativity in the mutual repression cycle. Our results further suggest that, in addition to the added negative autoregulation, the robustness of the stochastic memory of the MRN-NA could be improved by increasing the binding strength of the repressor proteins.

In the present work, select sets of kinetic parameters that can give rise to memory were used instead of exhaustive searches for alternative parameter combinations. Despite accurately simulating the time evolution of protein concentrations in stochastic systems, the Gillespie algorithm is of limited use for characterizing sustained memory in more complex networks because its computational cost can quickly become prohibitive. To generalize some of our findings, phase diagrams of the memory region were derived as a function of the two most critical, previously identified parameters. However, and as evident from the small stochastic memory regions, finely tuned parameter combinations were necessary to achieve robust memory. This observation points to both a biological challenge of implementing a robust memory circuit and a theoretical challenge of rationalizing its mechanistic basis through exhaustive parameter perturbation.

Theoretical approaches that can compensate for these shortcomings and make the exploration of larger parameter ranges accessible often rely on additional assumptions. Notably, the quasi-steady-state assumption of separately analyzing steady states is widely used, generally performs well, and is reasonably justified in the present work. Seminal advances have now provided approximation methods that start to render even multivariate and nonlinear chemical master equations and the related Fokker-Planck equations amenable to theoretical analysis. For instance, the recently introduced linear-mapping approximation converts a nonlinear system into a linear problem via a mean-field approximation [54]. This and related techniques will vastly improve accuracy and computational efficiency for future analyses of stochasticity in more complex networks.

Biology has certainly evolved more complex architectures, but in most cases the trade-offs between network complexity and function remain poorly understood. For instance, while the MAN network was found to yield robust memory based on mutual activation [12], comparable results for the mutual repression network could only be obtained after the addition of negative autoregulation, i.e. by a more complex network architecture. More complex network architectures are often more expensive to maintain, but may yield advantages in robustness and dynamic control. To this end, several related cellular networks of increased complexity have been discovered and await further characterization. For instance, a Notch-Delta mutual repression network serves to communicate between neighboring cells [60], where an increase in Notch activity within a cell decreases that in its neighboring cell. The Notch-Delta mutual repression generates inhomogeneous, opposing protein synthesis in otherwise homogeneous cell populations, which however depends on spatial changes in gene expression.

Finally, recent papers have outlined many fundamental principles of how to achieve bistability in small networks [61,62,63]. The present work extends these findings by a comparative network analysis that delineates the effect of additional negative autoregulation on cellular memory.

Conclusions

We have shown that the addition of negative autoregulation can reduce intrinsic noise and generate persistent stochastic memory in a mutual repression network. Our mathematical comparison and theoretical analyses contribute to an improved understanding of how genetic circuits can encode biological function and may aid the rational engineering of memory networks for applications in synthetic biology, medicine, and biotechnology.

Methods

MRN-NA model

We constructed a simple model of a gene regulatory network that consists of two genes encoding transcription factors, visualized according to a previously established graphical notation [55, 56] (Fig. 1). Our mutual repression network with negative autoregulation (MRN-NA) comprises proteins y(1), y(2) and y(3). The input signal S induces the synthesis of y(1), which in turn activates the synthesis of y(2) and represses the production of y(3). Furthermore, the synthesis reactions of y(2) and y(3) are mutually repressed with cooperativity, and negative autoregulation additionally governs the synthesis of y(2) and y(3). This system can be bistable and exhibit memory. In formulating the model, we made deliberate use of Michaelis-Menten approximations under the assumption that substrate concentrations are in excess and association equilibria are quickly attained. The model is described by the following Eqs. 1–3:

$$ \frac{dy(1)}{dt}=k(1).S-k(2).y(1) $$
(1)
$$ \frac{dy(2)}{dt}=k(3).\frac{y(1)}{y(1)+K(1)}+k(4).\frac{K{(2)}^n}{y{(3)}^n+K{(2)}^n}+k(9).\frac{K(5)}{y(2)+K(5)}-k(5).y(2) $$
(2)
$$ \frac{dy(3)}{dt}=k(6).\frac{K(3)}{y(1)+K(3)}+k(7).\frac{K{(4)}^n}{y{(2)}^n+K{(4)}^n}+k(10).\frac{K(6)}{y(3)+K(6)}-k(8).y(3) $$
(3)

All parameters are described in Table 1.
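As a minimal illustration of how Eqs. 1–3 can be integrated numerically, the following Python sketch encodes the three rate equations and solves them with a standard SciPy solver (the published analyses used MATLAB). The parameter values are taken from the figure legends for illustration, and the unit signal amplitude and step-shaped signal window are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter set (see Tables 1 and 2 for the actual values).
p = dict(k1=100, k2=1, k3=18.1, k4=61.04, k5=0.8, k6=18.1, k7=43.1, k8=0.8,
         k9=4.1, k10=4.1, K1=9, K2=43, K3=9, K4=43, K5=9, K6=9, n=8)

def signal(t):
    """Transient input S, applied from simulation step 250 to 500 (assumed amplitude 1)."""
    return 1.0 if 250 <= t <= 500 else 0.0

def mrn_na_rhs(t, y, p):
    """Right-hand side of Eqs. 1-3 for the MRN-NA model."""
    y1, y2, y3 = y
    n = p['n']
    dy1 = p['k1'] * signal(t) - p['k2'] * y1
    dy2 = (p['k3'] * y1 / (y1 + p['K1'])
           + p['k4'] * p['K2']**n / (y3**n + p['K2']**n)
           + p['k9'] * p['K5'] / (y2 + p['K5'])
           - p['k5'] * y2)
    dy3 = (p['k6'] * p['K3'] / (y1 + p['K3'])
           + p['k7'] * p['K4']**n / (y2**n + p['K4']**n)
           + p['k10'] * p['K6'] / (y3 + p['K6'])
           - p['k8'] * y3)
    return [dy1, dy2, dy3]

# max_step keeps the solver from stepping over the discontinuous signal window.
sol = solve_ivp(mrn_na_rhs, (0, 1000), [1.0, 1.0, 1.0], args=(p,),
                method='LSODA', max_step=1.0)
```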

Systems modeling and determination of successful memory

The time evolution of the gene expression levels of y(1), y(2) and y(3) was simulated both deterministically and stochastically. For a deterministic systems description, the ordinary differential equations (Eqs. 1–3) were solved in MATLAB (Mathworks) with standard solvers. Stochastic trajectories were simulated with the Gillespie algorithm [64].

In all model simulations, the deterministic and stochastic time courses of y(1), y(2) and y(3) were simulated for 1000 simulation time steps. The input signal S was applied from simulation step 250 to 500. Deterministic memory was defined as sustained protein levels in the numerical integration of the rate equations during the subsequent period from simulation steps 500 to 1000, after the end of the signal S. Similarly, stochastic memory was assessed as sustained protein levels in the stochastic simulations during the period from simulation steps 500 to 1000 after the end of the signal S. While arbitrary, a threshold of 1000 simulation time steps yielded a robust and readily accessible criterion to assess memory. Due to the probabilistic nature of the stochastic simulations, a given set of parameters was scored as yielding robust stochastic memory if 18 out of 20 stochastic simulations, i.e. 90%, showed persistent memory [12].
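For illustration, the sketch below implements a Gillespie simulation of the model in Python, under the assumption that each synthesis and degradation term of Eqs. 1–3 acts as an independent birth or death propensity and that concentrations are read directly as molecule numbers; the signal window and amplitude, and the parameter dictionary p, follow the deterministic sketch above.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_mrn_na(p, t_end=1000.0, y0=(1, 1, 1)):
    """Gillespie simulation of the MRN-NA model, with one birth (+1) or death (-1)
    channel per term of Eqs. 1-3 (a sketch; time is measured in simulation steps)."""
    t, (y1, y2, y3) = 0.0, y0
    times, states = [t], [(y1, y2, y3)]
    n = p['n']
    while t < t_end:
        S = 1.0 if 250 <= t <= 500 else 0.0           # assumed signal amplitude
        a = np.array([
            p['k1'] * S,                               # y(1) synthesis
            p['k2'] * y1,                              # y(1) degradation
            p['k3'] * y1 / (y1 + p['K1'])
            + p['k4'] * p['K2']**n / (y3**n + p['K2']**n)
            + p['k9'] * p['K5'] / (y2 + p['K5']),      # y(2) synthesis
            p['k5'] * y2,                              # y(2) degradation
            p['k6'] * p['K3'] / (y1 + p['K3'])
            + p['k7'] * p['K4']**n / (y2**n + p['K4']**n)
            + p['k10'] * p['K6'] / (y3 + p['K6']),     # y(3) synthesis
            p['k8'] * y3,                              # y(3) degradation
        ])
        a0 = a.sum()
        if a0 <= 0:
            break
        t += rng.exponential(1.0 / a0)                 # time to the next reaction
        r = rng.choice(6, p=a / a0)                    # which reaction fires
        d1, d2, d3 = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                      (0, -1, 0), (0, 0, 1), (0, 0, -1)][r]
        y1, y2, y3 = y1 + d1, y2 + d2, y3 + d3
        times.append(t)
        states.append((y1, y2, y3))
    return np.array(times), np.array(states)
```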

Theoretical model comparisons

All parameters were set so as to render the MRN and MRN-NA models as comparable as possible (Tables 1, 2) [12, 65]. With the exception of the additional negative autoregulation loops, this meant using the same parameters throughout. The parameters of the negative autoregulation were tuned to conserve the steady state levels of y(2) and y(3) between the MRN and MRN-NA models (Tables 1, 2). Indeed, the high steady-state levels of both models were set to the same values, while the low steady-state levels were tuned to be as similar as possible. Note that it is not possible to also tune the low steady-state levels to exactly the same values. Given the asymmetry of the models (activation of y(2) and suppression of y(3) by S), the deterministic steady-state levels of y(2) and y(3) always show opposing behavior: when the steady-state level of y(2) is high, that of y(3) is low, and vice versa. Our parameter choices conserved both the low and high steady states of y(2) and y(3) between the MRN and MRN-NA models [12] (Additional file 1: Texts S1, S2).

To systematically identify parameter choices that can give rise to successful memory, phase diagrams of the memory region as a function of the two most critical parameters, the Hill coefficient and dissociation constants in the mutual repression cycle, were computed by successive simulation at different parameter combinations.
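A sketch of such a scan is shown below. It reuses gillespie_mrn_na and the parameter dictionary p from the sketches above; the has_memory criterion (a tolerance-based comparison of post-signal levels to signal-period levels) is a hypothetical stand-in for the exact thresholds used in the published analysis.

```python
import numpy as np

def has_memory(times, states, tol=0.5):
    """Hypothetical memory criterion: post-signal means of y(2) and y(3)
    (steps 500-1000) stay within a relative tolerance of their signal-period
    means (steps 270-500)."""
    sig = (times >= 270) & (times <= 500)
    post = (times >= 500) & (times <= 1000)
    if not sig.any() or not post.any():
        return False
    y2_ok = abs(states[post, 1].mean() - states[sig, 1].mean()) <= tol * states[sig, 1].mean()
    y3_ok = abs(states[post, 2].mean() - states[sig, 2].mean()) <= tol * max(states[sig, 2].mean(), 1.0)
    return y2_ok and y3_ok

def stochastic_memory_region(hill_values, K_values, p, n_runs=20, n_success=18):
    """Scan Hill coefficients and dissociation constants K(2) = K(4); a point belongs
    to the stochastic memory region if at least 18 of 20 runs sustain memory."""
    region = np.zeros((len(hill_values), len(K_values)), dtype=bool)
    for i, n in enumerate(hill_values):
        for j, K in enumerate(K_values):
            pp = dict(p, n=n, K2=K, K4=K)
            successes = sum(has_memory(*gillespie_mrn_na(pp)) for _ in range(n_runs))
            region[i, j] = successes >= n_success
    return region
```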

Intrinsic noise in gene expression was quantified by computing the coefficient of variation (CV) of the levels of y(2) and y(3) during the input signal S from the stochastic simulations. To eliminate transition effects, we considered the period from simulation step 270 to 500, i.e. omitting the first 20 simulation steps of the signal period (Fig. 2f). In the same manner, we estimated the CVs of the levels of y(2) and y(3) after stop of the signal, for the period from simulation step 500 to 750 (Fig. 2g).
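A minimal sketch of this CV computation is shown below; because the Gillespie algorithm produces unevenly spaced event times, time-weighting the moments or resampling onto a uniform grid would give a slightly more accurate estimate.

```python
import numpy as np

def coefficient_of_variation(times, levels, t_start, t_end):
    """CV of a simulated protein level within the window [t_start, t_end]."""
    mask = (times >= t_start) & (times <= t_end)
    x = levels[mask]
    return x.std() / x.mean()

# Example: CVs of y(2) during the signal period (steps 270-500, Fig. 2f)
# and after the signal (steps 500-750, Fig. 2g) from one stochastic trajectory.
times, states = gillespie_mrn_na(p)
cv_signal = coefficient_of_variation(times, states[:, 1], 270, 500)
cv_post = coefficient_of_variation(times, states[:, 1], 500, 750)
```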

Stochastic potential and probability density function

The reaction rate equations (Eqs. 1–3) were converted into the one-variable functions \( f_{MRN-NA2}(y) \) and \( f_{MRN-NA3}(y) \) (Additional file 1: Text S2). Using the quasi-steady-state assumption, we solved for the probability density at the steady state (t → ∞). At t → ∞, y(2) and y(3) approach the steady state, i.e. \( \frac{dy(2)}{dt}\to 0 \) and \( \frac{dy(3)}{dt}\to 0 \). Therefore, we assumed \( \frac{dy(3)}{dt}=0 \) to solve the ODE of \( \frac{dy(2)}{dt} \) at t → ∞. In a similar manner, we assumed \( \frac{dy(2)}{dt}=0 \) to solve the ODE of \( \frac{dy(3)}{dt} \) at t → ∞.

Here, we illustrate how the one-variable equation for \( f_{MRN-NA2}(y) \) in a stochastic environment is given by:

$$ \frac{dy}{dt}={f}_{MRN- NA2}(y)=k(4).\frac{K{(2)}^n}{{y_{SS}}^n+K{(2)}^n}+k(9).\frac{K(5)}{y+K(5)}-k(5).y $$
(4)

where

$$ y_{ss}=\frac{1}{2.k(5)}\left[\left\{k(6).\frac{K(1)}{y+K(1)}+k(7).\frac{K{(2)}^n}{y^n+K{(2)}^n}-k(5).K(5)\right\}+{\left\{{\left(k(6).\frac{K(1)}{y+K(1)}+k(7).\frac{K{(2)}^n}{y^n+K{(2)}^n}-k(5).K(5)\right)}^2+4.k(5).K(5).\left(k(9)+k(6).\frac{K(1)}{y+K(1)}+k(7).\frac{K{(2)}^n}{y^n+K{(2)}^n}\right)\right\}}^{1/2}\right] $$
(5)

This equation can be described by the birth-and-death stochastic processes [12, 21, 49, 66]:

$$ {W}_{birth}(y)=k(4).\frac{K{(2)}^n}{{y_{SS}}^n+K{(2)}^n}+k(9).\frac{K(5)}{y+K(5)} $$
(6)
$$ {W}_{death}(y)=k(5).y $$
(7)

where \( y_{ss} \) is given by Eq. (5).
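For illustration, Eqs. 5–7 translate directly into Python as follows; the sketch reuses the parameter dictionary p introduced in the Methods sketches above, with the parameter indices kept exactly as they appear in Eqs. 5–7.

```python
import numpy as np

def y_ss(y, p):
    """Quasi-steady-state level entering Eq. (4), following the form of Eq. (5)."""
    n = p['n']
    a = (p['k6'] * p['K1'] / (y + p['K1'])
         + p['k7'] * p['K2']**n / (y**n + p['K2']**n))
    b = a - p['k5'] * p['K5']
    return (b + np.sqrt(b**2 + 4 * p['k5'] * p['K5'] * (p['k9'] + a))) / (2 * p['k5'])

def W_birth(y, p):
    """Birth propensity of the reduced one-variable process, Eq. (6)."""
    n = p['n']
    return (p['k4'] * p['K2']**n / (y_ss(y, p)**n + p['K2']**n)
            + p['k9'] * p['K5'] / (y + p['K5']))

def W_death(y, p):
    """Death propensity of the reduced one-variable process, Eq. (7)."""
    return p['k5'] * y
```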

The corresponding chemical master equation was given by:

$$ \frac{\partial P\left(y,t\right)}{\partial t}={W}_{birth}\left(y-1\right)P\left(y-1,t\right)+{W}_{death}\left(y+1\right)P\left(y+1,t\right)-\left\{{W}_{birth}(y)+{W}_{death}(y)\right\}P\left(y,t\right) $$
(8)

where P(y, t) is the probability density of the protein concentration y. Next, the chemical master equation was transformed into the Fokker-Planck equation [12, 21, 50, 66,67,68]:

$$ \frac{\partial P\left(y,t\right)}{\partial t}=-\frac{\partial }{\partial y}\left[A(y)P\left(y,t\right)\right]+\frac{1}{2}\frac{\partial^2}{\partial {y}^2}\left[B(y)P\left(y,t\right)\right] $$
(9)

where

$$ A(y)=k(4).\frac{K{(2)}^n}{{y_{SS}}^n+K{(2)}^n}+k(9).\frac{K(5)}{y+K(5)}-k(5).y $$
(10)

and \( y_{ss} \) is given by Eq. (5). The noise function is given by:

$$ B(y)=k(4).\frac{K{(2)}^n}{{y_{SS}}^n+K{(2)}^n}+k(9).\frac{K(5)}{y+K(5)}+k(5).y $$
(11)

where \( y_{ss} \) is given by Eq. (5). In the same manner, the Fokker-Planck equations of the four one-variable equations, including \( f_{MRN-NA2}(y) \), were solved under the following conditions (Additional file 1: Text S2):

$$ A(y)=\begin{cases} f_{MRN-NA2}(y) & \text{for } y=y(2)\\ f_{MRN-NA3}(y) & \text{for } y=y(3) \end{cases}\qquad \text{for the MRN-NA model} $$
(12)

and the noise functions were given by:

$$ B(y)=\begin{cases} g_{MRN-NA2}(y) & \text{for } y=y(2)\\ g_{MRN-NA3}(y) & \text{for } y=y(3) \end{cases}\qquad \text{for the MRN-NA model} $$
(13)

Finally, we consider the stochastic potential analysis. The limit of P(y, t) at t → ∞ yields \( P_{st}(y) \), the stationary probability density function of y [12, 21, 59, 66], which is given by:

$$ {P}_{st}(y)=\frac{N_c}{B(y)}\exp \left[2{\int}^y\frac{A(z)}{B(z)} dz\right] $$
(14)

where Nc is the normalization constant. Eq. (14) can be recast in the form:

$$ {P}_{st}(y)={N}_c{e}^{-2{\varPhi}_s(y)} $$
(15)

where

$$ {\varPhi}_s(y)=\frac{1}{2}\ln \left[B(y)\right]-{\int}^y\frac{A(z) dz}{B(z)} $$
(16)

is called the stochastic potential of f(y) [57, 58, 66].
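The stationary density of Eqs. 14–16 can be evaluated numerically on a concentration grid, with A(y) and B(y) assembled from the birth and death propensities as in Eqs. 10–11. The sketch below reuses W_birth and W_death from above; the grid range is an illustrative assumption, and the normalization constant N_c is obtained by quadrature.

```python
import numpy as np

def stationary_density(y_grid, p):
    """Stationary probability density P_st(y), Eqs. (14)-(16), with
    A = W_birth - W_death (Eq. 10) and B = W_birth + W_death (Eq. 11)."""
    A = W_birth(y_grid, p) - W_death(y_grid, p)
    B = W_birth(y_grid, p) + W_death(y_grid, p)
    dz = np.diff(y_grid)
    # Cumulative integral of A(z)/B(z) along the grid (lower limit absorbed into N_c).
    integral = np.concatenate(([0.0], np.cumsum((A / B)[:-1] * dz)))
    P = np.exp(2.0 * integral) / B
    norm = np.sum(0.5 * (P[1:] + P[:-1]) * dz)        # normalization constant N_c
    return P / norm

# Example: P_st for the reduced y(2) variable on an assumed concentration range.
y_grid = np.linspace(0.01, 200.0, 4000)
P_st = stationary_density(y_grid, p)
```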

Mean first-passage time analysis

In gene expression, the stability of a steady state has to be estimated in the presence of noise. The stability of a steady state of a stochastic system can be estimated by the mean first-passage time (MFPT), which describes the expected time within which the system leaves a stable steady state due to random fluctuations. The system can exit the potential minimum of an equilibrium point due to the effect of noise. The exit time depends on the specific realization of the random process; this is known as the first-passage time. The MFPT is the average of the first-passage times over many realizations. In the context of anticipating phase shifts, the MFPT provides a useful tool to characterize the time scale on which a phase transition is likely to occur.

Let us consider \( y_l^{st} \) and \( y_u^{st} \) (\( y_l^{st} < y_u^{st} \)) as two steady states corresponding to a low and a high protein concentration, respectively, separated by the unstable steady state defining the potential barrier \( y_b^{un} \) (i.e., the unstable equilibrium point). The basin of attraction of the state \( y_u^{st} \) extends from \( y_b^{un} \) to +∞, as it lies to the right of \( y_l^{st} \). Let T(y) be the MFPT to state \( y_b^{un} \) starting at \( y > y_b^{un} \). T(y) satisfies the following ordinary differential equation [21, 59, 67, 68]:

$$ A(y)\frac{\partial T(y)}{\partial y}+\frac{1}{2}B(y)\frac{\partial^2T(y)}{\partial {y}^2}=-1 $$
(17)

with boundary conditions:

$$ T\left({y}_b^{un}\right)=0\kern0.24em \mathrm{and}\ \frac{\partial T\left(+\infty \right)}{\partial y}=0 $$
(18)

By solving Eqs. 17–18, the MFPTs \( T_U(y_u^{st}) \) and \( T_L(y_l^{st}) \) of reaching state \( y_b^{un} \) are calculated for the basin of attraction of \( y_u^{st} \), which extends from \( y_b^{un} \) to +∞, and for the basin of attraction of \( y_l^{st} \), which extends from 0 to \( y_b^{un} \), respectively, as follows:

$$ {T}_L\left({y}_l^{st}\right)=2\underset{y_l^{st}}{\overset{y_b^{un}}{\int }}\frac{1}{\varPsi (x)} dx.\underset{0}{\overset{x}{\int }}\frac{\varPsi (z)}{B(z)} dz $$
(19)
$$ {T}_U\left({y}_u^{st}\right)=2\underset{y_b^{un}}{\overset{y_u^{st}}{\int }}\frac{1}{\varPsi (x)} dx.\underset{x}{\overset{\infty }{\int }}\frac{\varPsi (z)}{B(z)} dz $$
(20)

where

$$ \varPsi (y)=\exp \left(\underset{y_0}{\overset{y}{\int }}\frac{2A(w)}{B(w)} dw\right) $$
(21)

with \( y_0 = 0 \) for the \( y_u^{st} \to y_l^{st} \) transition and \( y_0 = y_b^{un} \) for the \( y_l^{st} \to y_u^{st} \) transition. A high MFPT of a steady-state protein level indicates that the level is sustained for a long time, whereas a low value indicates that the protein can readily leave the steady state and quickly transition to another level.
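A sketch of evaluating Eq. (19) numerically is given below, reusing W_birth and W_death from the sketches above; T_U (Eq. 20) follows analogously, with the outer integral running from the barrier to the upper steady state and the inner integral extended to a large upper cut-off. The positions y_l and y_b would be obtained beforehand from the extrema of the stochastic potential.

```python
import numpy as np

def mfpt_lower(y_l, y_b, p, num=4000):
    """T_L(y_l^st): mean first-passage time from the lower steady state y_l to the
    barrier y_b, via the double integral of Eq. (19) with Psi from Eq. (21)."""
    z = np.linspace(1e-6, y_b, num)                    # grid from ~0 to the barrier
    A = W_birth(z, p) - W_death(z, p)                  # drift term, Eq. (10)
    B = W_birth(z, p) + W_death(z, p)                  # noise term, Eq. (11)
    dz = np.diff(z)
    # Cumulative integral of 2A/B, shifted so that Psi(y_b) = 1 (y0 = y_b in Eq. 21).
    F = np.concatenate(([0.0], np.cumsum((2 * A / B)[:-1] * dz)))
    Psi = np.exp(F - F[-1])
    # Inner integral of Eq. (19): int_0^x Psi(z)/B(z) dz, accumulated along the grid.
    inner = np.concatenate(([0.0], np.cumsum((Psi / B)[:-1] * dz)))
    # Outer integral of Eq. (19) from y_l to y_b (trapezoidal rule).
    mask = z >= y_l
    vals, zs = inner[mask] / Psi[mask], z[mask]
    return 2.0 * np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(zs))
```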

Availability of data and materials

The computer code used to perform all analyses and reproduce all project data is available at https://github.com/ABMSUH/MRN-and-MRN-NA-models

Abbreviations

CV:

Coefficient of variation

MAN:

Mutual activation network

MFPT:

Mean first-passage time

MRN:

Mutual repression network

mRNA:

Messenger ribonucleic acid

MRN-NA:

Mutual repression network with negative autoregulation

ODE:

Ordinary differential equation

References

  1. Kandel ER, Dudai Y, Mayford MR. The molecular and systems biology of memory. Cell. 2014;157(1):163–86.

  2. Casadesus J, D'Ari R. Memory in bacteria and phage. BioEssays. 2002;24(6):512–8.

  3. Harvey ZH, Chen Y, Jarosz DF. Protein-based inheritance: epigenetics beyond the chromosome. Mol Cell. 2018;69(2):195–202.

  4. Bergeron-Sandoval LP, Safaee N, Michnick SW. Mechanisms and consequences of macromolecular phase separation. Cell. 2016;165(5):1067–79.

  5. Burrill DR, Silver PA. Making cellular memories. Cell. 2010;140(1):13–8.

  6. Shen K, Teruel MN, Connor JH, Shenolikar S, Meyer T. Molecular memory by reversible translocation of calcium/calmodulin-dependent protein kinase II. Nat Neurosci. 2000;3(9):881–6.

  7. Xiong W, Ferrell JE Jr. A positive-feedback-based bistable 'memory module' that governs a cell fate decision. Nature. 2003;426(6965):460–5.

  8. Wang L, Walker BL, Iannaccone S, Bhatt D, Kennedy PJ, Tse WT. Bistable switches control memory and plasticity in cellular differentiation. Proc Natl Acad Sci U S A. 2009;106(16):6638–43.

  9. Doncic A, Atay O, Valk E, Grande A, Bush A, Vasen G, et al. Compartmentalization of a bistable switch enables memory to cross a feedback-driven transition. Cell. 2015;160(6):1182–95.

  10. Herrera-Delgado E, Perez-Carrasco R, Briscoe J, Sollich P. Memory functions reveal structural properties of gene regulatory networks. PLoS Comput Biol. 2018;14(2):e1006003.

  11. Ajo-Franklin CM, Drubin DA, Eskin JA, Gee EP, Landgraf D, Phillips I, et al. Rational design of memory in eukaryotic cells. Genes Dev. 2007;21(18):2271–6.

  12. Ul Hasan ABMS, Kurata H. Mathematical comparison of memory functions between mutual activation and repression networks in a stochastic environment. J Theor Biol. 2017;427:28–40.

  13. Alon U. Network motifs: theory and experimental approaches. Nat Rev Genet. 2007;8(6):450–61.

  14. Brandman O, Ferrell JE Jr, Li R, Meyer T. Interlinked fast and slow positive feedback loops drive reliable cell decisions. Science. 2005;310(5747):496–8.

  15. Elowitz MB, Leibler S. A synthetic oscillatory network of transcriptional regulators. Nature. 2000;403(6767):335–8.

  16. Freeman M. Feedback control of intercellular signalling in development. Nature. 2000;408(6810):313–9.

  17. Hasty J, Pradines J, Dolnik M, Collins JJ. Noise-based switches and amplifiers for gene expression. Proc Natl Acad Sci U S A. 2000;97(5):2075–80.

  18. Chang DE, Leung S, Atkinson MR, Reifler A, Forger D, Ninfa AJ. Building biological memory by linking positive feedback loops. Proc Natl Acad Sci U S A. 2010;107(1):175–80.

  19. Ferrell JE Jr. Self-perpetuating states in signal transduction: positive feedback, double-negative feedback and bistability. Curr Opin Cell Biol. 2002;14(2):140–8.

  20. Shopera T, Henson WR, Ng A, Lee YJ, Ng K, Moon TS. Robust, tunable genetic memory from protein sequestration combined with positive feedback. Nucleic Acids Res. 2015;43(18):9086–94.

  21. Cheng Z, Liu F, Zhang XP, Wang W. Robustness analysis of cellular memory in an autoactivating positive feedback system. FEBS Lett. 2008;582(27):3776–82.

  22. Kim TH, Jung SH, Cho KH. Interlinked mutual inhibitory positive feedbacks induce robust cellular memory effects. FEBS Lett. 2007;581(25):4899–904.

  23. Acar M, Becskei A, van Oudenaarden A. Enhancement of cellular memory by reducing stochastic transitions. Nature. 2005;435(7039):228–32.

  24. Gardner TS, Cantor CR, Collins JJ. Construction of a genetic toggle switch in Escherichia coli. Nature. 2000;403(6767):339–42.

  25. Frigola D, Casanellas L, Sancho JM, Ibanes M. Asymmetric stochastic switching driven by intrinsic molecular noise. PLoS One. 2012;7(2):e31407.

  26. Cherry JL, Adler FR. How to make a biological switch. J Theor Biol. 2000;203(2):117–33.

  27. Ferrell JE, Xiong W. Bistability in cell signaling: how to make continuous processes discontinuous, and reversible processes irreversible. Chaos. 2001;11(1):227–36.

  28. Pedraza JM, Paulsson J. Effects of molecular memory and bursting on fluctuations in gene expression. Science. 2008;319(5861):339–43.

  29. Tabor JJ, Salis HM, Simpson ZB, Chevalier AA, Levskaya A, Marcotte EM, et al. A synthetic genetic edge detection program. Cell. 2009;137(7):1272–81.

  30. Auslander S, Auslander D, Muller M, Wieland M, Fussenegger M. Programmable single-cell mammalian biocomputers. Nature. 2012;487(7405):123–7.

  31. Daniel R, Rubens JR, Sarpeshkar R, Lu TK. Synthetic analog computation in living cells. Nature. 2013;497(7451):619–23.

  32. Becskei A, Seraphin B, Serrano L. Positive feedback in eukaryotic gene networks: cell differentiation by graded to binary response conversion. EMBO J. 2001;20(10):2528–35.

  33. Raj A, van Oudenaarden A. Nature, nurture, or chance: stochastic gene expression and its consequences. Cell. 2008;135(2):216–26.

  34. McAdams HH, Arkin A. Stochastic mechanisms in gene expression. Proc Natl Acad Sci U S A. 1997;94(3):814–9.

  35. Ozbudak EM, Thattai M, Kurtser I, Grossman AD, van Oudenaarden A. Regulation of noise in the expression of a single gene. Nat Genet. 2002;31(1):69–73.

  36. Sneppen K, Micheelsen MA, Dodd IB. Ultrasensitive gene regulation by positive feedback loops in nucleosome modification. Mol Syst Biol. 2008;4:182.

  37. Elowitz MB, Levine AJ, Siggia ED, Swain PS. Stochastic gene expression in a single cell. Science. 2002;297(5584):1183–6.

  38. Blake WJ, Kærn M, Cantor CR, Collins JJ. Noise in eukaryotic gene expression. Nature. 2003;422(6932):633–7.

  39. Pedraza JM, van Oudenaarden A. Noise propagation in gene networks. Science. 2005;307(5717):1965–9.

  40. Eldar A, Elowitz MB. Functional roles for noise in genetic circuits. Nature. 2010;467(7312):167–73.

  41. Zhang H, Chen Y, Chen Y. Noise propagation in gene regulation networks involving interlinked positive and negative feedback loops. PLoS One. 2012;7(12):e51840.

  42. Tan C, Marguet P, You L. Emergent bistability by a growth-modulating positive feedback circuit. Nat Chem Biol. 2009;5(11):842–8.

  43. Basu S, Gerchman Y, Collins CH, Arnold FH, Weiss R. A synthetic multicellular system for programmed pattern formation. Nature. 2005;434(7037):1130–4.

  44. Moon TS, Lou C, Tamsir A, Stanton BC, Voigt CA. Genetic programs constructed from layered logic gates in single cells. Nature. 2012;491(7423):249–53.

  45. Kurata H, Maeda K, Onaka T, Takata T. BioFNet: biological functional network database for analysis and synthesis of biological systems. Brief Bioinform. 2014;15(5):699–709.

  46. Thattai M, van Oudenaarden A. Intrinsic noise in gene regulatory networks. Proc Natl Acad Sci U S A. 2001;98(15):8614–9.

  47. Madar D, Dekel E, Bren A, Alon U. Negative auto-regulation increases the input dynamic-range of the arabinose system of Escherichia coli. BMC Syst Biol. 2011;5:111.

  48. Dublanche Y, Michalodimitrakis K, Kümmerer N, Foglierini M, Serrano L. Noise in transcription negative feedback loops: simulation and experimental analysis. Mol Syst Biol. 2006;2:41.

  49. Thomas P, Popovic N, Grima R. Phenotypic switching in gene regulatory networks. Proc Natl Acad Sci U S A. 2014;111(19):6994–9.

  50. Schnoerr D, Sanguinetti G, Grima R. Approximation and inference methods for stochastic biochemical kinetics – a tutorial review. J Phys A: Math Theor. 2018;51(16):1–73.

  51. Hornos JE, Schultz D, Innocentini GC, Wang J, Walczak AM, Onuchic JN, et al. Self-regulating gene: an exact solution. Phys Rev E. 2005;72(5 Pt 1):051907.

  52. Wall ME, editor. Quantitative biology: from molecular to cellular systems. Chapman & Hall/CRC Mathematical and Computational Biology Series. Boca Raton: CRC Press; 2012.

  53. Thomas P, Straube AV, Grima R. Communication: limitations of the stochastic quasi-steady-state approximation in open biochemical reaction networks. J Chem Phys. 2011;135(18):181103.

  54. Cao Z, Grima R. Linear mapping approximation of gene regulatory networks with stochastic dynamics. Nat Commun. 2018;9(1):3305.

  55. Kurata H, Matoba N, Shimizu N. CADLIVE for constructing a large-scale biochemical network based on a simulation-directed notation and its application to yeast cell cycle. Nucleic Acids Res. 2003;31(14):4071–84.

  56. Kurata H, Inoue K, Maeda K, Masaki K, Shimokawa Y, Zhao Q. Extended CADLIVE: a novel graphical notation for design of biochemical network maps and computational pathway analysis. Nucleic Acids Res. 2007;35(20):e134.

  57. Gardiner CW. Stochastic Methods: A Handbook for the Natural and Social Sciences, Springer Series in Synergetics. 4th ed. Berlin: Springer-Verlag; 2009.

  58. Risken H, Frank T. The Fokker–Planck Equation: Methods of Solution and Applications. 2nd ed. Berlin: Springer; 1996.

  59. Sharma Y, Dutta PS, Gupta AK. Anticipating regime shifts in gene expression: the case of an autoactivating positive feedback loop. Phys Rev E. 2016;93(3):032404.

  60. Matsuda M, Koga M, Woltjen K, Nishida E, Ebisuya M. Synthetic lateral inhibition governs cell-type bifurcation with robust ratios. Nat Commun. 2015;6:6195.

  61. Lipshtat A, Loinger A, Balaban NQ, Biham O. Genetic toggle switch without cooperative binding. Phys Rev Lett. 2006;96(18):188101.

  62. Loinger A, Lipshtat A, Balaban NQ, Biham O. Stochastic simulations of genetic switch systems. Phys Rev E. 2007;75(2 Pt 1):021904.

  63. Warren PB, ten Wolde PR. Chemical models of genetic toggle switches. J Phys Chem B. 2005;109(14):6812–23.

  64. Gillespie DT. Exact stochastic simulation of coupled chemical reactions. J Phys Chem. 1977;81(25):2340–61.

  65. Alves R, Savageau MA. Extending the method of mathematically controlled comparison to include numerical comparisons. Bioinformatics. 2000;16(9):786–98.

  66. Scott M, Hwa T, Ingalls B. Deterministic characterization of stochastic genetic circuits. Proc Natl Acad Sci U S A. 2007;104(18):7402–7.

  67. Gillespie DT. The chemical Langevin equation. J Chem Phys. 2000;113(1):297–306.

  68. Drury KLS. Shot noise perturbations and mean first passage times between stable states. Theor Popul Biol. 2007;72(1):153–66.

Acknowledgments

We are grateful to Dr. Kazuhiro Maeda for helpful discussions.

Funding

SP holds the Canada Research Chair in Computational Systems Biology and acknowledges funding from the Merck Sharp Dohme program of the Faculty of Medicine, Université de Montréal. The funding bodies played no role in the design of the study, in the collection, analysis, and interpretation of data, or in the writing of the manuscript.

Author information

Contributions

Conceptualized the research: ABMSUH, HK; Developed all methodology and performed all analysis: ABMSUH; Interpreted the results: ABMSUH, HK, SP; Wrote the original draft: ABMSUH; Reviewed and edited the manuscript: ABMSUH, HK, SP. All authors have read and approved the final manuscript.

Corresponding authors

Correspondence to Hiroyuki Kurata or Sebastian Pechmann.

Ethics declarations

Ethics approval and consent to participate

This research did not discriminate on the basis of gender, sexual orientation, visible minority status, disability, or religious belief. No animals or human subjects were used or harmed in this study.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1:

  • Text S1. Parameter settings of the MRN-NA.
  • Text S2. One-variable equation and noise function of the MRN-NA.
  • Text S3. Stochastic potential profile analysis. We estimated the stochastic potential profile of the one-variable rate equation derived for the MRN and MRN-NA models as described in Appendix F of reference [12].
  • Figure S1. Bifurcation analysis of the MRN and MRN-NA models.
  • Figure S2. Stochastic simulations for 2000 simulation steps.
  • Figure S3. Stochastic simulation of the MRN for 20000 simulation time steps.
  • Figure S4. Stochastic fluctuations after the signal period.
  • Figure S5. Stochastic potential profile of the MRN and MRN-NA models.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Hasan, A.B.M.S.U., Kurata, H. & Pechmann, S. Improvement of the memory function of a mutual repression network in a stochastic environment by negative autoregulation. BMC Bioinformatics 20, 734 (2019). https://0-doi-org.brum.beds.ac.uk/10.1186/s12859-019-3315-2

  • Received:

  • Accepted:

  • Published:

  • DOI: https://0-doi-org.brum.beds.ac.uk/10.1186/s12859-019-3315-2

Keywords