Apps for cells: from toys to therapies
Information processing is one of the salient features of Life. Computers and control systems were conceived and built following careful observation of nature. These efforts have generated a large body of abstract knowledge that handles information disentangled from its physical embodiment. Some of the resulting concepts, such as state machines, logic circuits, memory, and compression, have come full circle and are now permeating engineering efforts in the life sciences, with synthetic biologists, ourselves included, implementing them using biomolecules: genes, proteins, etc. The motivation behind this work is to create man-made biological devices that harness “living” information for our needs, and to improve our understanding of natural processes along the way. We have made substantial progress in recent years, from developing basic tools for programmed information processing in live cells to translating these technologies into medical and biotechnology applications. I will describe both the basic tools and their applications, focusing on the exciting possibility of creating truly personalized cancer therapies.
Systematic, reproducible simulation studies - the elephant in the room
Simulation studies are intricate processes in which model building, model refinement, and a wide variety of simulation experiments are closely intertwined to achieve the objective of the simulation study.
To support this process (semi-)automatically, the central artifacts of a simulation study, simulation models and simulation experiments, need to be made explicit and accessible. Furthermore, the relations between the artifacts, as well as their context (e.g., the objectives, data sources, and requirements), have to be considered. If this information is accessible and enriched with knowledge about modeling and simulation approaches or specific application domains, it becomes easier to guide the conduct of simulation studies and to replicate, interpret, and reuse the various (intermediate) artifacts and activities. The talk will show how artifact-based workflows, domain-specific languages, the collection and exploitation of provenance information, and templates for simulation experiments can play together to support the user throughout an individual simulation study and even beyond it.
Quantifying the evolutionary dynamics of human cancers
The fundamental evolutionary parameters that define cancer evolution, including the mutation rate and the selective advantage conferred by new mutations, remain poorly characterised. I will discuss how these parameters can be inferred from routinely available cancer genome sequencing data, via statistical inference of mathematical population-genetics models of clonal evolution, both in individual tumours and in large tumour cohorts. We find that evidence for strong positive selection is rare across cancer types, but when selection does occur, fitness increases can be as large as 50%. We also explore the dynamics of negatively selected mutations (neoantigens) in a growing tumour. These quantitative measurements of cancer evolution enable mechanistic forecasting of a tumour's future evolution.
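The intuition behind such inference can be sketched with a deliberately idealized toy model (synchronous cell doubling, no cell death, no selection; the parameter values are illustrative, not those of the talk): under neutral growth, the cumulative number of subclonal mutations M(f) found at variant allele frequency ≥ f grows linearly in 1/f, a signature that can be tested against sequencing data.

```python
import random

random.seed(0)

GENERATIONS = 12   # grow 1 -> 2**12 cells by synchronous doubling
MU = 10            # mean new mutations per daughter cell (assumed value)

def poisson(lam):
    """Knuth's algorithm; adequate for small lam."""
    L, k, p = 2.718281828459045 ** -lam, 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# In this idealized model, a mutation arising in one daughter at generation g
# is carried by 2**(GENERATIONS - g) of the final cells, i.e. it has
# subclonal frequency f = 2**-g.
muts_at_freq = {}  # frequency -> number of mutations at that frequency
for g in range(1, GENERATIONS + 1):
    n_daughters = 2 ** g
    muts_at_freq[2.0 ** -g] = sum(poisson(MU) for _ in range(n_daughters))

# Cumulative number of subclonal mutations with frequency >= f
M, total = [], 0
for f in sorted(muts_at_freq, reverse=True):
    total += muts_at_freq[f]
    M.append((f, total))

for f, m in M:
    print(f"f={f:.5f}  1/f={1/f:7.1f}  M(f)={m}")
```

Plotting M(f) against 1/f gives a straight line whose slope reflects the per-division mutation rate; departures from linearity are one way to detect subclonal selection.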
On the arc between optimality theories and data
Ideas about optimization are at the core of how we approach biological complexity. Quantitative predictions about biological systems have been successfully derived from first principles in the context of efficient coding, metabolic and transport networks, evolution, reinforcement learning, and decision making, by postulating that a system has evolved to optimize some utility function under biophysical constraints. Yet as normative theories become increasingly high-dimensional and optimal solutions stop being unique, it becomes progressively harder to judge whether theoretical predictions are consistent with, or “close to”, the data. I will illustrate these issues using efficient coding applied to simple neuronal models as well as to a complex and realistic biochemical reaction network. As a solution, we developed a statistical framework which smoothly interpolates between ab initio optimality predictions and Bayesian parameter inference from data, while also permitting statistically rigorous tests of optimality hypotheses.
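A classical, low-dimensional instance of the efficient-coding logic (a textbook illustration, not the models from the talk) is histogram equalization: for a single noiseless neuron with a bounded response range, response entropy is maximized when the input-output nonlinearity equals the cumulative distribution function of the stimulus ensemble. The stimulus distribution below is an arbitrary assumption for illustration.

```python
import bisect
import random

random.seed(0)

# Assumed stimulus ensemble: a skewed (exponential) distribution of inputs.
stimuli = [random.expovariate(1.0) for _ in range(100_000)]
sorted_s = sorted(stimuli)

def optimal_response(s):
    """Efficient-coding prediction for the neuron's nonlinearity: the
    empirical CDF of the stimulus ensemble. This allocates equal response
    range to equally probable stimulus intervals."""
    return bisect.bisect_right(sorted_s, s) / len(sorted_s)

responses = [optimal_response(s) for s in stimuli]

# If the nonlinearity is optimal, all response levels are used equally often:
# check occupancy of the ten response deciles.
bins = [0] * 10
for r in responses:
    bins[min(int(r * 10), 9)] += 1
print(bins)
```

The flat decile occupancy is the one-dimensional case where "the" optimum is unique and easy to compare to data; the difficulty the abstract points to arises when the optimization is over thousands of coupled parameters.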
From structure to function: mechanisms underlying neuronal population dynamics in C. elegans
Large-scale activity recordings across various model organisms recurrently reveal coordinated neuronal population dynamics, i.e. the activity of many neurons is orchestrated to produce low-dimensional dynamical network states. How neurons coordinate their activity to produce such dynamics, however, has remained unknown. The nematode worm C. elegans is an ideal model to study these problems. Its nervous system has just 302 neurons and all synaptic connections between them have been fully mapped. We developed a calcium imaging approach to record the activity of nearly all neurons in its brain. Our data reveal nervous-system-wide neuronal population dynamics that are reminiscent of a limit-cycle attractor. We further characterized these dynamics and found that they represent motor commands for an action sequence. In our current work, we take a combined graph-theoretical and experimental approach to understand which features in network connectivity are critical to produce population dynamics. Surprisingly, we found that direct connection strength only weakly predicts how strongly neurons interact. Conversely, higher-order network features, such as rich-club architecture, input similarity, and over-represented connectivity motifs, seem critical for coordinated neuronal dynamics. Guided by these analyses, we perturbed global network architecture experimentally, with the result that brain-wide dynamics break down into smaller units of neuronal sub-ensembles. Based on these data, we propose that neuronal population dynamics arise as a function of dynamically active neurons that are bound to global population states via these critical network connectivity features.
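One of the higher-order features mentioned, rich-club architecture, has a simple quantitative form: the rich-club coefficient φ(k) is the fraction of possible edges actually present among nodes of degree greater than k. A minimal sketch on a toy graph (an invented hub-plus-periphery network, not the real C. elegans connectome):

```python
from itertools import combinations

# Toy undirected network: a densely interconnected "rich club" of hubs
# (nodes 0-3, fully connected) plus a sparse periphery.
edges = {frozenset(e) for e in [
    (0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),   # hub core
    (0, 4), (1, 5), (2, 6), (3, 7), (4, 8), (5, 9),   # periphery
]}

degree = {}
for e in edges:
    for v in e:
        degree[v] = degree.get(v, 0) + 1

def rich_club(k):
    """Rich-club coefficient: fraction of possible edges present
    among nodes with degree > k (None if fewer than 2 such nodes)."""
    club = [v for v in degree if degree[v] > k]
    if len(club) < 2:
        return None
    possible = len(club) * (len(club) - 1) / 2
    present = sum(1 for u, v in combinations(club, 2)
                  if frozenset((u, v)) in edges)
    return present / possible

for k in range(1, 5):
    print(k, rich_club(k))
```

Here φ(k) rises to 1.0 once only the four hubs remain, the signature of a rich club; in practice the raw coefficient is normalized against degree-preserving random graphs before interpreting it.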