Intentional Biology is about the use of biology as technology. Humans have herded, farmed, used soil inoculants, and bred plants and animals for thousands of years, and now this effort is moving to the molecular level. Biology is a medium for creation. But because we don’t yet know enough to manipulate biological systems with either certainty or safety, Intentional Biology is also about the science we need to do to get to that point.
The portrayal of current genetic “engineering” as precise and well defined is inappropriate today. Few genes are known quantities and the process of introducing a foreign gene into an organism produces uncertainty about both the gene’s function and the function of the DNA into which it is inserted. Genetic engineering techniques are abysmally primitive, akin to swapping random parts between random cars to produce a better car. Yet this ignorance will fade.
To be clear: the portrayal of future efforts as “intentional” is not meant as disrespect to the biologists on whose shoulders the future stands. However, progress in understanding the molecular details of biological systems and making use of that knowledge will require new experimental techniques and, more importantly, new ways of thinking about what measurements to make and how to interpret the results.
When we can successfully predict the behavior of even simple biological systems, then building new things is the next step. Therefore, to begin, the scientific foundation of an Intentional Biology is a Predictive Biology, and improving human health, resource usage, and human interaction with the world around us will be but the initial benefits of this endeavor.
By ‘predictive biology’ we mean a theoretical structure and quantitative models, wherein the system is represented at a level of resolution accessible by experiment. Current experimental methods do not provide data of sufficient resolution to build or constrain quantitative, predictive models of biological systems. As in disciplines such as physics, chemistry, and engineering, theoretical and experimental tools will progress in concert, one occasionally outstripping the other.
Is this a Good Idea?
One of the purposes of this web site is to encourage dialogue about the possibilities enabled by biological technology. There is tremendous opportunity in the development of this technology, but we have already seen that it is far too easy to accidentally release genetically modified (GM) organisms into the environment. We hope that hastening the creation of an Intentional Biology with better coordination and understanding of the underlying science will minimize such occurrences. For instance, the present debate over genetically modified foods is more indicative of the poorly planned use of an immature technology than of a failure of the technology itself. At present we simply can’t predict the effects of tinkering with a system as complex as crops and their pests. But as with the progression of every other human technology, from fire, to bridges, to computers, biological engineering will improve with time.
Achieving an Intentional Biology, and specifically a predictive biology, is, in some sense, inevitable whether we like it or not. Money will continue to flow into the NIH and into biomedical research in general. The models and experimental tools that will be developed to understand human disease and generally improve human health are exactly the technologies that will enable an intentional biology. Because it is difficult to imagine that research directed at improving human health either will be or should be stopped, we should immediately begin discussing the implications of technology that will soon be developed.
The Need for Open-Source Biology
We need to consider both the endpoint, living in a world in which an intentional biology exists, and the process of getting there. What promise does the endpoint hold? What are the dangers? Is it possible to get distracted on the way there, and what happens if we do? What happens if we don’t make it all the way there?
“Endpoint” is, of course, a misnomer. There will never be a point where we are satisfied; there will never be a point where science or technology is finished. However, we can envision a set of circumstances and abilities that, when achieved, will suffice to define the “endpoint”. We will list some of those circumstances and some of the technological requirements, but the community as a whole will decide how to define what should be done and how to accomplish it.
The challenge is to make sure that the “endpoint” is somewhere we all want to live. Will the tools be available to everyone? How will they be used? As the tools are developed, inevitably at different rates, how can we ensure that they are not used improperly, or that they result in more good than harm? What will happen during the time period when we can radically manipulate biological systems, but do not have the technology to remedy mistakes? (Note that this describes our current situation.) It is this transition period that requires the most vigilance, and we suggest that we endeavor to move beyond it as quickly as possible. Hence the importance of Open-Source Biology. What better way to keep track of what is going on than for everybody to know? There is an existing example of this kind of effort to learn from: the open-source software movement. As Eric Raymond has written of the hacker and cracker communities, a knowledge base grows markedly faster when people share information. Stated this way it is an obvious point, but both the academic and industrial communities seem confused about it today.
Models, Simulations, and Design Tools
Models of large numbers of components differ from models of small numbers of components in both physical and computational structure.
Models are representations of our knowledge of a system. The behavior of some models, such as a Newtonian model of two masses connected by a spring, is understandable without significant computational aid. In contrast, describing the motion of a pendulum with two hinges along its length requires numerical simulation. Of direct interest to people trying to understand biological systems is the yeast mating signal transduction pathway. A model of the signal transduction portion of this pathway includes 17 proteins and, because experimental techniques do not yet exist to sufficiently constrain the model, ~25,000 potential interactions between those proteins. Understanding the molecular details of this system requires numerical simulation of its behavior.
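To give a flavor of what “numerical simulation” means here, consider the simplest possible biochemical model: two species binding reversibly, A + B ⇌ AB, under mass-action kinetics. The sketch below integrates this toy model with the forward Euler method; the rate constants and initial concentrations are illustrative inventions, not measured values, and a real pathway model would couple hundreds or thousands of such equations.

```python
# Toy mass-action kinetics for A + B <-> AB, integrated with forward Euler.
# All parameter values are illustrative, not experimental measurements.

def simulate(kf=1.0, kr=0.1, a0=1.0, b0=1.0, ab0=0.0, dt=0.001, steps=20000):
    """Return concentrations (a, b, ab) after integrating for steps*dt time units."""
    a, b, ab = a0, b0, ab0
    for _ in range(steps):
        flux = kf * a * b - kr * ab  # net rate of complex formation
        a -= flux * dt
        b -= flux * dt
        ab += flux * dt
    return a, b, ab

a, b, ab = simulate()
# At equilibrium the forward and reverse rates balance: kf*a*b ~ kr*ab.
```

Even this two-species system has no closed-form trajectory that a non-specialist would find illuminating; for the 17-protein yeast pathway, with thousands of candidate interactions, simulation is the only way to see what the model predicts.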
When models of biological systems develop to the point that they can be used to predict the effects of perturbations, they will become de facto design tools for systems of the constituent components. A design tool developed from a predictive model will not only provide a basis for building new systems out of components already described in the model, but will also provide a “device physics” description of those components. An understanding of biological components at this level is critical for building new components with new functions, whether through artificial selection, tinkering, or rational design.
Building technology based upon biological components requires the ability to get information into and out of systems of those components. In the near term, this biological input/output (Bio-I/O) capability will likely utilize electrical manipulation of cells and their biochemistry for input, and changes in engineered optical properties to read out the state of the system. These technologies will provide a toolbox of test and measurement techniques for building models and for the eventual design process.
Rational Design of Biological Systems
Deterministic vs. Stochastic Engineering
Conceptually, we imagine three approaches to building systems out of biological components. The first is to tinker, trying out combinations of parts to see what happens, using experience and iteration. The second is to let evolution take its course, with a guiding hand from educated humans. The third is to develop a framework for rational design. Broadly speaking, biological engineering to date has employed the first and, very recently, the second approach, but not the third. While these first two methods have produced interesting examples of useful proteins, the construction of multicomponent systems, for example sensors or information-processing systems, will likely require rational design.
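The second approach, evolution with a guiding hand, can be caricatured in a few lines of code as a mutate-and-select loop. In the hypothetical sketch below, “fitness” is simply the number of positions matching an arbitrary target sequence, standing in for a phenotype that a real experiment would have to measure; the population size, mutation rate, and target are all made up for illustration.

```python
# A toy directed-evolution loop: mutate a population of DNA sequences and
# artificially select the fittest each generation. Every parameter here is
# an illustrative assumption, not a description of any real protocol.
import random

TARGET = "ATGGCTAAGT"  # arbitrary stand-in for a desired phenotype
ALPHABET = "ACGT"

def fitness(seq):
    # Toy fitness: count of positions matching the target sequence.
    return sum(1 for x, y in zip(seq, TARGET) if x == y)

def mutate(seq, rate=0.1):
    # Point-mutate each base independently with the given probability.
    return "".join(random.choice(ALPHABET) if random.random() < rate else x
                   for x in seq)

def evolve(pop_size=50, generations=100, seed=0):
    random.seed(seed)
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 5]  # artificial selection
        # Keep survivors unchanged (elitism) and refill the population
        # with their mutated offspring.
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
```

The point of the caricature is its limitation: selection finds sequences that score well against whatever fitness function is applied, but it offers no “device physics” explanation of why they work, which is exactly what rational design of multicomponent systems will demand.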
Distributed Biological Manufacturing
Whereas most manufacturing today is highly centralized and materials are transported considerable distances throughout the assembly process, in the coming decades human industry will use distributed and renewable manufacturing based upon biology. Renewable manufacturing means that biology will be used to produce many of the physical things we use every day. In early implementations, the organism of choice will likely be yeast or a bacterium. The physical infrastructure for this type of manufacturing is inherently flexible: it is essentially the vats, pumps, and fluid handling capacity found in any brewery. Production runs for different products would merely involve seeding a vat with a yeast strain containing the appropriate genetic instructions and then providing raw materials. It is not clear how complex the fabrication task can be, and there will certainly be some materials and tasks better suited to other manufacturing techniques, but biology is capable of fabrication feats that no current or envisioned human technology can emulate. In some ways, this scheme sounds a bit like Eric Drexler’s nanotechnological assemblers, except that we already have functional nanotechnology — it’s called biology.
The transformation to an economy based on biological manufacturing will occur as technical manipulations become easier with practice and through a proliferation of workers with the appropriate skills. Biological engineering will proceed from profession, to vocation, to avocation, because the availability of inexpensive, quality DNA sequencing and synthesis equipment will allow participation by anyone who wants to learn the details. In a few decades, following the fine tradition of hacking automobiles and computers, garage biology hacking will be well underway.
Considerable information is already available on how to manipulate and analyze DNA in the kitchen. A recent Scientific American Amateur Scientist column provided instructions for amplifying DNA through the polymerase chain reaction (PCR), and a previous column concerned analyzing DNA samples using homemade electrophoresis equipment. The discussion was immediately picked up in a slashdot thread where participants provided tips for improving the yield of the PCR process. More detailed, technical information can be found in any university biology library in Current Protocols in Molecular Biology, which contains instructions on how to perform virtually every task needed in modern molecular biology. This printed compendium will no doubt soon join the myriad resources online maintained by universities and government agencies, thereby becoming all the more accessible. Open-source biology is already becoming a reality.
As the “coding” infrastructure for understanding, troubleshooting, and, ultimately, designing biology develops, DNA sequencers and synthesizers will become less expensive, faster, and ever simpler to use. These critical technologies will first move from academic labs and large biotechnology companies to small businesses, and eventually to the home garage and kitchen. Many standard laboratory techniques that once required a doctorate’s worth of knowledge and experience to execute correctly are now used by undergraduates with kits containing color-coded bottles of reagents. The recipes are easy to follow. This change in technology represents a democratization of sorts, and it illustrates the likely changes in labor structure that will accompany the blossoming of biological technology.
The course of labor in biological technology can be charted by looking at the experience of the computer and internet industries. Many start-up companies in Silicon Valley have become contract engineering efforts, funded by venture capital, where workers sign on with the expectation that the company will be sold within a few years, whereupon they will find a new assignment. The leading edge of the biological technology revolution could soon look the same. However, unlike in today’s integrated circuit industry, where manufacturing infrastructure costs have now reached upwards of 1 billion dollars per facility, the infrastructure costs for renewable biological manufacturing will continue to decline. Life, and all the evolutionarily developed technology it utilizes, operates at essentially room temperature, fueled by sugars. Renewable, biological manufacturing will take place anywhere someone wants to set up a vat or plant a seed.
Distributed biological manufacturing will be all the more flexible because the commodity in biotechnology is today becoming information rather than things. While it is still often necessary to exchange samples through the mail, the genomics industry has already begun to derive income from selling nothing but information about gene expression. In a few decades it will be the genomic sequence that is sent between labs, there to be re-synthesized and expressed as needed. It is already possible to synthesize sufficient DNA to build a bacterial genome from scratch in a few weeks. Over the coming decades that time will be reduced to days, and then to hours.
Intentional Biology and Development
As open-source biological manufacturing spreads, it will be adopted quickly in less developed economies to bypass the first world’s investment in industrial infrastructure. Given the already stressed state of natural resources throughout much of the developing world, it will not be possible for many of those countries to attain first-world standards of living with industrial infrastructure as wasteful as that of the United States. The developing world simply cannot afford industrial and energy inefficiency. A shortcut is to follow the example of the growing wireless-only communications infrastructure in Africa and skip building systems to transport power and goods. It is already clear that distributed power generation will soon become more efficient than centralized systems. Distributed manufacturing based upon local resources will save transportation costs, provide for simpler customization, require less infrastructure investment, and as a result will likely cost less than centralized manufacturing.