Adam Czajczyk
Business editor, Biotechnologia.pl


Biotechnology has come a long way since 1919. There’s no way that Karl Ereky, who coined the term, could have imagined everything we are capable of achieving today. Modern times require modern technologies, and so here it comes: cloud computing, often referred to simply as “the cloud”.

The problem with each and every large-scale scientific research project is the enormous amount of data it generates. The more advanced the knowledge we try to gain, the more information is produced. And since almost all of it is meant to be analyzed in a variety of ways, it quickly becomes a real challenge for a scientist who expects reliable results rather than a long process of transforming, calculating and comparing data.

Rome wasn’t built in a day
Since the Human Genome Project (HGP) was completed in 2003, genome sequencing has become fast and relatively cheap. Last year also brought some quite impressive announcements from hardware companies claiming to offer machines capable of sequencing an entire human genome within a single day and for less than $1,000. However, this technology doesn’t solve one major problem: extracting any meaningful information from the data it provides can still take weeks or even months.
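To get a feel for why the analysis lags so far behind the sequencing itself, a back-of-the-envelope estimate of the raw output helps. The figures below (3 billion base pairs, 30× coverage, one byte per base) are typical round numbers used for illustration, not the specification of any particular machine:

```python
# Rough estimate of the raw data produced by one whole-genome sequencing run.
# All figures are illustrative round numbers, not vendor specifications.

genome_size_bp = 3_000_000_000   # ~3 billion base pairs in the human genome
coverage = 30                    # each position is read ~30 times for reliability
bytes_per_base = 1               # one byte per base call (quality scores ignored)

raw_bytes = genome_size_bp * coverage * bytes_per_base
raw_gigabytes = raw_bytes / 1e9

print(f"Raw sequence data per genome: ~{raw_gigabytes:.0f} GB")
# → Raw sequence data per genome: ~90 GB
```

Tens of gigabytes per genome, multiplied by every patient or sample in a study, is exactly the kind of volume that turns weeks of sequencing into months of computation.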

Genetic research is probably the most costly kind of biotechnological experiment in terms of the computational power needed. It’s almost impossible for a human being to run through all the statistical analysis without a powerful computer equipped with highly specialized software. And even if it were possible, it would probably take years, not to mention an unacceptable error rate.

One of the biggest innovations of recent years is called “optogenetics”. So far it has been used with great success for various purposes. Still, the technique itself is at an early stage of development and needs a lot of attention and research to become truly useful and popular. Although optogenetics involves a wide range of genetic manipulations through the application of light and chemistry to a living organism, many experiments use contemporary electronic devices to establish a reliable two-way connection. As an example, take the American computer-brain interface by Kendall Research. This tiny device can transmit data between the brain of a mouse and a computer. Once again, the amount of data generated this way is enormous, and as the technology spreads it will grow exponentially.

That’s not all. Modern biotechnology is also about food and medicine. Would it be possible to develop new drugs without extensive simulations? The answer is simple: no. Both food and drugs require a lot of computer-powered analysis and calculation. Synthesis of new compounds is likewise based on what a scientific computer can or cannot calculate, and on how much time it would take.

On the other hand, we’re witnessing impressive achievements in the fields of transplantology and prosthetics. We’re becoming quite used to the thought of artificial organs in our bodies. But there’s a strict relationship between these advances, electronics and information technology. No modern prosthesis could be built without computerized assistance. What’s more, it probably couldn’t even function properly!

There’s another great example. At the end of 2013, Dr Leslie Saxon from the University of Southern California is going to publicly launch a new social website at everyheartbeat.org. The primary focus of this project is to gather as much medical data about our beating hearts as possible. At this very moment the team already has over 20 million reports from all over the world in its collection. These reports come from various heart devices, such as pacemakers equipped with microchips and WiFi transmitters. Many of them were recorded live, e.g. during life-saving interventions.

The goal of the project is to collect this information and, using advanced statistics, mathematics and medical knowledge, build a system that might be able to predict strokes, heart attacks or even genetic diseases. It is hard to even imagine how much storage space will be needed and how many calculations will have to run constantly once the project starts.
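Even a crude estimate shows the scale. The 20-million figure comes from the project itself; the average size of a single device report below is purely an assumption for the sake of illustration:

```python
# Rough storage estimate for a collection like everyheartbeat.org.
# The report count is from the article; the per-report size is assumed.

reports = 20_000_000          # reports collected so far (per the project)
kilobytes_per_report = 100    # assumed average size of one device report

total_gigabytes = reports * kilobytes_per_report / 1e6
print(f"Estimated storage for today’s reports alone: ~{total_gigabytes:.0f} GB")
# → Estimated storage for today’s reports alone: ~2000 GB
```

A couple of terabytes just for the archive so far, before any live streams or analysis results are counted, and the collection only grows from here.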

So, as we can see, modern biotechnology is not only about biology, chemistry or medicine. The ties between life and computers are getting stronger every day. Some IT specialists say we have already crossed the border that divides nature from artificial technology. That means it’s about time for us to reach beyond traditional tools and ways of doing things, just to speed up development and lower the costs of biotechnological research.

It’s all about money
Huge amounts of data and the hunger for complicated analysis and calculation make contemporary biotechnology strictly dependent on computer efficiency and performance. It’s hard to imagine a modern laboratory without advanced digital devices. From the simplest microscope, through office equipment, up to large industrial-grade scientific computing grids, everything relies on computing power.

The crucial point is that data-storage space is always limited. And so is time.

Some experiments are time-sensitive, some information simply changes over time, and most financial grants for scientific teams also come with a fixed timeframe. This means that if you’re unable to complete the scheduled tasks on time, whether it’s just filing a report or finishing a calculation, your research is simply doomed to fail!

The consequence is quite obvious: the more advanced the research, the better the IT system it needs. This leads to a straightforward conclusion: new investments in IT infrastructure are necessary to successfully complete an experiment. And while contemporary electronics keeps getting cheaper, it’s still too expensive for most labs (especially academic ones) to stay up to date.

The biggest operating – yet still under construction – supercomputer in the world is built out of more than 18,000 compute nodes and cost around a hundred million dollars. It’s owned by the Oak Ridge National Laboratory in the United States, and its main purpose is to serve as a powerful scientific tool. Some of its research includes biotechnology.

Such a machine is completely out of reach for most labs. ORNL is funded by the US government, which is probably the only thing that explains how the lab could spend so much money.

In Poland we also have another problem. Most of our academic labs are poorly financed by their alma maters, so there’s no possibility of expanding IT infrastructure whenever it would be suitable. We have to use what we get. There might be a solution, though. If there’s a way to stay up to date with computational power while lowering costs, why not use it?

The cloud
Basically, the term “cloud computing” refers to the delivery of storage capacity and computational power as a service. It means we don’t need a state-of-the-art personal computer to perform advanced tasks; everything is leased to us over the network. A simple example is popular accounting software.

Typically, a company buys expensive applications and hardware, sets up dedicated servers and employs an IT technician to take care of it all. The cloud reduces all of that to a simple operation, such as registering the company on a particular website. That’s all. You simply use it and pay as you go.

Everything that’s “in the cloud” is scalable and, let’s say, fluid. Do you need more storage space? No problem, just slide a slider on the website. Do you need more computing power? No problem, just flip a switch on the website. That’s all. No up-front payments, no long-term commitments, no worries about hardware issues.

From a technical point of view, the cloud is a solution that can consist of many different computers, applications and layers. The user doesn’t have to know and doesn’t have to care. It just works.

So, what’s the big buzz about?
For a laboratory, cloud computing is possibly the best solution. It makes all IT costs significantly lower and lets scientists do their work instead of fighting an uneven battle with unreliable computer hardware and a not-so-willing-to-invest administration.

It also solves the problem of the enormous amounts of data mentioned before. Storing and processing information is easy and relatively cheap in the cloud, because instead of buying expensive hardware you use only what you need, exactly when you need it. And with prices starting from a few cents per hour, it’s really worth a look.
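To see how pay-as-you-go changes the math for a lab, here is a toy cost comparison. The hourly rate, the server price and the usage pattern are all made-up illustrative numbers, not any provider’s actual pricing:

```python
# Toy comparison: buying an in-house compute server vs renting cloud
# instances on demand. All prices and usage figures are hypothetical.

server_price = 20_000.0        # up-front cost of an in-house server (USD)
cloud_rate_per_hour = 0.10     # rented instance, "a few cents per hour" (USD)

# A lab that runs heavy analyses one week per month, eight hours a day:
hours_per_year = 7 * 8 * 12

cloud_cost_per_year = hours_per_year * cloud_rate_per_hour
years_to_break_even = server_price / cloud_cost_per_year

print(f"Cloud cost per year: ${cloud_cost_per_year:.2f}")
print(f"Years of such usage to match the server price: {years_to_break_even:.0f}")
```

The point is not the exact figures but the shape of the result: for bursty, occasional workloads, which is exactly how most lab computation looks, renting by the hour can undercut owning hardware by orders of magnitude.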

Anyway, the cloud is now conquering the internet. It’s about time to bring it into laboratory practice. It can really speed up research and make experiments easier and cheaper.

The only risk is that the time may come when you’ll have to point a finger at a fellow scientist and say: “Hey, you, get out of my cloud!”