

Life Sciences and Serverless Tech. A Step Too Far?

By The TransPerfect Life Sciences Team

TMF Expert Insights

Thursday, February 15, 2018 | 7:29 PM


At a recent developer conference hosted by one of our vendors, the topic of serverless technology came up again and again. It is an innovation whose time has almost come, as more and more life sciences organizations come to rely on the powerful Cloud services now available, running the gamut from NoSQL databases and Cloud caching to service-oriented architectures and, of course, the hot topic of machine learning.

Serverless technology is the latest fundamental shift in Cloud Platform as a Service (PaaS) capabilities. An attendee at the developer conference defined it as "an application service that runs on the Cloud without paying for a specific server's uptime." It's more than just a new pricing model; serverless is a move to an even more granular way to pay for computing resources as utilities, shifting the unit of billing from server uptime to CPU cycles, memory, and network usage.
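To make the idea concrete, here is a minimal sketch of what such a function might look like as an AWS Lambda handler in Python. The handler name, event field, and greeting logic are purely illustrative; the point is that you pay only for the compute this code consumes while it runs, not for a server waiting for it.

```python
# Minimal, illustrative AWS Lambda handler (Python runtime).
# There is no server to provision: the platform runs this function
# on demand and bills per invocation, not per server-hour.

def handler(event, context):
    # 'event' carries the invocation payload; 'context' exposes
    # runtime metadata such as remaining execution time.
    name = event.get("name", "world")  # hypothetical payload field
    return {"statusCode": 200, "body": f"Hello, {name}"}
```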

Of course, servers will always exist behind the scenes, but they are brought up and torn down on an as-needed basis. The obvious examples here are Amazon Web Services' Lambda or Microsoft Azure's Functions. Similarly, other innovations exist as shared services provided as temporary slices of larger, multi-tenant systems. Examples include Azure's Table Storage and Amazon services such as Simple Storage Service (S3), Simple Queue Service (SQS), and Simple Notification Service (SNS). They are true PaaS services, operating as components of a much larger, well-oiled machine.
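As an illustration of how thin the interface to such a shared service is, the sketch below creates and uses an SQS queue in a few API calls. The queue name and message body are hypothetical, and the snippet assumes AWS credentials and a default region are already configured; the queue itself is a slice of a multi-tenant system whose underlying servers you never see.

```python
import boto3  # AWS SDK for Python

# Hypothetical queue name; assumes AWS credentials and a default
# region are configured in the environment.
sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="etmf-events")["QueueUrl"]

# Publish and consume a message; the hardware serving the queue is
# shared infrastructure you never provision or patch.
sqs.send_message(QueueUrl=queue_url, MessageBody="document.uploaded")
reply = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
print(reply.get("Messages", []))
```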

How do these important innovations affect life sciences? As the Cloud is slowly adopted by life sciences organizations, the speed bumps have been frequent; the industry has rather special requirements that must be met. They are as follows:

  • Security: Is the Cloud vendor's security really better than your own? How do you ensure your system is defended and its software patched against data theft or damage when it is not even clear which system is actually executing your code?
  • Data Privacy: Where is your data housed, and how do you ensure that sensitive information stays secure? Which data privacy and management laws apply when data is replicated across regions automatically?
  • Infrastructure and Operations: How do you create your Cloud environment, and how do you test and qualify it? Is that really any different from testing your application? And how do you ensure quality operations when much of the burden is shifting to the Cloud vendor?
  • Software Quality: How do you test and validate your entire application, and once validated, how does it stay that way, particularly if it runs as a function within a larger platform over which you have little control?
  • Vendor Audits: In this increasingly complex and co-dependent software supply chain, how do you enforce quality up and down the chain with your vendors? What if a vendor does not allow an audit of many of the services in use?

This is by no means a comprehensive list. As you might imagine, serverless technology ratchets the debate up to a higher degree of complexity. In this scenario none of the hardware is really yours, or even your vendor's. If your vendor's hosting environment is entirely serverless, the only thing you can be said to 'own' is your data, and if that data lives in DynamoDB or ElastiCache, for example, it may span multiple availability zones or regions on hardware that you do not control at all, even in the virtual sense. If the service is implemented as Azure Functions running against Azure Tables, you are running on a slice of CPU cycles on virtualized hardware that is never yours.
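To see how little hardware you actually touch, consider this hedged sketch of creating a DynamoDB table with boto3; the table name is hypothetical and the snippet assumes configured AWS credentials. Nothing in the request names a server, an instance type, or a physical location: you declare a schema and a billing mode, and AWS decides where the data actually lives, which is exactly the ownership question raised above.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Hypothetical table for illustration. The request declares keys and
# a billing mode only; no server or physical location is specified.
dynamodb.create_table(
    TableName="trial-documents",
    KeySchema=[{"AttributeName": "doc_id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "doc_id", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",  # pay per read/write request
)
```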

There's a bright side to all this, of course. These serverless technologies let companies do quickly what used to require entire IT divisions or major consulting projects. Need to perform big data analysis on a dataset? Just call an API to instantiate those services. Need to leverage machine learning for a critical business use case? You can do it with a single developer. It is tempting, to be sure.
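As one hedged example of that "single developer" claim, the sketch below calls Amazon Comprehend, a managed machine-learning service, to score the sentiment of a sentence. The sample text is invented and the snippet assumes configured AWS credentials; the point is that no model training, cluster, or data science team is involved, just an API call.

```python
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

# One API call to a managed machine-learning service; the sample
# text is invented for illustration.
result = comprehend.detect_sentiment(
    Text="The site initiation visit went smoothly.",
    LanguageCode="en",
)
print(result["Sentiment"])  # e.g. "POSITIVE"
```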

These are difficult questions for life sciences IT and Quality, with no definitive answers. Suffice it to say that the topic matters: with final responsibility remaining with the regulated company, serverless computing continues to push the envelope of risk, and sponsors and their vendors must make very careful decisions when defining the development, hosting, and operation of their application services.

At TransPerfect Life Sciences Solutions, we help Sponsors and CROs with their e-clinical processes. No matter what, we keep your eTMF data and documents secure and compliant, and we will always ensure the confidentiality, integrity, and availability of our service.
