Blogs that need to be edited

A Bag Full of Phones

Remember when we all had flip phones? It was only a few years ago, but it seems like a different age. Movies and old TV shows can be “carbon-dated” by the cellular technology they use. Even if the actors wear modern hairstyles and clothing, it’s easy to tell the show is “pre-iPhone” by observing their cell phones.

I remember dreaming about a “convergence device”. At the time, a BlackBerry was the best example: it combined your email, phone, and pager, and it had a browser, but it was crude. Sometime around the summer of 2007, my flip phone died and I decided to look at the much-hyped iPhone. I went to an AT&T store, picked up the device, and could not believe how incredible the experience was. I immediately dumped Verizon and became an AT&T customer. It was that good.

The reason Apple was so successful with the iPhone is that it created a platform (an OS) and an App Store that developers could fill with a never-ending stream of useful applications. Before long, you could retire your GPS, your camera, your MP3 player, and a myriad of other single-purpose devices.

Recently, I’ve been comparing this experience to the current state-of-the-art software offerings for enterprises. If you are an IT manager in a typical life sciences company, you’re expected to build a slew of single-purpose IT systems sporting catchy acronyms such as ERP, LIMS, CRM, ELN, CAPA, LMS, EDMS, and on and on. All these systems are composed of the same basic components: a database, a middleware layer, and an application server. But we’ve been conditioned to think that standing up single-purpose computer systems is state-of-the-art.

When you build single-purpose systems, you waste resources by creating redundant technology stacks. But more insidiously, you strand information. Or even worse, you replicate information in silos.

Wouldn’t it be great if your company could invest in a multi-purpose platform that addressed a wide variety of information management needs? How about being able to select applications from a catalogue? This will become the new state-of-the-art in the next few years. Just as the mobile phone market was upended by Apple, there will be a similar upheaval in enterprise computing once the advantages are better understood.

In the near future, IT departments won’t build dozens of technology stacks, each devoted to a single-purpose function. That makes as much sense as asking you to carry around a bag full of phones.
Get the Jump on your Life Science Competitors

I’m intrigued by the velocity of web-based companies. I just read an account of Instagram: founded in late 2010, the company sold itself to Facebook a mere 551 days later for $1B. Companies that compete on the web have to move quickly, adapt to customer preferences, and execute pivots in a matter of weeks.

In my industry, Life Sciences, technology and techniques are developed more slowly. This is understandable. We’re developing drugs and products that directly affect patient health. And Life Sciences companies must abide by FDA regulations that ensure product (and personnel) safety.

Jailbreak: Get Your Organization Out of “Data Jail”

I just read an article in the Harvard Business Review by Aaron Levie entitled How to Compete When IT Is Abundant. He posits that “We’re in the early days of yet another seismic shift in IT, this time driven by mobile devices, the cloud, and a demand for technology experiences that match the simplicity of the consumer world.”

Agreed. People are starting to wonder why the clunky software they use at the office can’t be more like web software.

Free your QA Department from 100% inspection

Life Sciences companies are regulated by the FDA and must abide by Good Manufacturing Practices (GMP), Good Clinical Practices (GCP), and Good Laboratory Practices (GLP). A cornerstone of these regulations is to establish an internal oversight function, typically called Quality Assurance (QA). QA has a number of responsibilities within a typical company, including the review of execution records to ensure completeness and consistency. If the company uses paper execution records or captures raw data in spreadsheets, this review can be onerous. It is not unusual for records review to be the rate-limiting factor in a regulated factory or lab.

In a recent project, our customer was automating an immunohistochemistry process. The new design used a liquid handling robot to pipette samples into 96-well plates using a serial dilution scheme. The plates were then loaded onto a plate reader, where the Optical Density (OD) of each sample was measured. These data were then transferred into a locked-down spreadsheet template for reporting.
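The arithmetic behind a serial dilution scheme like this can be sketched in a few lines; the starting concentration and two-fold dilution factor below are illustrative assumptions, not the actual assay parameters:

```python
# Hypothetical sketch: expected concentrations for a two-fold serial
# dilution across one row of a 96-well plate. The starting value and
# dilution factor are invented for illustration.
def serial_dilution(start_conc, factor, wells):
    """Return the expected concentration in each successive well."""
    return [start_conc / (factor ** i) for i in range(wells)]

row = serial_dilution(100.0, 2, 12)  # 12 wells in one plate row
# row[0] == 100.0, row[1] == 50.0, row[2] == 25.0, ...
```

Knowing the expected concentration at each well is what lets downstream software compare the measured OD values against the intended dilution series.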

When I interviewed the lab manager about expected labor savings, I was surprised to hear that all the robotics and automation were not expected to yield efficiencies. It turns out that people are pretty good at performing sample handling tasks. According to the lab manager, the efficiency gain would come from the integrated bar code reader on the liquid handling robot. The bar code reader provided an audit trail that could be inspected by software to ensure samples were diluted, loaded, and measured correctly. All the efficiency gains came from relieving QA of the need to inspect the execution records.
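The software inspection of an audit trail described above can be sketched as an exception report; the record layout and sample IDs here are invented for illustration, not taken from the actual system:

```python
# Hypothetical sketch of exception-based review: compare the robot's
# bar-code audit trail against the expected sample order and surface
# only the mismatches for QA.
def audit_exceptions(expected, scanned):
    """Return (position, expected_id, scanned_id) for each mismatch."""
    exceptions = []
    for i, (want, got) in enumerate(zip(expected, scanned)):
        if want != got:
            exceptions.append((i, want, got))
    # A length mismatch means samples were skipped or double-scanned.
    if len(expected) != len(scanned):
        exceptions.append((min(len(expected), len(scanned)), None, None))
    return exceptions

issues = audit_exceptions(["S1", "S2", "S3"], ["S1", "S3", "S2"])
# issues == [(1, 'S2', 'S3'), (2, 'S3', 'S2')]
```

A clean run returns an empty list, so QA reviews only the exceptions rather than every record.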

Time and time again, we discover that QA has to perform a painstaking review of all execution records before a product can be used in a regulated process or released for sale. The problem can be particularly acute in the microbiology areas, where clean room and water system sampling is performed. These tasks can generate an avalanche of samples, and you will often find QA buried under the pile. It is common for product release to be held up because the water has not yet been released by QA. Talk about the tail wagging the dog!

Wouldn’t it be great if your QA department didn’t have to perform 100% inspection? What if they could focus on the exceptions? Imagine how your company’s operations could be transformed. Imagine how turn-around time could be reduced. And this would mean dramatically increased throughput for your lab or factory.

So how should you approach such a project? At BioIT Solutions, we start at the tail end of the process. We look at the final report and identify the data elements that comprise it. Then we follow each data element back to its source, paying particular attention to any transformations. We identify the critical control points: the points where the data can be perturbed. And we establish controls, either technical or procedural, around each of them.
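The trace-back step can be illustrated with a toy lineage map; the element names and transformations below are hypothetical stand-ins, not BioIT’s actual data model:

```python
# Hypothetical sketch of tracing a reported value back to its raw
# source. The lineage map (element -> upstream element) is invented
# for illustration; in practice it would come from process analysis.
lineage = {
    "reported_titer": "normalized_od",   # transformation: reporting
    "normalized_od": "raw_od",           # transformation: blank-subtraction
    "raw_od": None,                      # raw instrument reading
}

def trace_to_source(element, lineage):
    """Walk a report element back through each transformation."""
    path = [element]
    while lineage.get(element) is not None:
        element = lineage[element]
        path.append(element)
    return path  # each hop is a critical control point to secure

print(trace_to_source("reported_titer", lineage))
# -> ['reported_titer', 'normalized_od', 'raw_od']
```

Every edge in the walk is a place where data can be perturbed, and therefore a candidate for a technical or procedural control.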

We aim to establish an intuitive, navigable supply chain of information: one that allows scientists and regulators to follow reported results back to the raw data and that discloses the experimental conditions. By establishing an information supply chain, you can free your QA department from 100% inspection and unleash the potential of your company.

A Blatant Advertisement for the BioIT Platform

I’ve heard people say that you shouldn’t overtly blog about your product or service because it might turn people off. They might consider it spam and discount your views and opinions. Well, for this blog post, I’m not going to heed their advice. I am going to tell you why I think our product is remarkable. Here goes … It saves you money.

Think Horizontally

In a previous blog post, I suggested that the only thing of lasting value that IT professionals create is high-quality data. But how, exactly, do you create such an asset? In this article, I’ll explain our method.

The strategy is simple. Get as many eyeballs looking at your data set as possible. Try to get folks from multiple functions (e.g., accounting, operations, sales) consuming the data. Try to get people at different levels of the organization (e.g., line workers, supervisors, managers, executives) to rely on the fidelity of the data.

The Myth of the Waterfall

There are many software development lifecycles, but probably the most widely used is the Waterfall methodology. I first encountered it as a young software engineer in the form of MIL-STD-2167A. The waterfall method instructs you to develop a detailed requirements specification, followed by a design specification, and then move on to software implementation and testing. This method is appropriate and necessary when you’re building big systems, especially when components are being sourced from multiple vendors. But increasingly, this technique is causing software projects to fail.

The fundamental flaw in Waterfall is the myth that you know “the requirements” up front. This is difficult in many industries, but in ours, life sciences, it’s virtually impossible. Scientists experiment. They form a hypothesis, design an experimental method, capture data, and make inferences about whether the data support or refute their original thesis. And then they do it again. They let real-world experience guide future experiments. We have adopted this technique when building scientific software, and it has yielded terrific results and high affinity with our customers. We call this technique Functional Prototyping.

But to build computer systems using this approach, you must trade “certainty” for “ambiguity”. You must be comfortable proceeding without knowing exactly what you will build. You must admit that you don’t “know” all the requirements. You must trust that, together with your customer, you will make the appropriate trade-offs between functionality, organizational impact, and cost.

At BioIT Solutions, we use functional prototyping as a requirements-gathering tool. We can rapidly configure a working prototype that captures or analyzes actual data. The customer can “see and feel” the actual system in situ. The “real requirements” are revealed through this hands-on approach.

Contrast this with the typical waterfall project, which expects the customer to read a requirements specification to determine completeness. We have been held hostage by inappropriate, incomplete, or just plain wrong requirements once they were approved in a requirements spec. Even when the customer agreed they didn’t understand the document, we often needed slow, costly change orders to enact the desired change.

If you’re in the life sciences industry and have been underwhelmed by “Big Bang” software projects that have failed to deliver on their promises, you may want to examine the methodology, not just the technology. You just might find that the myth of the waterfall was the real culprit.
Bio Cloud Operating System

Here is an introduction to how we have implemented a new vision for Biotech Informatics:

1. Combining a classical standard architecture with advanced networking tools allows for powerful, flexible, and very efficient deployment of the most complex workflows. This is crucial when workflows are still under technological development, a perpetual situation in some active R&D environments.

2. Roles-based interaction™ makes complex, collaborative business processes both controllable and observable. The resulting work environment is uncluttered and simple for each user, while process managers have custom views of all tasks and can adjust and control the process.

3. Innovative apps capture subject matter expertise and allow for streamlining the enterprise, as opposed to cobbling together software from multiple vendors, many without biotech expertise. The built-in subject matter expertise in BioIT apps saves months in deployment and allows for hands-on process development and input by all stakeholders.

Our model can be graphically summarized: