Blog posts that are not used

BioIT Solutions moves to RagingWire

Last Thursday, I visited the BioIT Solutions Data Center at RagingWire in Ashburn, Virginia. RagingWire is a California-based company that has established a presence in the thriving Internet corridor in Ashburn. We recently decided to move our computing infrastructure to RagingWire and wanted to see how the facility was built and learn a little about their operations. Our account manager, Don Shopp, was kind enough to give us a tour of the facility and point out some of the features that set them apart.

Carrier Neutral
One of the big draws was the fact that they are not tied to one particular service provider. Carrier-owned data centers derive their revenue from bandwidth, which makes the prospect of traffic growth quite expensive. By opening up the data center to a number of carriers, you promote competition and increase resiliency.

The Cloud: Don’t Miss the Real Opportunity

The Cloud is becoming more accepted by business. Provisioning speed, reduced startup costs, and demand elasticity are now well understood by IT departments, and this is tipping the balance toward cloud adoption.

It is quite painless to stand up a host on Amazon’s Elastic Compute Cloud (EC2) and then install the licensed software that used to live in the company’s data center. Alternatively, most software vendors now offer off-premises installations so that data center build-out can be avoided.

But thinking of the cloud as a cost-savings play is missing the point. By installing your company’s software on the Internet, you’ve opened a whole new set of possibilities that didn’t exist when it was trapped inside the LAN. The software can now become a collaboration portal: an on-ramp for customers that expresses your company’s brand and increases customer affinity. Such a portal can provide self-service, which affords convenience and lessens the need for call centers and other customer support.

Does Your System Own the Results?

The Life Sciences business is an Information Business.  We sell small containers of material at a considerable price.  But without supporting information representing the product’s identity, purity and potency, we are unable to sell it.  So in essence, Information IS our Product. That’s why companies spend so much time and energy ensuring that the information is correct.  Usually, there is a document or report that attests to the authenticity of the product or service.  In the pharmaceutical industry, this document is called the Certificate of Analysis (COA). In the diagnostics industry, it’s called a Test Report (TR).  In both cases, the company must provide strong traceability from test result back through the testing process so they can prove the veracity of the results.

During system implementations, we generally recommend that final report generation be part of the initial project. For instance, when I work with a customer on a pharmaceutical LIMS, I always steer them towards COA creation in the first version. Once the system “owns” the results, it becomes possible to perform Statistical Process Control (SPC) on the results and catch trends early. If the system doesn’t own the actual results, its usefulness is undermined. A system that does not contain the results can start to atrophy because the staff goes elsewhere for answers.
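To make the SPC point concrete, here is a minimal sketch of the kind of trend check that becomes possible once the system owns the results. It is illustrative only: the potency numbers are made up, and the three-sigma band is a textbook control-chart rule rather than a description of any particular LIMS feature.

    def control_limits(results):
        """Return a simple mean +/- 3 sigma control band from historical results."""
        mean = sum(results) / len(results)
        variance = sum((x - mean) ** 2 for x in results) / (len(results) - 1)
        sigma = variance ** 0.5
        return mean - 3 * sigma, mean + 3 * sigma

    # Hypothetical potency results (% of label claim) already captured by the system.
    history = [99.1, 100.4, 98.7, 101.2, 99.8, 100.1, 99.5, 100.9]
    lower, upper = control_limits(history)

    new_result = 94.6
    if not (lower <= new_result <= upper):
        print(f"Result {new_result} is outside ({lower:.1f}, {upper:.1f}); investigate the trend")

If the results live only in scattered spreadsheets or PDF reports, even a simple check like this becomes a manual chore instead of an automatic by-product of the system.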

The other reason we like to produce the final report is that it represents our customer’s product. If possible, we like to provide direct access to the final report using a portal. When we provide direct report access, customer satisfaction increases and the customer support workload goes down, freeing up staff for science.

One of my clients used to fax Test Reports to a physician’s office at the end of each business day.  This entailed printing the report to paper and feeding the fax machine for up to an hour.  Then we instituted a secure daily email that had links back to the online final reports.  Now, the physician’s office receives the email and can self-serve.  And my client’s staff can work on higher value tasks.  This would not have been possible if the system didn’t own the results.

So when you are designing your next system, don’t discount the value of final report generation.  I encourage you to include this feature in the initial version.  Because if your system can’t generate the final report, it doesn’t “own” the results.

Image courtesy of nokhoog_buchachon and freedigitalphotos.net

This Week’s Wake-Up Call

When it comes to the web, hosted solutions, and (dare I say it) “the Cloud”, I’m all in. Unlike some of my friends and colleagues who are suspicious of moving their data offsite, I take the opposite stance. I use Gmail for my mail, Dropbox for shared files, and even a web-based password store. Using these tools, I can move seamlessly from device to device without having to synchronize anything. I can even lose a hard drive and not miss a beat.

Where Is Your Lab’s Informatics Supply Chain?

Human Genome Sciences (HGS), one of Maryland’s biotech anchors, was recently purchased by GlaxoSmithKline (GSK). HGS was purchased because of its drug pipeline but, not so long ago, it was renowned for sequencing human genes and amassing a treasure trove of intellectual property. I had the great honor of being part of the HGS informatics team during that time.

In the early days, we were confronted by a torrent of gene sequences pouring forth from a room full of ABI sequencers. It was important to capture and catalog each gene sequence, making sure it was associated with the correct tissue, disease, and developmental stage. But the sequencing instrument’s control software wasn’t designed to enforce our indexing rules. So we had to provide that control.

This problem is not unique.  We continually encounter it when building scientific pipelines. This is because instrument builders don’t understand they are part of an informatics supply chain.  They envision white coat-clad scientists meticulously typing instructions into their instrument’s control software.  These imagined scientists wait expectantly for the output and process it manually when it arrives.  But if the instrument is part of an automated pipeline, it often lacks the necessary control features.

One of the first things we do when automating scientific processes is to establish the informatics supply chain. As mentioned above, this often entails augmenting the participating instruments’ control software, either with technical or procedural controls. Once established, this supply chain acts as a “conveyor belt,” ensuring that high-quality metadata is delivered along with the scientific raw data and the interpreted results. Furthermore, the supply chain must be navigable. Scientists must be able to trace back through the process to understand the precise conditions, materials, and equipment used to generate the data.

It is surprising how many labs neglect this simple but necessary step. It is especially common when science is being performed at bench-top scale. Many bench-top labs are overlooked by the IT department and left to fend for themselves with Excel, PowerPoint, email, and corporate shares (and, of course, paper lab notebooks). Although individual scientists can establish good practices, those practices tend to be local and highly manual, and they generate data that is not well suited to automated interpretation (parsing).

If you are building software solutions in the life sciences industry, or are a scientist in a bench-top lab who wants to prepare for automation, you should consider a couple of practices that can establish your informatics supply chain.

First, design a consistent coding scheme that can be used during sample and reagent accessioning.  The coding scheme may contain embedded information or it may just be an anonymous number.  Just make it consistent and unique.
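As a concrete (and purely hypothetical) example, the sketch below generates accession numbers with a prefix, a zero-padded sequence, and a simple check digit. The specific format doesn’t matter; what matters is that every sample and reagent gets a consistent, unique identifier.

    import itertools

    _sequence = itertools.count(1)

    def next_accession(prefix="SMP"):
        """Generate identifiers like SMP-000001-1: prefix, padded sequence, check digit."""
        number = next(_sequence)
        body = f"{number:06d}"
        check = sum(int(d) for d in body) % 10  # cheap checksum to catch typing errors
        return f"{prefix}-{body}-{check}"

    print(next_accession())        # SMP-000001-1
    print(next_accession("RGT"))   # RGT-000002-2, e.g. for reagents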

Second, be disciplined about using this coding scheme when loading the instrument. The identifiers you load into the instrument’s control software will be embedded in the output files, so using the agreed-upon coding scheme will greatly enhance traceability. If possible, avoid manually typing the identifiers, because it’s inevitable that people will transpose and substitute digits. Instead, try to programmatically insert a “sample manifest” into the instrument’s control software. Most commercial LIMS can do this, but a point solution can perform the function as well.
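Here is a minimal sketch of what that might look like: a small script that writes a plate manifest as a CSV file for the instrument software to import. The column names and layout are invented for illustration; every instrument has its own required format, and a LIMS would normally produce this file for you.

    import csv

    # Hypothetical samples pulled from the accessioning system.
    samples = [
        {"well": "A1", "sample_id": "SMP-000101-3", "project": "PRJ-12"},
        {"well": "A2", "sample_id": "SMP-000102-4", "project": "PRJ-12"},
        {"well": "A3", "sample_id": "SMP-000103-5", "project": "PRJ-12"},
    ]

    # Write a manifest the instrument's control software can import,
    # so nobody has to retype identifiers at the keyboard.
    with open("plate_manifest.csv", "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=["well", "sample_id", "project"])
        writer.writeheader()
        writer.writerows(samples)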

By taking these steps, you can increase the fidelity of your scientific data and provide context for interpretation. And when the “Mother of All Informatics Systems” arrives, your data will be able to snap into the framework with ease.

When I examine laboratory processes, I always look for a way to tie information together in a seamless, navigable way.  If, at some point, we get to work together, I’m probably going to ask you …  “Where is your lab’s informatics supply chain?”

Image courtesy of Victor Habbic and freedigitalphotos.net

Turning the Steering Wheel Around

Business moves fast. Companies need to react quickly to new opportunities and their business software needs to adapt just as quickly. Waiting for the IT guy to apply changes can really slow a business down. So if you’re creating software for business, you should strive to make your customers as self-sufficient as possible.

When we consider automating a process, we try to forecast the types of changes that will happen frequently, and we make adapting to those changes easy for the customer to perform themselves. We call this “Turning the Steering Wheel Around.”

User account management is a prime example of a frequently occurring request. Managing controlled vocabularies and pick lists is another task we’ve successfully delegated. Usually, there is a “Super User” to whom we provide elevated rights. The super user is often on the operations side and is involved in day-to-day activities. As such, they have a thorough understanding of the system and its use within the business. Given access to audit logs and error logs, the super users are often able to provide “Tier 1” support to their own organization.

Another set of ongoing requirements revolves around reporting. We find that reporting requirements continue to evolve long after the transactional part of the system stabilizes. We routinely use a number of industry-standard reporting tools in the BioIT Software platform, including Crystal Reports, Siberix Report Writer, XSLT transforms, HTML reports, and even Microsoft Office. Some of these tools are essentially programming and are therefore not good candidates for delegation. However, Crystal Reports and even Microsoft Word can be used effectively by super users in certain cases.
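As one small illustration of the XSLT path, the sketch below renders a made-up XML result set into an HTML test report using the lxml library. The element names and stylesheet are hypothetical and not taken from the BioIT Software platform; the point is simply that once results live in the system as structured data, a report is just a transform away.

    from lxml import etree  # third-party package: pip install lxml

    # Hypothetical structured results exported by the system.
    results_xml = etree.XML("""
    <results sample="SMP-000101-3">
      <test name="Potency" value="99.2" units="%"/>
      <test name="Purity" value="99.9" units="%"/>
    </results>
    """)

    # A tiny stylesheet that renders the results as an HTML table.
    stylesheet = etree.XML("""
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/results">
        <html><body>
          <h1>Test Report for <xsl:value-of select="@sample"/></h1>
          <table>
            <xsl:for-each select="test">
              <tr>
                <td><xsl:value-of select="@name"/></td>
                <td><xsl:value-of select="@value"/><xsl:text> </xsl:text><xsl:value-of select="@units"/></td>
              </tr>
            </xsl:for-each>
          </table>
        </body></html>
      </xsl:template>
    </xsl:stylesheet>
    """)

    transform = etree.XSLT(stylesheet)
    print(str(transform(results_xml)))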

So if you are a software developer building solutions for business, I encourage you to think about empowering your customer. By Turning the Steering Wheel Around, the customer will be in the driver’s seat, not you.

Image courtesy of FreeDigitalPhotos.net

Theory of Constraints: Forgotten Process Improvement Method

You can’t work in a modern corporation without being exposed to the two prevailing process improvement methodologies: Six Sigma and Lean Manufacturing. Six Sigma focuses on reducing variability in an effort to produce consistent outcomes (i.e., uniform widgets). Lean Manufacturing aims to reduce waste, thereby increasing productivity and throughput.

Although not formally trained in either methodology, I’ve been in and around companies with active programs in both and have witnessed their impact. So I was surprised to find a different, unknown process improvement technique: the Theory of Constraints. At least it was unknown to me.

The Theory of Constraints (TOC) was first put forth by Eliyahu Goldratt in his book, The Goal. It recognizes that in any process there is a constraining activity; the activity with a growing backlog is the likely culprit. TOC asks you to think of the end-to-end process as a chain, and the weakest link in that chain determines the overall process throughput. By concentrating on improving the capacity of the constraining activity (or weakest link), the system’s overall throughput can be increased.
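A toy calculation makes the weakest-link idea concrete. In the sketch below, the per-day step capacities are invented numbers; the takeaway is that overall throughput only moves when the constraining step itself is improved.

    # Hypothetical daily capacities for each step in an end-to-end process.
    capacities = {"prep": 120, "run": 80, "review": 200, "release": 150}

    def constraint(steps):
        """The constraining step (weakest link) sets the overall throughput."""
        name = min(steps, key=steps.get)
        return name, steps[name]

    print(constraint(capacities))   # ('run', 80)

    capacities["review"] = 400      # improving a non-constraint...
    print(constraint(capacities))   # ...leaves throughput at ('run', 80)

    capacities["run"] = 160         # elevate the constraint itself
    print(constraint(capacities))   # now ('prep', 120): the bottleneck has moved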

This approach seems intuitively satisfying, and I’ve been wondering why it didn’t make the leap to the corporate world. In a 2002 article entitled “How to Compare Six Sigma, Lean, and the Theory of Constraints,” Dave Nave puts forth some organizational differences that may explain why. In essence, Mr. Nave explains that TOC is favored by traditional hierarchical organizations that have a clear delineation between workers and management. TOC is driven by management without much input or influence from the workers. I can imagine that this didn’t appeal to modern, progressive organizations that are trying to flatten their org charts.

The aspect of TOC that appeals to me, however, is how the process is viewed as a whole. I liken TOC to the work a metropolitan traffic planner might undertake by flying over the city in a helicopter. She may observe various bottlenecks and choke points during rush hour and direct the road crews to tune traffic light patterns or (in some cases) add new lanes. Over time, she could see the impact of these adjustments and observe the bottlenecks moving to new locations or dissipating altogether.

In contrast, I’ve observed many Lean activities that are narrowly focused. They may improve efficiency locally without fully considering the overall organization. That’s like adding lanes to one stretch of highway without reducing the overall commute time.

TOC is an interesting way to think about process improvement and it deserves to be considered along with Lean and Six Sigma. So I’m going to add TOC to my professional toolkit. How about you?

Image courtesy of ShutterStock.com

Transforming the Local Area Network into a New Enterprise Model

The traditional enterprise software model thinks of the business as a single structure, a building, within which you wire different functions together to run the enterprise. When deployed, these systems live up to the “heavy iron” tagline they have earned over the years. Inflexible and difficult to manage, they typically require a maintenance crew and, over time, become layered with new requirements, eventually producing a bewildering blob.

Today everyone expects the same streamlined and functional environment they enjoy in their favorite web interactions, for example a browsing and shopping session at Amazon. After the purchase, you have a seamless view of how your merchandise finds its way through a complex workflow to your door. However, at work you end up traveling …