Although an entomologist can see a certain beauty in the most annoying insect, there is nothing attractive about a computer “bug”. These failures in software or hardware can sometimes be just a nuisance, but at other times they can harm or kill, especially with clinical algorithms.

Bugs in healthcare software and devices occur all too often. Medication errors alone injure 1.3 million people in the US each year. A program that misplaces a decimal point in a drug dose can expose a patient to a lethal overdose. Misunderstandings about which units of measurement are in use can cause catastrophic failures. Programmable pumps fail. Patients receive lethal doses of radiation. Because of this long list of serious problems, the FDA wants to regulate certain types of software as a medical device.
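The decimal-point and unit errors mentioned above often come down to code that treats a dose as a bare number. A minimal sketch (all names and conversion factors here are illustrative, not from any real device) shows how carrying the unit alongside the value makes the classic mg-versus-mcg mix-up visible:

```python
# Illustrative only: a dose conversion that refuses to guess units.
# Conversion table from each unit to micrograms.
UNIT_IN_MCG = {"mcg": 1, "mg": 1_000, "g": 1_000_000}

def to_mcg(value, unit):
    """Convert a dose to micrograms, rejecting unknown units."""
    if unit not in UNIT_IN_MCG:
        raise ValueError(f"unknown unit: {unit}")
    return value * UNIT_IN_MCG[unit]

# An order for 500 mcg, and what a program that silently assumes
# milligrams would deliver instead:
ordered = to_mcg(500, "mcg")   # 500 mcg intended
assumed = to_mcg(500, "mg")    # the buggy assumption
print(assumed // ordered)      # -> 1000, a thousand-fold overdose
```

The point of the sketch is that the unsafe version of this code is simply `value * 1000` with the unit living only in a programmer's head; making the unit an explicit argument turns a silent error into a checkable one.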

When these situations are analyzed, one or more underlying problems can be found. Some problems have been traced back to:

  1. A failure in programming
  2. A failure to communicate
  3. A failure to anticipate certain conditions
  4. A failure to adequately test
  5. Modifications to a program made after an initial release
  6. A failure to make modifications
  7. A failure to test after a change or migration to new hardware or software
  8. A failure to adequately train users
  9. Problems with the user interface

When dealing with a large and complex program, it is very easy for something unforeseen to happen, especially when poorly documented code is maintained by programmers who were not involved in the original development.

Can we prevent serious bugs from happening? Good manufacturing practices for software and hardware with periodic testing can reduce the occurrence of bugs significantly. However, it can be very difficult and expensive to eliminate the risk initially and to prevent problems from appearing later.
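The periodic testing mentioned above can be as simple as re-running a fixed set of known-good cases after every change or migration. A sketch of such a regression check (the dosing function and its expected values are invented for illustration):

```python
# Hypothetical regression test: after any code change or hardware
# migration, the dosing routine is re-run against known-good answers.
def weight_based_dose_mg(weight_kg, mg_per_kg):
    """Simple weight-based dose calculation."""
    return weight_kg * mg_per_kg

def test_weight_based_dose():
    # Expected values fixed when the routine was first validated.
    assert weight_based_dose_mg(70, 0.5) == 35.0
    assert weight_based_dose_mg(0, 0.5) == 0.0

test_weight_based_dose()
print("regression checks passed")
```

Kept under version control and run automatically, even a small suite like this catches the "failure to test after a change" item on the list above before a patient does.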

Often a bug is only one factor in a chain of events. Many disasters involve a sequence of missteps, and the outcome would have been avoided if any one of the steps had been different. In a poorly controlled process the dangers may be much greater than generally realized.

As algorithms become more important in healthcare, the clinician will be essential in preventing catastrophes caused by computer bugs. It is often the astute clinician who realizes that something is awry; the novice may not recognize a problem even after it has happened. We need to train our clinicians to recognize problems and to challenge things that do not seem right. The workplace should be designed to minimize distractions and cognitive overload, both of which contribute to failures.

While clinicians will be key players in recognizing bugs, there is also a role for software in detecting them. Intelligent programs should be developed to monitor processes and alert users when something unexpected happens. With proper design, manufacturing, testing, and monitoring, clinical algorithms can be safe.
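The simplest form of the monitoring software described above is a plausibility guard: a check that a value falls within a clinically sensible range before it is acted on. A minimal sketch, with limits and names invented for illustration:

```python
# Illustrative runtime monitor: flag values outside a plausible range.
# The limits below are made up; real limits would be clinically set.
PLAUSIBLE_RATE_ML_PER_HR = (0.1, 999.0)

def check_infusion_rate(rate_ml_per_hr):
    """Return a list of alerts; an empty list means the rate looks plausible."""
    alerts = []
    low, high = PLAUSIBLE_RATE_ML_PER_HR
    if not (low <= rate_ml_per_hr <= high):
        alerts.append(
            f"rate {rate_ml_per_hr} mL/hr outside plausible range {low}-{high}"
        )
    return alerts

print(check_infusion_rate(50.0))    # -> [] (no alerts)
print(check_infusion_rate(5000.0))  # one alert, before any drug is delivered
```

Such a guard cannot catch every bug, but it converts the most dangerous class of errors, wildly implausible outputs, into alerts a clinician can challenge.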