By Elemental Machines
Since the very first line of code was written, there have been software bugs (one famous, if partly apocryphal, story traces the origin of the term to 1947, when an actual moth was found trapped in a relay of the Harvard Mark II computer – and, like all good bugs, was dutifully documented by being taped into the operators’ logbook). As a result, the development of debugging tools has closely mirrored the rise of modern software. From symbol tables and breakpoints to the sophisticated predictive profilers of the 21st century, better debugging tools have enabled us to build increasingly complex yet smoothly functioning software.
As both a scientist and a software developer, I’ve always wondered: what if we took the same philosophical approach we use to debug software and applied it to “debugging” physical laboratory processes? Is conducting an experiment in the lab fundamentally so different from running a software program? At their core, both can be described as a series of steps executed in a carefully controlled environment, producing a finite set of potential outcomes – so can the same principles of debugging be used to improve processes in the lab?
Understanding and controlling for as many variables as possible is what leads to predictable, meaningful results and rapid development: when you can isolate what’s going wrong, you can correct or design around the problem instead of resorting to time-consuming trial and error. In software, this is accomplished with tools that track and measure things like memory usage, variable state, and other runtime parameters. In the lab, the analogous tools are the physical sensors that provide feedback on every variable that can affect your process.
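To make the analogy concrete, here is a minimal sketch of a lab-side “watch window”: sample the variables that can affect a process and log each reading with a timestamp, much as a debugger tracks program state. The sensor names and the read_sensors() helper are hypothetical stand-ins with simulated readings; in practice they would be wired to real instruments.

```python
import csv
import random
import time
from datetime import datetime, timezone

def read_sensors() -> dict:
    # Hypothetical stand-in: simulated readings in place of real instruments.
    return {
        "ambient_temp_c": random.gauss(22.0, 0.3),
        "humidity_pct": random.gauss(45.0, 2.0),
        "incubator_temp_c": random.gauss(37.0, 0.1),
    }

# Log every "watch variable" with a timestamp, like a debugger's watch window.
with open("lab_watch.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "ambient_temp_c", "humidity_pct", "incubator_temp_c"])
    for _ in range(5):  # a few samples for the demo; real logging runs continuously
        reading = read_sensors()
        row = [datetime.now(timezone.utc).isoformat()]
        row += [f"{value:.2f}" for value in reading.values()]
        writer.writerow(row)
        time.sleep(1)  # sampling interval in seconds
```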
We talk about debugging as an indispensable part of software development. In that same vein, I believe debugging should be considered an equally integral part of physical experimental development. It’s time to bring debugging full circle, back to the physical roots represented by that unfortunate moth in the machine.
So what does debugging the lab really mean in practice?
Fundamentally, debugging is a closed feedback loop: process -> measure -> modify. Most labs measure the direct variables involved in a particular process – parameters like the conditions in a reaction vessel, or the temperature of a reagent that has a direct effect on the outcome. What is rarely incorporated are the indirect or environmental variables (e.g., ambient conditions, time of day, secondary techniques) – yet these can have a significant impact as well. Moreover, measuring and collecting alone isn’t enough – building meaningful insights requires actively correlating measured data with outcomes and understanding how variations in ALL of these conditions contribute to the observed results. Once these insights are derived, an informed decision can be made about optimizing the next iteration of the process, and the cycle begins anew.
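As a sketch of what that loop can look like in code, consider the toy example below. The experiment is simulated: run_process() stands in for a real protocol whose yield depends on a set point we control (block temperature) and an environmental variable we only observe (ambient temperature), and the simple hill-climbing rule in the modify step is illustrative, not a prescription.

```python
import random

def run_process(block_temp: float, ambient_temp: float) -> float:
    """Simulated yield: peaks at block_temp = 37 C, degraded by a warm room."""
    return 100.0 - (block_temp - 37.0) ** 2 - 0.5 * max(0.0, ambient_temp - 22.0) ** 2

block_temp = 30.0  # the parameter under our control
step = 1.0
history = []

for iteration in range(20):
    ambient_temp = random.gauss(23.0, 1.5)             # measured, not controlled
    yield_pct = run_process(block_temp, ambient_temp)  # process
    history.append((iteration, block_temp, ambient_temp, yield_pct))  # measure

    # modify: naive hill climbing -- if yield dropped, reverse and shrink the step
    if len(history) >= 2 and history[-1][3] < history[-2][3]:
        step = -step / 2.0
    block_temp += step

for row in history:
    print("iter=%d block=%.1fC ambient=%.1fC yield=%.1f%%" % row)
```

Note that the loop records the ambient temperature even though it never controls it – that is exactly the kind of indirect variable whose correlation with outcomes only becomes visible across many iterations.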
Building feedback loops that span the breadth of the research process is the first crucial step in creating a truly smart lab. A more recent development that promises to take us even further is the application of machine learning and artificial intelligence (AI) techniques to large, heterogeneous data sets, with the goal of generating genuinely actionable insights. Instead of merely visualizing the data, algorithms can now pick up patterns and relationships that are not immediately visible to the user yet may have a profound impact on results. This is an exciting development, as it promises deeper insights into existing data sets as well as the ability to connect even more disparate sources of information in a meaningful, actionable way.
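Here is a hedged sketch of what such pattern-mining can look like, using synthetic data and an off-the-shelf model (scikit-learn’s RandomForestRegressor). The variable names and the hidden humidity effect are invented for illustration; the point is only that feature importances can surface an environmental driver that no single run would reveal.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic run log: two directly monitored variables plus two
# "environmental" variables that most labs record rarely, if at all.
rng = np.random.default_rng(0)
n = 500
reagent_temp = rng.normal(4.0, 0.5, n)       # directly monitored
reaction_ph = rng.normal(7.4, 0.1, n)        # directly monitored
ambient_humidity = rng.uniform(20, 70, n)    # environmental, usually ignored
time_of_day = rng.uniform(0, 24, n)          # environmental, usually ignored

# Hidden relationship (invented for the demo): humidity above ~50%
# quietly hurts the outcome, on top of measurement noise.
outcome = 90.0 - 0.4 * np.clip(ambient_humidity - 50.0, 0, None) ** 1.5 \
          + rng.normal(0, 2, n)

X = np.column_stack([reagent_temp, reaction_ph, ambient_humidity, time_of_day])
names = ["reagent_temp", "reaction_ph", "ambient_humidity", "time_of_day"]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, outcome)
for name, importance in sorted(zip(names, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name:>18s}: {importance:.2f}")
# ambient_humidity should dominate -- a pattern invisible in any single run.
```

In a real lab, the feature table would come from the kind of continuous logging described earlier, joined against recorded experimental outcomes.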
As we move toward the reality of AI-driven process debugging, the need for ever-greater volumes of high-quality measurements from every part of the process becomes increasingly clear.