At Moichor, we are often asked about our automation: how we use it, what it does, when we choose to use it, and when we don't. Automation is essential for providing accurate, inexpensive veterinary diagnostic testing with 24-hour turnaround at scale. A key Moichor innovation is extending what's possible with automated CBCs by using microscopy and computer vision. For this R&D chat, we sat down with Matthew Guay, Kyle Web, and Thanh Le to explain how we automate a CBC test.
A successful automated CBC offering is a synthesis of multiple processes working in concert: human, hardware, and software. The process begins as a human analyst would, with a blood smear under a microscope. Our automated microscopes scan each slide to locate the smear monolayer, where they capture images of 300-500 WBCs per slide.
Once a sample is ready for automated processing, we count white blood cells using cell identification algorithms trained on Moichor’s CBC image data. At the time of publication, the Moichor CBC database contains over half a million pathology objects from over 150,000 images representing over 300 species.
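Conceptually, once each cell in a slide's images has been classified, producing a differential count is a matter of tallying per-cell predictions into counts and percentages. The sketch below illustrates only that tallying step; the cell-type names and label list are illustrative assumptions, not Moichor's actual model or taxonomy.

```python
from collections import Counter

def differential_count(predicted_labels):
    """Tally per-cell class predictions into absolute counts and
    percentages, as in a WBC differential. Labels are illustrative."""
    counts = Counter(predicted_labels)
    total = sum(counts.values())
    return {
        cell_type: {"count": n, "percent": round(100.0 * n / total, 1)}
        for cell_type, n in counts.items()
    }

# Example: hypothetical labels a per-cell classifier might emit for
# one slide with ~300 captured WBCs.
labels = (["heterophil"] * 180 + ["lymphocyte"] * 90
          + ["monocyte"] * 20 + ["eosinophil"] * 10)
result = differential_count(labels)
```

In practice the upstream classifier, not this bookkeeping, is where the difficulty lies; the point is that every downstream count inherits whatever errors the per-cell predictions contain.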
This approach lets us analyze far more images, far more quickly, than manual review by a pathologist, enabling much more accurate and reproducible estimates of cell counts, as covered in our ExoticsCon 2021 presentation.
In AI research, it’s easy to take a curated data set, create a model, and train it to get impressive results. But moving this exercise from an isolated data set to a live stream of new real-world data is where the challenge begins.
In order to be viable for clinical operations, where results may be critical to patient healthcare, any Moichor CBC AI must work robustly across the variability presented by different species, different pathologies, and differences in sample preparation and handling.
How can we create an algorithm that does this? How do we know whether an automated test is doing a good job? What makes a machine learning algorithm trustworthy? These are the questions we have asked ourselves, and these are the questions that vets ask us. Our answers rest on three pillars: accuracy, robustness, and comprehensive validation.
What does it mean to say that an algorithm is accurate? For CBCs, this is a seemingly simple question with a nuanced answer. Imagine, for example, an image containing some cells, each of a certain type. In this example, perfect accuracy means identifying every cell's type correctly.
As it turns out, when you process thousands of samples, things work a bit differently. Variation in sample preparation and handling can introduce ambiguity that makes some cells difficult to differentiate; there is inherently a level of disagreement even between trained experts. By consulting standards bodies for human pathology and working with veterinarians to identify clinical requirements, veterinary experts have determined safe error rates that give us a frame of reference for measuring our own algorithms' performance.
To evaluate an algorithm correctly, we need correct data to compare against, so the last challenge for accuracy is building a gold-standard collection of human-annotated samples for validation. There are no shortcuts here, just a pathologist-led team taking the extra time and care to create a data set with an error rate as close to 0% as humanly possible.
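Given a gold-standard reference set, comparing an algorithm's output against a safe error rate reduces to a simple check. This is a minimal sketch; the 5% threshold is a placeholder, since real limits come from standards bodies and veterinary clinical requirements, not from this example.

```python
def error_rate(reference, predicted):
    """Fraction of cells whose predicted type disagrees with the
    gold-standard reference label."""
    assert len(reference) == len(predicted)
    mismatches = sum(r != p for r, p in zip(reference, predicted))
    return mismatches / len(reference)

# Placeholder threshold for illustration only.
SAFE_ERROR_RATE = 0.05

ref = ["lymphocyte", "heterophil", "heterophil", "monocyte"]
pred = ["lymphocyte", "heterophil", "lymphocyte", "monocyte"]
rate = error_rate(ref, pred)
passes = rate <= SAFE_ERROR_RATE
```

The quality of this check is only as good as the reference labels, which is why the gold-standard data set described above has to be built with near-zero error.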
If every single blood smear were perfectly prepared and handled and transported under ideal conditions, cell identification would be a simple matter. But for effective automation, our methodologies have to be robust enough to be accurate despite varying quality and conditions of samples, across a range of pathological states. Achieving robustness for CBC automation requires input standardization and diverse data.
At Moichor, we achieve standardization through a combination of strictly controlled sample-preparation protocols, automated microscopy, and software-based methodologies. Our data is also genuinely diverse, representing the full spectrum of species and sample conditions that may be presented for CBC evaluation. Over time, our goal is to build on this robustness to automate increasing portions of the process while expanding our gold standard for accuracy.
With informed techniques for measuring accuracy and a diverse collection of data, we can comprehensively verify the evolving performance of our CBC platform. By stratifying data into different categories based on sample properties, our validation process can assess whether a model will be able to meet required standards of accuracy for automation.
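The idea of stratified validation can be sketched as follows: group validation samples by properties such as species or smear quality, score each stratum separately, and approve automation only if every stratum clears the bar, not just the overall average. The strata names and 95% threshold here are illustrative assumptions, not Moichor's actual criteria.

```python
from collections import defaultdict

def stratified_accuracy(samples):
    """Group (stratum, correct?) records by stratum and compute
    per-stratum accuracy. Stratum names are illustrative."""
    buckets = defaultdict(list)
    for stratum, correct in samples:
        buckets[stratum].append(correct)
    return {s: sum(v) / len(v) for s, v in buckets.items()}

def meets_standard(per_stratum_accuracy, threshold=0.95):
    """Require every stratum to clear the accuracy bar.
    The threshold is a placeholder, not a clinical standard."""
    return all(acc >= threshold for acc in per_stratum_accuracy.values())

# Hypothetical validation records: (stratum, prediction correct?).
records = ([("avian/good_smear", True)] * 97
           + [("avian/good_smear", False)] * 3
           + [("reptile/thick_smear", True)] * 18
           + [("reptile/thick_smear", False)] * 2)
per_stratum = stratified_accuracy(records)
ok = meets_standard(per_stratum)
```

In this example the overall accuracy is 95.8%, yet the thick-smear stratum scores only 90%, so the model would not be approved; aggregate accuracy alone can hide exactly the failure modes stratification is designed to expose.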
Above all else, an AI tool must be trustworthy. Within our reference lab, the feedback from our laboratory staff and pathologists has been and continues to be essential to developing trustworthy clinical AI, giving us a holistic view on issues that may elude quantitative assessment of algorithmic accuracy alone.
Diagnostic test automation is often seen as the achievement of an AI algorithm alone, but reliable clinical automation is actually the result of a well-honed system of algorithms, hardware, and human beings working in concert.
At Moichor, we work to optimize this system in all its aspects. The result is a technological and clinical achievement we are proud to share with the veterinary community.