Why training with few good images is fake news
Machine vision for parts inspection is a technology beset with misinformation. As recently as five years ago, the only way to teach a machine vision system what a good product looked like was to laboriously show it every possible fault, mark and annotate the errors and catalogue them. This process was normally done during the commissioning stage by the vision system integrator and kept up to date by the quality assurance manager — or sometimes not kept up to date at all. Here, Miron Shtiglitz, director of product management at QualiSense, explains why one of the key datasets suppliers communicate as a customer benefit is nothing but fake news.
Circa 2017, deep learning, autonomous machine vision and artificial intelligence became the buzzwords of the industry. Suddenly, several vendors started promoting the marketing message that there was no real need to go through the process of teaching a system what bad looked like. Instead, the new way was to show it only what good looked like; having learned the notion of a good part, the AI would then be able to detect a defective part as an outlier.
Vendors began to trumpet messages such as, ‘set up with just 50 good images’, ‘get started with just 30 good images’ and ‘inspect with only 20 good images’. A race to the bottom began, with everyone pushing each other to get to a point where you need the smallest possible number, as if it delivered actual customer impact.
Fake news
Here’s the truth, though. Unsupervised deep learning such as this, where the model learns only from OK samples, struggles to deliver the desired results. In most cases 20, 50 or even 100 OK samples can’t really represent all the diversity of a product, unless the product is very simple or the inspection task is trivial. Production is ever-changing, and a part produced today will look different from the same part produced a week or a day ago.
Today’s supervised deep learning systems require hundreds or thousands of annotated images, and a long training process, to deliver an adequate solution. How can a system be expected to actually learn from just 20 or 50 OK samples? There is no magic.
Instead, what ends up happening is that the false positive rate is very high, the user keeps adding more and more samples to the OK pile, and at some point defects start being missed. Furthermore, OK samples are not in short supply, so it rarely matters whether you need 20 samples or 500.
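The failure mode described above can be sketched with a toy one-class model. This is an illustrative stand-in, not any vendor’s actual algorithm: ‘good’ is modelled as a band around the mean of a single feature measured on a small set of OK images, and the false positive rate climbs once production drifts away from the training batch.

```python
import random
import statistics

random.seed(0)

def train_threshold(ok_samples, k=3.0):
    """Model 'good' as mean +/- k standard deviations of one feature
    measured on OK samples -- a minimal stand-in for one-class learning."""
    mu = statistics.mean(ok_samples)
    sigma = statistics.stdev(ok_samples)
    return mu - k * sigma, mu + k * sigma

def false_positive_rate(lo, hi, production):
    """Fraction of genuinely good parts flagged as defective."""
    flagged = [x for x in production if not (lo <= x <= hi)]
    return len(flagged) / len(production)

# Production is ever-changing: parts made a week later carry a small shift
# in the measured feature (hypothetical numbers, chosen for illustration).
early_batch = [random.gauss(10.0, 0.2) for _ in range(50)]    # the '50 good images'
later_batch = [random.gauss(10.5, 0.2) for _ in range(1000)]  # after process drift

lo, hi = train_threshold(early_batch)
fpr = false_positive_rate(lo, hi, later_batch)
print(f"False positive rate after drift: {fpr:.0%}")
```

All of the flagged parts here are good; the model simply never saw the natural variation that appears later, which is exactly why the user ends up feeding more and more samples into the OK pile.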
Quality conscience
I believe very strongly in the importance of the quality conscience working alongside a robust Quality Management System (QMS). Kate Smith, the managing director of QA consultancy Capella Associates, explained this perfectly for the CQI/IRCA blog recently.
She said, “We can ask ourselves whether we’d be proud to tell family and friends about the things we’ve done, or whether we’d be happy to be on the receiving end of the action. Or we could take a more structured approach and do an impact assessment, with questions such as ‘does the outcome have a positive impact on all stakeholders?’”
I think the idea of pride in your work is critical to this. Imagine trying to explain to an outsider that you’d chosen a machine vision system based on how many good parts it needed to train, despite knowing that the number is essentially fake news!
In the end, you are left with a pile of false detections and have to review hundreds of images to sort the good from the bad, updating the system again and again. This is sorting and tagging through the back door.
If it actually worked for you, it means that you have applied AI to a very simple use case.
The alternative
QualiSense’s Augmented AI Platform only requires access to unlabelled production data and images, which are both available in abundance and require no manual tagging.
The platform uses its AI engines to process and automatically sort the data, then applies smart user feedback to sharpen its tools and constantly improve the result. Once the data is initially sorted, it goes into an auto-labelling engine that can, in most cases, properly annotate a defect; here again, smart feedback is used to fine-tune the system.
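A loop of this general shape, auto-sort first and ask the user only about the uncertain cases, can be sketched as generic active learning. This is a hypothetical illustration of the pattern, not QualiSense’s actual pipeline; the score function, thresholds and feedback budget are all invented for the example.

```python
import random

random.seed(1)

def model_score(image):
    """Placeholder anomaly score in [0, 1]; a real system would use a
    trained model. Higher means 'more likely defective'."""
    return (0.6 if image["true_defect"] else 0.0) + random.random() * 0.4

def sort_and_query(unlabelled, budget=5):
    """Auto-sort images by score, auto-label the confident ones, and
    route only the most ambiguous few to a human for feedback."""
    scored = [(model_score(img), img) for img in unlabelled]
    auto_ok = [img for s, img in scored if s < 0.3]   # confidently OK
    auto_ng = [img for s, img in scored if s > 0.7]   # confidently defective
    # Ambiguous band: rank by closeness to 0.5 and keep within budget.
    uncertain = sorted(
        ((s, img) for s, img in scored if 0.3 <= s <= 0.7),
        key=lambda pair: abs(pair[0] - 0.5),
    )
    to_review = [img for s, img in uncertain[:budget]]
    return auto_ok, auto_ng, to_review

# 200 unlabelled production images, ~10% defective (hypothetical rate).
images = [{"id": i, "true_defect": random.random() < 0.1} for i in range(200)]
ok, ng, ask = sort_and_query(images)
print(f"auto-OK: {len(ok)}, auto-defect: {len(ng)}, sent for review: {len(ask)}")
```

The point of the pattern is that the human only ever sees the handful of images the model is unsure about, rather than tagging the whole dataset by hand.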
Furthermore, QualiSense can adapt the model to any process or environmental change in real time, meaning that you don’t have to worry about your system falling out of date.
Along with other advantages, such as minimal data handling, simple integration, and quick installation and commissioning, QualiSense benefits from being software only — able to run on virtually any machine vision system via the QualiSense API.
And yes, that includes the ability to retrofit a failed installation sitting idle on your production line, having not worked since it was commissioned. The global market for machine vision is estimated at around $15 billion, with each traditional install costing around $150,000. If those numbers are accurate, there are around 100,000 points of vision being installed every year.
I’ve spoken to hundreds of quality assurance managers around the world, in companies ranging from global brands to SME manufacturers, and they all agree that a significant percentage of installations just don’t work.
If the number of failed installs is as low as ten per cent, that could be 10,000 new points of vision installed every year around the world that are simply not functional. That tells us that over the lifetime of the machine vision market there are, conservatively, hundreds of thousands of hardware systems spread around the world, waiting to be brought to life.
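The back-of-envelope arithmetic behind those figures is straightforward to check, taking the article’s own estimates at face value:

```python
# Sanity check of the figures quoted in the article.
market_usd = 15e9          # estimated global machine vision market per year
cost_per_install = 150e3   # estimated cost of a traditional install
installs_per_year = market_usd / cost_per_install

failed_share = 0.10        # 'as low as ten per cent' of installs don't work
failed_per_year = installs_per_year * failed_share

print(f"{installs_per_year:,.0f} installs/year, "
      f"{failed_per_year:,.0f} non-functional/year")
```

Ten thousand idle systems a year, compounding over the lifetime of the market, is how the total reaches hundreds of thousands.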
Imagine the impact on the global economy, and on the output of your own plant, if that were to happen. Making that happen seems to me to be much more important than continually spinning fake news about the number of good images your system needs.