Why the conventional deep learning model is broken

The conventional deep learning model is a supervised model that takes months to develop and train before it is ready for the production line. Here, Karina Odinaev, co-founder and CEO of Cortica and co-founder of artificial intelligence start-up Lean AI, explains why the conventional deep learning model is broken and what the alternatives are.

 

The conventional deep learning model is supervised. It must be shown hundreds or thousands of pre-tagged defect images to learn what constitutes a defect. The process requires significant human involvement, both from the quality manager, who has to tag the defects, and from an AI expert, who has to tune the architecture and hyperparameters.
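As a toy illustration of why every image needs a human label first (hypothetical one-number features standing in for real images; this is not Cortica's method, just a minimal nearest-centroid classifier):

```python
def train_supervised(features, labels):
    """Conventional route: every training image must carry a
    human-assigned label ('ok' or 'defect') before learning can start.
    features: one summary number per image (hypothetical stand-in for pixels)."""
    centroids = {}
    for cls in set(labels):
        vals = [x for x, y in zip(features, labels) if y == cls]
        centroids[cls] = sum(vals) / len(vals)
    return centroids

def predict(centroids, x):
    """Assign the class whose centroid is nearest."""
    return min(centroids, key=lambda cls: abs(x - centroids[cls]))

# Six labelled examples shown here; in practice, thousands are needed,
# each tagged by hand by the quality manager.
model = train_supervised([1.0, 1.1, 0.9, 5.2, 4.8, 5.0],
                         ['ok', 'ok', 'ok', 'defect', 'defect', 'defect'])
print(predict(model, 4.9))  # 'defect'
```

The labelling effort, not the training code, is the bottleneck: the model learns nothing from an untagged image.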

 

This journey is not easy and can take months. The process takes thousands of images and a lot of time – typically two months for each camera and each product type, although this can vary significantly depending on the task. You might hear bold marketing claims about needing fewer and fewer images, but in practice you will often find that the model does not work as intended and more images and feedback are required. In many instances, the quality manager has to manually create production-like defects for training purposes. Given that these artificial defects do not necessarily represent real-world defects, it is no surprise that this approach often leads to problems further along.

 

After weeks or even months of training the model with pre-tagged data sets, the outcome is still uncertain. The system is a black box: when it fails, you cannot see why. Another common challenge in production is process variation. The model must adapt to these changes; without a capacity for online learning, its performance will soon degrade.

 

Fully unsupervised models

The opposite of a supervised model is a fully unsupervised one. Some systems rely on part statistics to determine what is acceptable and what constitutes a defect. This approach has many challenges, including production artefacts that are not defects, differing sensitivities to defects in different areas of the part, and the fact that the definition of a defect changes over time.
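A minimal sketch of such a part-statistics approach (hypothetical numbers and a simple z-score rule, not any particular vendor's method) shows the failure mode: anything statistically unusual gets flagged, whether or not it is a real defect.

```python
from statistics import mean, stdev

def fit_baseline(feature_values):
    """Learn what 'normal' looks like from untagged parts.
    feature_values: one summary number per part, e.g. mean brightness."""
    return mean(feature_values), stdev(feature_values)

def is_outlier(value, mu, sigma, z_threshold=3.0):
    """Flag any part whose statistic sits far from the baseline."""
    return abs(value - mu) > z_threshold * sigma

# Baseline learned from untagged production images (hypothetical values)
mu, sigma = fit_baseline([100.2, 99.8, 100.5, 99.9, 100.1, 100.3])

# A harmless artefact (e.g. a glare spot) still looks like an outlier:
print(is_outlier(104.0, mu, sigma))  # True — flagged, but not a real defect
```

Because the rule knows only statistics, it cannot distinguish a benign artefact from a genuine defect, and a single global threshold cannot express differing sensitivities across areas of the part.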

 

Ideally, a model designed for defect detection should represent the knowledge and understanding of the quality manager. They know their product better than anyone else, and their input and feedback can mitigate many of the problems described above. The optimal solution is therefore a model closer to the unsupervised end of the spectrum, but without the drawbacks of a fully unsupervised system.

 

Our unsupervised system is designed with this goal in mind. Rather than having to tag lots of data yourself, you can simply feed the model untagged data and it learns for itself, unsupervised, what a defective product looks like. There is no getting away from the reality of feeding it a lot of images, but this process is automated and therefore quicker and easier.

 

An unsupervised model can automate the process of building the model because its algorithms allow it to stream untagged images and work out for itself what possible defects look like. However, once it identifies outliers or potential defects, you need someone with knowledge of the product to provide feedback and allow the model to continually optimize. With this approach you leverage the knowledge of the quality manager, but you don't wear them out by requiring them to label thousands of images.
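The feedback loop described above might be sketched as follows (hypothetical anomaly scores and a deliberately simple threshold-update rule, not the actual algorithm): the model flags outliers, the quality manager labels only those few images, and the decision threshold adapts.

```python
def review_and_tune(threshold, flagged_scores, manager_labels, step=0.05):
    """One feedback round of human-in-the-loop tuning.
    flagged_scores: anomaly scores the model raised above `threshold`.
    manager_labels: True where the quality manager confirms a real defect.
    If false alarms outnumber confirmed defects, raise the bar."""
    confirmed = sum(1 for ok in manager_labels if ok)
    false_alarms = len(manager_labels) - confirmed
    if false_alarms > confirmed:
        threshold += step
    return threshold

# Three flagged parts; the manager confirms only one as a real defect,
# so the threshold nudges upward to cut future false alarms.
t = review_and_tune(0.80, [0.82, 0.85, 0.91], [False, False, True])
print(t > 0.80)  # True — the bar moved up
```

The key point is the division of labour: the manager reviews only the handful of flagged outliers per round, rather than pre-tagging thousands of images up front.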

 

How long does this process take? Here is the big return on investment. Compared with the conventional model, which takes months to be ready for the production line, the unsupervised model can deliver a workable solution in a few weeks or less. The model does its learning on the production line itself, saving you time and hassle. And with the input of the quality manager, you enjoy the benefits of automation without the problems of fully unsupervised systems, which have so far failed to deliver a workable solution.

 

The best of both worlds is an AI solution that allows the quality manager to retain control over what the AI system learns, but avoids wasting months of work on tagging. Our unsupervised system is designed to deliver this vision, leveraging and integrating the quality control knowledge you already have while automating the tedious work that the supervised model requires.

 

Lean AI uses Cortica’s patented machine-learning algorithms to deliver visual inspection software for the toughest use cases in industry. To find out more, visit lean-ai-tech.com

 
