Computer vision AI is transforming manufacturing quality control by enabling fast, accurate, and scalable inspection workflows. In this Builder's Corner tutorial, you'll learn how to design, build, and deploy an autonomous quality inspection system using open-source tools and Python code.
As we covered in our complete guide to AI automation in manufacturing, computer vision is a critical driver of efficiency and ROI. Here, we’ll go hands-on with a practical workflow you can adapt to your own production line or lab.
Whether you're a developer, ML engineer, or technical manager, this guide will help you set up a reproducible, testable pipeline for defect detection using deep learning. We’ll cover data collection, model training, deployment, and automation—plus troubleshooting and next steps.
## Prerequisites
- Hardware: A workstation or server with an NVIDIA GPU (recommended for training), or Google Colab for cloud-based prototyping.
- Operating System: Linux (Ubuntu 20.04+), macOS, or Windows 10+
- Python: Version 3.8 or newer
- Git: Version 2.20+
- Docker: (optional, for deployment) Version 20.10+
- Knowledge:
  - Basic Python programming
  - Familiarity with machine learning concepts
  - Understanding of manufacturing quality inspection goals
- Libraries:
  - PyTorch 2.x
  - OpenCV 4.x
  - Ultralytics YOLOv8 (object detection, segmentation)
  - Streamlit (for dashboard and visualization)
## 1. Define Your Quality Inspection Use Case

- **Identify Defects and Pass Criteria**
  Start by specifying what constitutes a defect in your product. For example, you might want to detect scratches, dents, or missing components on an assembly line.
  - List common defect types.
  - Define a "pass" vs. "fail" product.
- **Choose Inspection Points**
  Determine where in the process you’ll capture images (e.g., after assembly, before packaging).
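Capturing these decisions in code keeps the pass/fail logic explicit and testable before any model exists. Below is a minimal sketch; the defect names and the zero-tolerance default are illustrative assumptions, not fixed requirements:

```python
from dataclasses import dataclass

@dataclass
class InspectionSpec:
    """Pass/fail criteria for one inspection point (illustrative names)."""
    defect_classes: tuple = ("scratch", "dent", "missing_component")
    max_allowed_defects: int = 0  # zero-tolerance by default

    def decide(self, detected_classes):
        """Return 'pass' or 'fail' given the class names detected in one image."""
        defects = [c for c in detected_classes if c in self.defect_classes]
        return "fail" if len(defects) > self.max_allowed_defects else "pass"

spec = InspectionSpec()
print(spec.decide(["scratch"]))  # a listed defect class fails the part
print(spec.decide([]))           # no detections passes it
```

Keeping this decision separate from the detector also makes it easy to tighten or relax criteria per inspection point without retraining.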
## 2. Collect and Label Image Data

- **Capture Images**
  Use industrial cameras, webcams, or smartphone cameras to gather images of both “good” and “defective” products. Aim for at least 500 images per class for basic prototyping.
  Store images in folders:

  ```text
  dataset/
  ├── images/
  │   ├── good/
  │   └── defect/
  └── labels/        # for YOLO format, see below
  ```

- **Label Images**
  Use tools like labelImg or Label Studio to annotate bounding boxes or segmentation masks for defects.
  - Export labels in YOLO format for compatibility with Ultralytics YOLOv8. Each line is `class x_center y_center width height`, with coordinates normalized to the image size:

  ```text
  0 0.51 0.63 0.12 0.20
  1 0.40 0.22 0.14 0.18
  ```

  *Screenshot Description: Example of labelImg interface with a bounding box drawn around a surface scratch on a metal part.*
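Malformed label files are a common source of silent training failures, so it can pay to sanity-check them before training. Here is a minimal validator sketch for the YOLO line format shown above; the function name and the exact checks are illustrative:

```python
def validate_yolo_label_line(line, num_classes=2):
    """Check one line of a YOLO-format label file:
    'class x_center y_center width height', coordinates normalized to [0, 1].
    Returns a list of problems; an empty list means the line is valid."""
    parts = line.split()
    if len(parts) != 5:
        return [f"expected 5 fields, got {len(parts)}"]
    problems = []
    cls, *coords = parts
    if not cls.isdigit() or int(cls) >= num_classes:
        problems.append(f"class id {cls!r} out of range 0..{num_classes - 1}")
    for name, value in zip(("x_center", "y_center", "width", "height"), coords):
        try:
            v = float(value)
            if not 0.0 <= v <= 1.0:
                problems.append(f"{name}={v} outside [0, 1]")
        except ValueError:
            problems.append(f"{name}={value!r} is not a number")
    return problems

print(validate_yolo_label_line("0 0.51 0.63 0.12 0.20"))  # [] — valid
print(validate_yolo_label_line("5 0.40 1.22 0.14 0.18"))  # two problems
```

Running this over every `.txt` file in `dataset/labels/` before training catches out-of-range coordinates and stray class ids early.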
## 3. Set Up Your Development Environment

- **Clone the YOLOv8 Repository**

  ```bash
  git clone https://github.com/ultralytics/ultralytics.git
  cd ultralytics
  ```

- **Create and Activate a Python Virtual Environment**

  ```bash
  python3 -m venv venv
  source venv/bin/activate   # On Windows: venv\Scripts\activate
  ```

- **Install Required Libraries**

  ```bash
  pip install -r requirements.txt
  pip install opencv-python streamlit
  ```

- **Verify Installation**

  ```bash
  python -c "import torch; print(torch.__version__)"
  python -c "import cv2; print(cv2.__version__)"
  ```

  *Screenshot Description: Terminal output showing successful import and version printout for PyTorch and OpenCV.*
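If you prefer one command over running the imports one by one, a small stdlib-only sketch can report every missing module at once. Note that pip package names differ from import names (`opencv-python` imports as `cv2`, `Pillow` as `PIL`); the list below reflects this tutorial's stack:

```python
import importlib.util

def missing_packages(modules):
    """Return the subset of importable module names that cannot be found."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

# Import names, not pip names: opencv-python -> cv2, Pillow -> PIL.
required = ["torch", "cv2", "ultralytics", "streamlit", "PIL"]
print(missing_packages(required) or "all dependencies found")
```

An empty result means every library in the list is importable from the active environment.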
## 4. Train a Computer Vision Model for Defect Detection

- **Prepare Dataset Configuration**
  Create a YAML file (e.g., `data.yaml`) describing your dataset:

  ```yaml
  train: ./dataset/images/train
  val: ./dataset/images/val
  nc: 2  # number of classes (e.g., good, defect)
  names: ['good', 'defect']
  ```

- **Train YOLOv8 Model**
  Run the following command to start training:

  ```bash
  yolo detect train data=data.yaml model=yolov8n.pt epochs=50 imgsz=640 batch=16
  ```

  `yolov8n.pt` is a lightweight starting model; for higher accuracy, try `yolov8m.pt` or `yolov8l.pt` if your hardware allows.

  *Screenshot Description: Training progress output in the terminal, showing loss, mAP (mean average precision), and accuracy metrics.*

- **Evaluate Model Performance**
  After training, review the generated results in the `runs/detect/train/` folder:
  - Check `results.png` for loss curves and mAP over epochs.
  - Inspect `confusion_matrix.png` for class accuracy.
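The `data.yaml` above assumes your images are already split into train and val folders. One way to produce a reproducible split is to bucket filenames by hash instead of shuffling randomly, so the split stays stable as new images are added; a sketch (the 80/20 ratio is an assumption, adjust to taste):

```python
import hashlib

def split_train_val(filenames, val_fraction=0.2):
    """Deterministically split filenames into train and val sets.
    Hashing each name (rather than random.shuffle) keeps a given image
    in the same split even after new images are added to the dataset."""
    train, val = [], []
    for name in sorted(filenames):
        bucket = int(hashlib.md5(name.encode()).hexdigest(), 16) % 100
        (val if bucket < val_fraction * 100 else train).append(name)
    return train, val

train, val = split_train_val([f"img_{i:04d}.jpg" for i in range(1000)])
print(len(train), len(val))  # roughly an 80/20 split
```

After splitting, copy each list's images (and their matching label files) into the `train` and `val` directories that `data.yaml` points to.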
## 5. Deploy the Inspection Model for Real-Time Inference

- **Test Model on New Images**

  ```bash
  yolo detect predict model=runs/detect/train/weights/best.pt source=./dataset/images/test
  ```

  - Results will be saved in `runs/detect/predict/` with annotated images.

  *Screenshot Description: Output image with bounding boxes around detected defects, displayed in a file viewer.*
- **Build a Simple Inspection Dashboard with Streamlit**
  Create a file named `app.py`:

  ```python
  import streamlit as st
  from ultralytics import YOLO
  from PIL import Image
  import numpy as np

  st.title("Autonomous Quality Inspection Demo")

  uploaded_file = st.file_uploader("Upload product image", type=["jpg", "png", "jpeg"])
  if uploaded_file is not None:
      img = Image.open(uploaded_file)
      img_np = np.array(img)
      model = YOLO("runs/detect/train/weights/best.pt")
      results = model(img_np)
      annotated = results[0].plot()
      st.image(annotated, caption="Inspection Result", use_column_width=True)
  ```

  Launch the dashboard with:

  ```bash
  streamlit run app.py
  ```

  *Screenshot Description: Streamlit app interface with an uploaded product image and visual defect detection overlay.*
- **Optional: Containerize with Docker for Production**
  Create a `Dockerfile`:

  ```dockerfile
  FROM python:3.10-slim
  WORKDIR /app
  COPY . .
  RUN pip install -r requirements.txt
  EXPOSE 8501
  CMD ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]
  ```

  - Build and run:

  ```bash
  docker build -t quality-inspection-app .
  docker run -p 8501:8501 quality-inspection-app
  ```
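If downstream systems need structured detection data rather than annotated images, the normalized YOLO coordinates must be converted to pixel corners. A small helper sketch, pure Python with no Ultralytics dependency (the function name is illustrative):

```python
def yolo_to_pixels(box, img_w, img_h):
    """Convert a normalized YOLO box (x_center, y_center, width, height)
    to integer pixel corners (x1, y1, x2, y2) for drawing or reporting."""
    xc, yc, w, h = box
    x1 = int(round((xc - w / 2) * img_w))
    y1 = int(round((yc - h / 2) * img_h))
    x2 = int(round((xc + w / 2) * img_w))
    y2 = int(round((yc + h / 2) * img_h))
    return x1, y1, x2, y2

print(yolo_to_pixels((0.5, 0.5, 0.2, 0.2), 640, 480))  # (256, 192, 384, 288)
```

The same corner format plugs directly into `cv2.rectangle` or into whatever JSON payload your reporting system expects.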
## 6. Automate the Inspection Workflow

- **Integrate with Production Systems**
  Use Python scripts or APIs to connect your inspection model to conveyor belts, cameras, or PLCs (Programmable Logic Controllers).

  ```python
  import cv2
  from ultralytics import YOLO

  model = YOLO("runs/detect/train/weights/best.pt")
  cap = cv2.VideoCapture(0)  # Use your industrial camera's source

  while True:
      ret, frame = cap.read()
      if not ret:
          break
      results = model(frame)
      annotated = results[0].plot()
      cv2.imshow("Inspection", annotated)
      if cv2.waitKey(1) & 0xFF == ord('q'):
          break

  cap.release()
  cv2.destroyAllWindows()
  ```

  *Screenshot Description: Live video feed with real-time defect detection overlay.*
- **Trigger Actions Based on Inspection Results**
  For example, send a signal to reject a defective part. Check the predicted class of each box rather than merely counting detections, so that boxes labeled "good" do not trigger a rejection:

  ```python
  DEFECT_CLASS_ID = 1  # index of 'defect' in data.yaml's names list

  if any(int(box.cls) == DEFECT_CLASS_ID for box in results[0].boxes):
      # Send signal to actuator or PLC
      print("Defective part detected! Trigger rejection mechanism.")
  ```

  Note: For more advanced workflow orchestration, see How to Build a Custom AI Workflow with Prefect.
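On a live video feed, a single noisy frame should not reject a part. One common pattern is to require a defect in several consecutive frames before firing the signal; below is a minimal sketch of that debounce logic (the class name and default threshold are assumptions, and `update` would be wired to your PLC or actuator call):

```python
class RejectTrigger:
    """Fire the rejection signal only after a defect appears in
    `threshold` consecutive frames, filtering out one-frame noise."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.streak = 0

    def update(self, defect_detected):
        """Feed one frame's result; returns True when rejection should fire."""
        self.streak = self.streak + 1 if defect_detected else 0
        if self.streak >= self.threshold:
            self.streak = 0  # reset so each defect streak fires only once
            return True
        return False

trigger = RejectTrigger(threshold=3)
print([trigger.update(d) for d in [True, True, False, True, True, True]])
# → [False, False, False, False, False, True]
```

Inside the video loop above, you would call `trigger.update(...)` once per frame with the per-frame defect decision and actuate only when it returns True.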
## Common Issues & Troubleshooting

- **Low Detection Accuracy**
  - Check for class imbalance or poor-quality labels.
  - Increase dataset size, especially for rare defect types.
  - Try a larger YOLO model variant (e.g., `yolov8m.pt`).
- **Model Misses Small Defects**
  - Increase image resolution (the `imgsz` parameter during training).
  - Annotate small defects carefully with precise bounding boxes.
- **GPU Memory Errors**
  - Lower the batch size during training (e.g., `batch=8`).
  - Use a smaller model variant (e.g., `yolov8n.pt`).
- **Streamlit App Not Displaying Images**
  - Check file types and ensure `Pillow` is installed.
  - Review the console for error messages.
- **Real-Time Inference Is Slow**
  - Use a GPU for inference.
  - Optimize the model with TorchScript or ONNX export (see the YOLOv8 docs).
## Next Steps
- Scale Up: Deploy your workflow on the factory floor with edge devices or cloud inference endpoints.
- Integrate with MES/ERP: Send inspection results to your Manufacturing Execution System or ERP for traceability.
- Expand to Multimodal AI: Combine vision with sensor or text data for richer insights (see our multimodal AI workflow guide).
- Automate Testing: Implement continuous testing and monitoring (see automated testing best practices).
- Explore Predictive Maintenance: Pair inspection with predictive maintenance AI workflows for end-to-end process optimization.
By following these steps, you can build a robust, autonomous quality inspection system with computer vision AI. For a broader look at how these workflows fit into digital manufacturing, see our parent guide to AI automation in manufacturing.
