Description


Recent advancements in machine learning methods have allowed companies to develop advanced, computer-vision-aided production lines that take advantage of the raw and labeled data captured by high-definition cameras mounted at vantage points on the factory floor. We experimented with two different methods of developing one such system to automatically track key components on a production line. By tracking the state of these key components with object detection, we can accurately determine and report production line metrics such as part arrival times and start/stop times for key factory processes. We began by collecting and labeling raw image data from the cameras overlooking the factory floor. Using that data, we trained two dedicated object detection models. Our training utilized transfer learning, starting from a Faster R-CNN ResNet model trained on Microsoft’s COCO dataset. The first model is a binary classifier that detects the state of a single object, while the second is a multiclass classifier that detects the states of two distinct objects on the factory floor. Both models achieved over 95% classification and localization accuracy on our test datasets. The two additional classes did not affect the classification or localization accuracy of the multiclass model compared to the binary model.
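As a rough illustration of the transfer-learning step described above, the sketch below loads a COCO-pretrained Faster R-CNN ResNet-50 detector and replaces its prediction head for a small custom label set. It assumes torchvision's detection API and a hypothetical class count; the abstract does not specify the framework or the exact classes used in the thesis.

    # Minimal sketch (assumed setup, not the authors' code): fine-tune a
    # COCO-pretrained Faster R-CNN ResNet-50 detector for a custom label set,
    # mirroring the transfer-learning step described in the abstract.
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    # Start from a detector pre-trained on Microsoft's COCO dataset.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box-predictor head for the new task. num_classes is
    # hypothetical: background plus the tracked object states; the counts
    # actually used in the thesis are not stated in the abstract.
    num_classes = 3
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # The new head (and optionally the backbone) is then fine-tuned on the
    # labeled factory-floor images, reusing the pre-trained visual features.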

Reuse Permissions

    Barrett Honors College theses and creative projects are restricted to ASU community members.

    Details

    Title
    • Automated Tracking of a Production Line Through Computer Vision Analytics
    Contributors
    Date Created
    2022-05
    Resource Type
  • Text