
Smart solutions 3: Custom Vision

In a previous post we looked at how to use a general service, Computer Vision, to recognize content and faces in images. There is another service, called Custom Vision, where you train the model yourself.

Scenario

Your company uses NFC chips to detect movements of components between your warehouse and production floor. Why not, instead of having to attach NFC chips to everything, take a picture every time you use a component? The system will automatically recognize the component and log it. For this, you need image recognition that is trained in your warehouse, on your components. Custom Vision is a service that lets you do exactly that.

To get Custom Vision

  1. Go to https://www.customvision.ai and create a new project (General).
  2. Upload training images.
    Attached is a set of sample images you can use, or of course you can bring your own. For each tag you need around 38 images. For each set of images you upload, keep a few back that you don’t upload, to use for verification later – for example, hold back the images taken against one specific background.
  3. Train the model (click the green Train button).
  4. Click “Make Default”, and make sure it now shows “Already Default”.
  5. Go to “Prediction URL” and get the URL and Key here – make sure to take the values from “If you have an image file:”. Now you are ready to send images to your home-trained Microsoft Azure image recognition service; the short sketch below shows where these two values go.
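
Just to show where the two values end up, here is a minimal sketch (the complete codeunit follows in the next section). The URL and key are placeholders for whatever you copied from the portal, not real values:

  // Point the standard NAV helper (codeunit 2020, Image Analysis Management)
  // at your own Custom Vision project instead of the general Computer Vision service.
  // '<prediction-url>' and '<prediction-key>' are placeholders.
  ImageAnalysisManagement.SetUriAndKey('<prediction-url>', '<prediction-key>');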

In Dynamics NAV

If you haven’t already, read the post about Programming Computer Vision in C/AL code. Then modify that sample to specify the URL and the key from “Custom Vision” instead of “Computer Vision”, and make sure to run an analysis of type Tags. The NAV helper objects support “Custom Vision” as well as “Computer Vision”, so nothing else needs to be changed. Just for convenience, here is a codeunit that sends an image from your disk to “Custom Vision”.

Everything is copied from the previous blog example (URL and Key are obscured):

OBJECT Codeunit 90910 IR Custom Vision
{
  OBJECT-PROPERTIES
  {
    Date=;
    Time=;
    Modified=Yes;
    Version List=;
  }
  PROPERTIES
  {
    OnRun=BEGIN
      ImageAnalysisManagement.SetUriAndKey(
        'https://southcentralus.api.cognitive.microsoft.com/' +
        'customvision/v1.0/Prediction/af94d33f-4e98-4240-8e17-9ac4383214da/' +
        'image?iterationId=ba004852-62bd-493a-9391-f2c3e8804b80',
        'd65cbf3ec8494353879f227fb9551841');

      ImageAnalysisManagement.SetImagePath('C:\MyPics\BlueBowl987.jpg');
      ImageAnalysisManagement.AnalyzeTags(ImageAnalysisResult);
      FOR i := 1 TO ImageAnalysisResult.TagCount DO BEGIN
        ResultString := ResultString +
          ImageAnalysisResult.TagName(i) + ' ' +
          ' -- ' + FORMAT(ImageAnalysisResult.TagConfidence(i)) + '\';
      END;
      MESSAGE(ResultString);
    END;
  }
  CODE
  {
    VAR
      ImageAnalysisManagement@1001 : Codeunit 2020;
      ImageAnalysisResult@1000 : Codeunit 2021;
      ResultString@1003 : Text;
      i@1002 : Integer;

    BEGIN
    END.
  }
}
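
In the warehouse scenario you would typically not just display all tags, but pick the best one and log it. Here is a minimal sketch of how that could look, using only the result functions shown above (TagCount, TagName, TagConfidence). BestTag, BestConfidence and the 0.8 threshold are illustrative, not part of the standard objects:

  // Minimal sketch: pick the tag with the highest confidence.
  // Assumes local variables BestTag : Text; BestConfidence : Decimal; i : Integer;
  // and an ImageAnalysisResult (codeunit 2021) already filled by AnalyzeTags.
  BestConfidence := 0;
  FOR i := 1 TO ImageAnalysisResult.TagCount DO
    IF ImageAnalysisResult.TagConfidence(i) > BestConfidence THEN BEGIN
      BestConfidence := ImageAnalysisResult.TagConfidence(i);
      BestTag := ImageAnalysisResult.TagName(i);
    END;

  IF BestConfidence >= 0.8 THEN
    MESSAGE('Recognized component: %1 (confidence %2)', BestTag, BestConfidence)
  ELSE
    MESSAGE('No confident match - consider adding this picture to the training set and re-training.');

Where you set the threshold is a judgment call – start strict and lower it as the model gets more training images.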

Over time you can add more pictures and re-train your model to create new iterations. As long as you set each new iteration as default, you don’t need to change anything in the calling code.

Attached sample images:
AI Sample Images

Happy Holidays, everyone!