Before we begin to build the model, we need to gather some images to load into Azure Custom Vision. As my model is going to be based on a smart parking application, I’m going to use 50 images of vacant parking spaces and 50 images of occupied parking spaces. However, feel free to use whatever you like, as long as there are two classes: for instance, cats and dogs, or apples and oranges. As this is for testing purposes, you can probably get away with as few as 10 images of each class.
Make sure your images are named appropriately, or put each class into a separate folder, as this will make things much easier when we upload them to the Custom Vision service.
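If you’d like a quick sanity check before uploading, here’s a small Python sketch that counts the images in each class folder. The folder names below match my parking example and are just an assumption; swap them for your own classes.

```python
from pathlib import Path

# Hypothetical folder names -- change these to match your own two classes.
CLASS_FOLDERS = ["occupied", "vacant"]
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".bmp"}

def count_images(base_dir):
    """Return a dict mapping each class folder to its image count."""
    counts = {}
    for name in CLASS_FOLDERS:
        folder = Path(base_dir) / name
        if folder.is_dir():
            counts[name] = sum(
                1 for p in folder.iterdir()
                if p.suffix.lower() in IMAGE_EXTENSIONS
            )
        else:
            counts[name] = 0
    return counts

if __name__ == "__main__":
    print(count_images("images"))
```

Run it from the folder above your class folders, and check that each class has at least 10 images before you carry on.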
Open a web browser, go to https://customvision.ai and click 'SIGN IN'.
If you already have an Office 365 or Microsoft Live account, enter your details and log in; otherwise you will need to sign up for a Microsoft Live account.
After logging in you will be presented with the Projects page. Click on ‘NEW PROJECT’.
The ‘create new project’ modal will open. Enter a name and description, set ‘project type’ to ‘classification’, set ‘classification types’ to ‘multiclass’ and set ‘domains’ to ‘general (compact)’, then click the ‘create project’ button. The General (Compact) domain allows you to export the model for use with Azure IoT Edge and other services if required.
Now that your project has been created, you can start uploading images by clicking on ‘add images’ just below the navigation bar. A file browser will then open; select all the images for your first class. In my case, I’m uploading all the images that contain occupied parking spaces.
An ‘image upload’ modal will open. At the bottom, enter the name of the first class in the ‘my tags’ field (in my case it’s ‘occupied’), then click the ‘upload xx files’ button and wait for the files to upload.
Once uploaded, select the ‘done’ button.
Now that you have uploaded your first class, repeat the same steps to upload your second class. Once you’ve uploaded your images for both classes, it’s time to train the model. To do this, click on the green ‘train’ button in the navigation bar. You will then be taken to the performance page.
After a minute or two the model will finish training and the performance page will automatically populate with two graphs showing the precision and recall of the model. Precision is the proportion of the model’s predictions for a class that were correct, while recall is the proportion of the actual images of that class the model correctly identified. Ideally you want the precision and recall values to be in the high 90s, as this indicates you have an accurate model.
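To make precision and recall concrete, here’s a short illustrative sketch of how they are calculated. The labels below are made up for the example and aren’t from a real model run.

```python
def precision_recall(actual, predicted, positive):
    """Compute precision and recall for one class ('positive')
    given lists of actual and predicted labels."""
    tp = sum(1 for a, p in zip(actual, predicted) if p == positive and a == positive)
    fp = sum(1 for a, p in zip(actual, predicted) if p == positive and a != positive)
    fn = sum(1 for a, p in zip(actual, predicted) if p != positive and a == positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of the 'occupied' calls, how many were right
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of the truly occupied spaces, how many were found
    return precision, recall

# Made-up example labels for five parking-space images.
actual    = ["occupied", "occupied", "vacant", "occupied", "vacant"]
predicted = ["occupied", "vacant",   "vacant", "occupied", "occupied"]
print(precision_recall(actual, predicted, "occupied"))
```

With these made-up labels, the model got 2 of its 3 ‘occupied’ predictions right (precision ≈ 0.67) and found 2 of the 3 truly occupied spaces (recall ≈ 0.67); the portal reports the same two numbers as percentages.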
Now that you have a trained model, it’s time to test it. This can be done directly in the Custom Vision interface by clicking the ‘quick test’ button on the navigation bar, then clicking the ‘browse local files’ button, selecting a file and uploading it. The modal will then update to show your predictions: a tag for each class you created, along with the probability of the class being matched as a percentage.
Once you have tested a few images and you are happy with the results, click the ‘X’ in the top right to close the modal.
Now, before we move on to the predictions page, let’s look at exporting the model for later use. To export the model, click on the ‘export’ link just below the navigation bar. The ‘choose your platform’ modal will open; as you can see, there are several options. Select the ‘dockerfile’ format, as this will be used in a later blog post, when I demonstrate how to use the Custom Vision model on Azure IoT Edge.
After clicking on the ‘dockerfile’ button you will be given the option to choose a version. Select ‘Linux’ and click on the ‘export’ button.
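Once the export has downloaded, you can build and run it with Docker and send it images. As a rough sketch, and assuming the exported container is running locally and serving predictions on port 80 at an ‘/image’ endpoint (check the README included in the export for the exact details), you could call it from Python like this. The container name, endpoint and ‘test.jpg’ file are all assumptions for the example.

```python
import json
import urllib.request

def top_prediction(response_json):
    """Return the (tag, probability) pair with the highest probability
    from a Custom Vision prediction response."""
    best = max(response_json["predictions"], key=lambda p: p["probability"])
    return best["tagName"], best["probability"]

if __name__ == "__main__":
    # Assumes you have already built and started the exported container,
    # for example (hypothetical image name):
    #   docker build -t parking-model .
    #   docker run -p 80:80 parking-model
    with open("test.jpg", "rb") as f:
        req = urllib.request.Request(
            "http://localhost/image",
            data=f.read(),
            headers={"Content-Type": "application/octet-stream"},
        )
    with urllib.request.urlopen(req) as resp:
        print(top_prediction(json.load(resp)))
```

We won’t need this until the Azure IoT Edge post, but it’s a handy way to confirm the export works end to end.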
The last page to look at in the Custom Vision portal is ‘predictions’. To view it, click on ‘predictions’ on the navigation bar. On this page, you can view the previous predictions the model has made, grouped by the iteration (version) of the model, and you can filter and sort the displayed predictions.
And there you have it: you’ve just built an Azure Custom Vision image classification model (hopefully)! If anything isn’t completely clear, I’ve also recorded a couple of step-by-step tutorial videos on YouTube, which you can find by following these links to video 1 and video 2.
In my next blog, I’ll show you how to consume your new Custom Vision model using the Postman REST client.
Posted by Nathan Gaskill