How to dev your way out of a problem: Building the ANS Smart Parking Application in Azure


IoT, AI, Cloud Native Application Development, smart parking

If you were lucky enough to join us at Microsoft Decoded, this title might sound familiar, as we delivered this talk to a packed room of eager Azure users. For those of you who didn't manage to get a seat, or missed the event, here's a recap.

A large part of our role in the Office of the CTO at ANS is to investigate and research the latest technology trends and look at ways ANS can use them to provide services to our customers. A couple of months ago our CTO asked us to put together a demo showcasing how the IoT and Artificial Intelligence services available in Azure could be used in a practical application.

This got us thinking about challenges that affect us in our everyday lives. One of the biggest challenges employees at ANS face is office parking. We have over 200 employees based in our Manchester office but only around 50 parking spaces. We were sure this was a problem we could alleviate using technology, and we wanted to prove that these services can solve real-world problems.

The first step: track the parking spaces. There are several ways of doing this, each with its own advantages. The most common would have been to use parking sensors fixed to the parking spaces, which use magnetic, ultrasonic or infra-red technology to detect whether a space is occupied. Alternatively, you could place a sensor at the entrance/exit that counts vehicles as they enter and leave the car park. These are both good solutions, but neither would meet our needs of tracking individual spaces while remaining cost-effective.

With that in mind, we decided to use two Cisco IP remote cameras mounted to the side of our office building. The two cameras gave us a complete view of the car park and had infrared illuminators, so we could still monitor spaces in the dark. They also came in at a fraction of the price of 50 individual parking sensors.

Once we started to capture images of the car park, we had to determine what we were going to do with them. Using AI, it is possible to perform what is called “classification” and let an AI model determine whether each parking space is vacant or occupied. We built our own machine learning model using the Python programming language and the open-source machine learning framework TensorFlow.
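To give a flavour of what that looks like, here is a minimal sketch of a binary image classifier in TensorFlow/Keras. The layer sizes and the 128x128 input are illustrative choices for this post, not the exact architecture we used.

```python
# Minimal sketch of a vacant/occupied classifier in TensorFlow/Keras.
# The architecture and input size are illustrative.
import tensorflow as tf

def build_space_classifier(input_shape=(128, 128, 3)):
    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255, input_shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        # Single sigmoid output: probability that the space is occupied.
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```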

Once the model was created, we trained it to tell the difference between a vacant and an occupied parking space. This required a lot of compute power, so the perfect place to do it was on powerful virtual machines hosted in Azure. Over 160,000 different images of parking spaces were used to train the model so that it could eventually give an accurate prediction of whether a space was vacant or occupied. The images showed cars in different weather and lighting conditions, as well as cases where the view of the parked car was obstructed by things such as trees or even people.
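The training step, in outline, looks something like the sketch below. It assumes the labelled crops of each parking space are laid out on disk in folders named after their class; the directory names, image size and epoch count are placeholders rather than our actual settings.

```python
# Training sketch, assuming folders data/train/occupied, data/train/vacant
# (and the same layout under data/val). All paths and values are illustrative.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train",
    label_mode="binary",          # two classes: occupied / vacant
    image_size=(128, 128),
    batch_size=32,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val",
    label_mode="binary",
    image_size=(128, 128),
    batch_size=32,
)

model = build_space_classifier()   # from the earlier sketch
model.fit(train_ds, validation_data=val_ds, epochs=10)
model.save("parking_classifier.h5")
```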

This solution worked, but it came with the cost and overhead of managing virtual machines to re-train the machine learning model and improve its accuracy. This is where the Azure Artificial Intelligence and Machine Learning services came into their own, and we started looking at the Azure Computer Vision API. It gives developers access to advanced algorithms for training custom machine learning models by uploading multiple images, all through an easy-to-use interface.

We found that using the Computer Vision API we were able to train the model with just 600 images, as opposed to the 160,000 images used on the VM-hosted TensorFlow model. This was partly because the Computer Vision API has already been extensively trained on other images, whereas the TensorFlow model had to be trained from scratch. Using Computer Vision was much quicker, required less resource and time, and there was no need to maintain underlying virtual machines. With the model trained, we were able to project onto the car park image an indication of whether each space was in use, along with a confidence percentage between 0% and 100%.
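Scoring a single parking-space crop against the published model is then just an HTTP call. The sketch below uses a placeholder endpoint URL and key, and the header name and response shape follow Azure's prediction API conventions; treat the exact details as an assumption rather than our production code.

```python
# Sketch of scoring one parking-space crop against the published model.
# PREDICTION_URL and PREDICTION_KEY are placeholders for the values issued
# when the trained model is published.
import requests

PREDICTION_URL = "https://<your-resource>.cognitiveservices.azure.com/<prediction-path>"
PREDICTION_KEY = "<your-prediction-key>"

def classify_space(image_bytes: bytes) -> dict:
    response = requests.post(
        PREDICTION_URL,
        headers={
            "Prediction-Key": PREDICTION_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    response.raise_for_status()
    # The service returns per-tag probabilities, e.g. "occupied" vs "vacant";
    # take the highest-probability tag as the space's state.
    predictions = response.json()["predictions"]
    best = max(predictions, key=lambda p: p["probability"])
    return {"state": best["tagName"], "confidence": best["probability"]}
```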

 

[Image: parking 0 – car park view with per-space occupancy overlay]

We decided to use Azure IoT services to run the machine learning model. We deployed Azure IoT Edge on a gateway device that the cameras were connected to. IoT Edge was able to capture and process the images on the local device, saving us the bandwidth costs associated with sending images over the internet into Azure.
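On the edge side, the loop boils down to: grab a crop of each space, score it locally, and forward only the small result message upstream. The sketch below assumes the Azure IoT Edge Python SDK (azure-iot-device); the camera helper, space IDs, output name and polling interval are all illustrative, and classify_space() is the function from the earlier sketch.

```python
# Edge-side sketch: score each space locally and send only the result upstream.
import json
import time
from azure.iot.device import IoTHubModuleClient, Message

client = IoTHubModuleClient.create_from_edge_environment()
client.connect()

while True:
    for space_id, image_bytes in capture_space_crops():   # hypothetical camera helper
        result = classify_space(image_bytes)               # local scoring; the image never leaves the gateway
        payload = json.dumps({
            "spaceId": space_id,
            "state": result["state"],
            "confidence": result["confidence"],
        })
        # Forward the small JSON message, not the image itself.
        client.send_message_to_output(Message(payload), "occupancy")
    time.sleep(60)   # re-check the car park once a minute (illustrative interval)
```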

IoT Edge runs the machine learning model locally and sends a message to Azure IoT Hub indicating whether each space is vacant or occupied. IoT Hub is a managed Azure service, hosted in the cloud, that acts as a central message hub for bi-directional communication between an IoT application and the devices it manages. In this case we used it to receive inbound messages from the gateway, which were then routed into an Azure SQL database. With this model there was no need to retain any of the images, as they are processed locally and discarded afterwards.
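One common way to land IoT Hub messages in an Azure SQL database is a small Azure Function triggered off the hub's events, which parses each message and writes a row. The sketch below shows that pattern rather than our exact routing; the SpaceOccupancy table and the SQL_CONNECTION_STRING setting are illustrative.

```python
# Sketch of an Azure Function that writes IoT Hub occupancy messages to SQL.
# Table and connection-string names are illustrative.
import json
import os

import azure.functions as func
import pyodbc

def main(event: func.EventHubEvent):
    reading = json.loads(event.get_body().decode("utf-8"))
    with pyodbc.connect(os.environ["SQL_CONNECTION_STRING"]) as conn:
        conn.execute(
            "INSERT INTO SpaceOccupancy (SpaceId, State, Confidence, RecordedAt) "
            "VALUES (?, ?, ?, SYSUTCDATETIME())",
            reading["spaceId"], reading["state"], reading["confidence"],
        )
        conn.commit()
```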

We then took the current and historical information produced by the machine learning model and stored in the SQL database, and used it to build Power BI dashboards and a mobile application that allow ANS employees to see how many spaces are available at any time. We could also analyse when the car park was at its busiest and the best times to arrive if you want to get a space.
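The "busiest times" analysis is essentially an aggregation over that history. As a rough sketch, using the illustrative schema above, the average occupancy per hour of the day looks something like this:

```python
# Sketch of the busiest-times query: average occupancy rate per hour of day.
# Table and column names follow the illustrative schema above.
import os
import pandas as pd
import pyodbc

conn = pyodbc.connect(os.environ["SQL_CONNECTION_STRING"])
busiest_hours = pd.read_sql(
    """
    SELECT DATEPART(hour, RecordedAt) AS HourOfDay,
           AVG(CASE WHEN State = 'occupied' THEN 1.0 ELSE 0.0 END) AS OccupancyRate
    FROM SpaceOccupancy
    GROUP BY DATEPART(hour, RecordedAt)
    ORDER BY HourOfDay
    """,
    conn,
)
print(busiest_hours)
```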

 

[Image: smart parking 2 – Power BI dashboard and mobile app view]

 

We’re pretty pleased with the first version, but in future releases we'll be looking at how we can integrate with other data sources so we can track when staff may be on annual leave or out of the office at meetings, and therefore more spaces are available in the car park. That data can then be used to train further machine learning models to predict future car park availability.

What started out as a request for a demonstration actually resulted in solving a genuine problem for ANS employees, and when we recently demonstrated our solution at one of our all-staff meetings it was incredibly well received.

We can’t wait to officially launch the app and see it in full swing. This is definitely the beginning of something really exciting, both for our employees and visiting customers. Watch this (parking) space!

 

*Please note, unlike standard CCTV, there is no need to record or retain images.*

 

Posted by Liam Hales