Eyes on your trays 👀

Danial
Nov 24, 2020
Overview of eyes on your tray IoT project

This semester, I was enrolled in CS462 Internet of Things: Technology and Application at Singapore Management University (SMU). It is considered an "SMU-X" module; the "X" stands for "EXTREME". Just kidding, I'm not too sure what the "X" stands for, but these are modules that let students work in collaboration with an external sponsor. These courses usually come with a heavy workload because we are representing our school, so the work can't be half-baked. They typically run like this: external stakeholders bring a problem they are facing and engage students to come up with fresh, innovative ideas to solve it.

MSE & NEA logo

This semester, the sponsors were the National Environment Agency (NEA) and the Ministry of Sustainability and the Environment (MSE). The problem these two agencies faced was that they did not have an easy way to measure the success of their tray return campaigns.

Problem Statement:

With the move to instill a "keep it clean" culture, NEA launched the SG Clean campaign to improve the cleanliness of public spaces and encourage good personal habits. One such habit they are advocating is for patrons to return their trays on their own. NEA and MSE have conducted a handful of campaigns to encourage patrons to return their own trays; however, there has never been an easy way to measure the effectiveness of such programs.

The status quo relies on having someone stationed at one section of the hawker centre, manually counting the number of people who return their trays versus those who leave them behind. This is an extremely manual task and prone to human error; using technology frees that person up to do something more useful with their time. This method also means that only one section of the hawker centre is surveyed, and the short time frame of the survey can lead to inaccurate data. Currently, the agencies can only gather a small sample of patron behaviour this way, typically observing 20~30 patrons within the short time they are there and the segment they are assigned to.

There are multiple stakeholders in this movement:

  1. Cleaners - Reducing the number of tables they have to clean lightens the workload of the usually elderly cleaners and lets them focus on other tasks.
  2. Patrons - The group we are trying to encourage to clean up after themselves.
  3. NEA & MSE - Interested in finding out the success of their campaigns.
  4. Hawker centre managers - Interested in finding out how to improve the tray return rate.

Our proposed IoT solution tries to be as non-intrusive as possible for patrons, and for cleaners too, so they can continue with their current workflow. It gives NEA & MSE an efficient and more thorough way of obtaining tray return rate data to measure the success of their campaigns. Hawker centre managers gain better insights into tray return rates, allowing them to make better decisions.

IoT Solution Concept & Implementation:

Solution overview

The solution we propose makes use of machine learning and cameras connected to Raspberry Pis. The cameras are deployed in a few key locations: the store, the tables, and the tray return station. Using machine learning, we train a model to detect tables, table occupancy, trays, and QR codes. The detections are then sent to the cloud, where we use them to gather insights on patron behaviour and measure the effectiveness of the tray return campaigns.
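To make the pipeline concrete, here is a minimal sketch of what each camera unit could run, assuming a Raspberry Pi with a camera exposed through OpenCV. The endpoint URL, camera ID, sampling interval, and the placeholder detection function are all illustrative; the actual deployment ran our trained detection model in place of the stub.

```python
# Minimal sketch of the per-camera loop on a Raspberry Pi.
# The endpoint, camera ID, and detection logic are placeholders.
import time

import cv2
import requests

ENDPOINT = "https://example.com/api/detections"  # hypothetical cloud endpoint
CAMERA_ID = "table-03"                           # which Pi this reading came from


def detect(frame):
    """Placeholder for the trained detector.

    The real pipeline runs the object-detection model trained on our
    tray/table images here; this stub only decodes QR codes so the
    sketch stays runnable without the model weights.
    """
    data, _, _ = cv2.QRCodeDetector().detectAndDecode(frame)
    return {"qr_payload": data or None}


def main():
    cap = cv2.VideoCapture(0)  # Pi camera exposed as /dev/video0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                time.sleep(1)
                continue
            payload = {"camera_id": CAMERA_ID, "ts": time.time(), **detect(frame)}
            try:
                requests.post(ENDPOINT, json=payload, timeout=5)
            except requests.RequestException:
                pass  # drop this reading rather than block the capture loop
            time.sleep(10)  # sampling interval; purely illustrative
    finally:
        cap.release()


if __name__ == "__main__":
    main()
```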

Structure placed at hawker store to detect trays
Camera placed above the tray return station
Raspberry Pi and camera placed above the table

This allows us to gather information such as when a tray leaves the store, when it returns to the store, and when it is returned to the tray return station. From this we can measure a few key statistics, such as the rate at which trays are returned to a tray return station and the duration of a tray's journey back to the store. The model also lets us measure table occupancy and how likely certain groups are to return their trays, giving us insights into patron behaviour, e.g. which group size is most likely to return their trays.
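As a rough illustration, these statistics can be derived from the uploaded events with a few lines of pandas. The column names and the tiny sample table below are made up for the example, not our actual schema.

```python
# Sketch of deriving the key statistics from the tray event stream.
import pandas as pd

# Each row: a tray identified by its QR code, with timestamps for when it
# left the store, reached a return station (if ever), and came back.
events = pd.DataFrame({
    "tray_id": ["T01", "T02", "T03"],
    "left_store": pd.to_datetime(["12:01", "12:05", "12:10"]),
    "returned_to_station": pd.to_datetime(["12:35", None, "12:48"]),
    "back_at_store": pd.to_datetime(["12:40", "13:02", "12:55"]),
})

# Tray return rate: trays that reached a return station / trays used.
return_rate = events["returned_to_station"].notna().mean()

# Average journey time: how long a tray takes to get back to the store.
journey = (events["back_at_store"] - events["left_store"]).mean()

print(f"Tray return rate: {return_rate:.0%}")
print(f"Average journey back to the store: {journey}")
```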

Currently, there are not many solutions that collect tray return rate data; most existing efforts, such as incentives and tray-collecting robots, focus on getting people to return their trays. None of these are able to measure the effectiveness of the tray return campaigns.

Other types of sensors, such as RFID, have been tried previously, but without much success. One limitation is that the sensors attached to the trays may get damaged during the intense washing process. According to the agencies, machine learning is an idea that had been toyed with but never tested. Setting up the cameras and a machine learning infrastructure opens the door to collecting different data points in the future to better understand patron decisions, which can help inform key decisions.

The main page of the dashboard
Tray return rate over time

With the data collected, we are able to generate graphs to gain better insights. For example, the tray return rate over time shows us at what time of day most people return their trays.
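For reference, a chart like this can be produced in a few lines of pandas, assuming the cloud export has one row per tray pickup with a boolean flag for whether that tray eventually reached a return station. The file and column names here are hypothetical.

```python
# Hourly tray return rate from a hypothetical export of the event log.
import pandas as pd
import matplotlib.pyplot as plt

log = pd.read_csv("tray_events.csv", parse_dates=["picked_up_at"])  # placeholder file
hourly = (
    log.set_index("picked_up_at")["returned"]  # boolean: did this tray come back?
    .resample("1H")
    .mean()  # fraction of trays returned, per hour
)

hourly.plot(kind="line", xlabel="Time of day", ylabel="Return rate")
plt.tight_layout()
plt.show()
```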

Tray return statistics

We are able to obtain key insights such as the average tray return rate, which measures the number of trays returned to the tray return station over the total number of trays used. We found that only 8% of patrons who took trays returned them to the tray return station. The cameras on the tables gave us a second measure of the ratio of people returning trays: 8 out of the 49 people we observed. From there we can deduce the group size most likely to return their trays, which is usually 3~4 people.

The store page in the dashboard
Enhanced store view

The data collected from the tray return station and the store gives us real-time figures for the number of trays in use and the number that have been returned. Here we have 37 trays currently in use and 27 trays that were used and returned to the store, meaning 10 trays are currently at neither the tray return station nor the store. The average time for a tray to return is approximately 40 minutes, with a tray return rate of 8%. We can also see when people decide to take a tray, which peaks with the lunch crowd.
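The running counts on this page boil down to simple stateful counting of "tray out" and "tray in" events from the store camera. A stripped-down sketch of that idea (event names are placeholders):

```python
# Simplified sketch of the counter behind the store view. The store camera
# reports when a tray leaves or comes back, and the dashboard keeps a tally.
class StoreTrayCounter:
    def __init__(self) -> None:
        self.taken = 0     # trays that have left the store so far today
        self.returned = 0  # trays that have come back to the store

    def handle(self, event: str) -> None:
        if event == "tray_out":
            self.taken += 1
        elif event == "tray_in":
            self.returned += 1

    @property
    def still_out(self) -> int:
        # trays currently at neither the store nor the return station
        return self.taken - self.returned


counter = StoreTrayCounter()
for event in ["tray_out", "tray_out", "tray_in"]:  # toy event stream
    counter.handle(event)
print(counter.taken, counter.returned, counter.still_out)  # 2 1 1
```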

Table page in the dashboard

From the Raspberry Pi deployed at the table, we are able to detect the number of patrons, the number of trays, and the average time patrons spend occupying the seats, which is approximately 24 minutes. From this Pi we gain insights into the number of patrons who clean up after themselves and how long tables are left dirty if patrons do not return their trays.
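One simple way to turn the table detections into an average occupancy duration is to time the gap between a table becoming occupied and becoming empty again. The sketch below assumes the detector is polled periodically and reports a boolean; names are illustrative.

```python
# Sketch of tracking how long each group occupies a table.
import time


class TableOccupancyTimer:
    def __init__(self) -> None:
        self.occupied_since = None  # timestamp when the current group sat down
        self.durations = []         # seconds per completed seating

    def update(self, occupied: bool) -> None:
        now = time.time()
        if occupied and self.occupied_since is None:
            self.occupied_since = now  # a group just sat down
        elif not occupied and self.occupied_since is not None:
            self.durations.append(now - self.occupied_since)  # they left
            self.occupied_since = None

    def average_minutes(self) -> float:
        if not self.durations:
            return 0.0
        return sum(self.durations) / len(self.durations) / 60
```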

Tray return station page in the dashboard

With the Raspberry Pi mounted above the tray return station, we are able to analyse the number of trays returned at that particular station and the average time a tray remains at the station. It also picks up when trays are returned to the station and displays this as an aggregate.

Insights:

As we were able to deploy the project for close to a full day, the data collected was sufficient to come up with some interesting insights for the sponsors and to address the problem statement of not having an easy way to calculate tray return rates.

Some key findings:

  1. The average tray return rate is 8%.
  2. Groups of 3 or 4 people were more likely to return their trays.
  3. The distance between the tray return station and the table did not have much effect on people returning trays.
  4. Tables are usually cleaned within 1~2 mins.

The sponsors' feedback was fantastic, and they were really interested in the project and our key findings. They had thought of using machine learning but did not have the time and resources to conduct an experiment. The experiment was a successful proof of concept that machine learning and cameras could be used to analyse tray return rates. We shared our challenges and potential limitations with them, as well as how to better scale the solution if it were to go into production. We also explained that the lack of data did not allow for a robust machine learning model, and that the solution could be further improved if a more robust model could be trained.

Challenges:

Now, I would be lying if I said it was smooth sailing throughout the project. As all of us only had experience building software, this was the first time we had to design something that included hardware. The differences in how you plan and execute the different types of projects were something we anticipated but were not completely prepared for.

One of the challenges we faced was that we did not have the ability to fully train a robust machine learning model. As we were only able to take a few hundred pictures of the tables and trays, the model worked, but there were still some kinks that could only be ironed out with more data.

Since we were using cameras, physical conditions could disrupt the cameras' sense-making abilities. Light played a big factor in the cameras' ability to detect the trays, tables, and chairs. A large shadow covering the entire table and chairs caused the model to give false positives at times. Thankfully, we only faced such issues at the start of the deployment; the sun and light were on our side for the rest of the day. For the system to be more robust and to prevent light from skewing the readings, more data has to be fed to the machine learning model so that it learns shadows and changes in lighting are not necessarily a bad thing.
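One common way to approximate "more data" without another photo shoot is to augment the training images with random brightness and contrast changes, so the model sees the same scene under many lighting conditions. The snippet below is a generic OpenCV-based sketch of that idea, not something we actually shipped; the parameter ranges are arbitrary starting points.

```python
# Generic brightness/contrast augmentation sketch to make a detector less
# sensitive to lighting. Parameter ranges are arbitrary starting points.
import random

import cv2
import numpy as np


def jitter_lighting(image: np.ndarray) -> np.ndarray:
    """Return a copy of the image with random brightness/contrast applied."""
    alpha = random.uniform(0.6, 1.4)  # contrast factor
    beta = random.uniform(-40, 40)    # brightness offset
    return cv2.convertScaleAbs(image, alpha=alpha, beta=beta)


def augment_dataset(paths):
    """Yield (original, jittered) pairs for a list of image file paths."""
    for path in paths:
        img = cv2.imread(path)
        if img is None:
            continue
        yield img, jitter_lighting(img)
```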

Lessons Learnt:

With this being the first-ever hardware project we have taken on, a key takeaway was that physical environmental factors can easily affect hardware systems and sensors. These factors might not be apparent during the ideation phase, which is why it is good to have a testing phase to try out the system in the exact environment it will be placed in, which is what we did a day before our deployment.

WiFi connection speeds and range differ greatly between the live environment and our test environment (a.k.a. home). When WiFi is involved, always check that it is able to meet your project requirements by testing it a few days earlier.
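A quick way to make that check repeatable is to script a speed and latency test, for example with the speedtest-cli package; the thresholds below are placeholders for whatever your project actually needs.

```python
# Quick WiFi sanity check before deployment day.
# Requires `pip install speedtest-cli`; thresholds are placeholders.
import speedtest

MIN_UPLOAD_MBPS = 2.0   # enough headroom to push detection payloads
MAX_PING_MS = 100.0     # keep the dashboard reasonably fresh

st = speedtest.Speedtest()
st.get_best_server()
download_mbps = st.download() / 1e6
upload_mbps = st.upload() / 1e6
ping_ms = st.results.ping

print(f"down {download_mbps:.1f} Mbps, up {upload_mbps:.1f} Mbps, ping {ping_ms:.0f} ms")
if upload_mbps < MIN_UPLOAD_MBPS or ping_ms > MAX_PING_MS:
    print("WiFi at the venue may not be good enough for the deployment.")
```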

Lastly, there are different considerations when working on projects that involve hardware versus software, such as the environment, the users, and the stakeholders. Stakeholder management is extremely important in hardware projects, as approvals need to be obtained before any hardware can be deployed in the area. Software projects, by comparison, can usually be tested locally and are more likely to have reproducible results in production.

As such, the three months were a tough but rewarding experience in which we students went through the full process of the project, from ideation to execution. It was truly an unforgettable experience.
