The goal is to have a demonstrator model ready by mid-January 2020, but there is still work to do. What requirements must the robot meet? It must be able to drive around safely, recognize and pick up litter, and we also want interaction with people, who can help the robot.
The robot is equipped with cameras with which it can look around. While driving, it will 'see' litter, because it can recognize it using a trained algorithm. We do this by feeding the algorithm images of, for example, cigarette filters.
We use a Convolutional Neural Network (CNN), a self-learning algorithm that can find patterns in images. To train the algorithm it is very important that we have a large amount of good training data. The training data must be diverse, so that the robot can recognize litter in all lighting conditions and in all shapes and sizes.
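The text does not describe the network itself, but the core building block of a CNN can be illustrated in a few lines. Below is a minimal sketch using NumPy: one convolution over a camera frame, a ReLU activation, pooling, and a softmax that turns the result into class probabilities. The kernel values, the classifier weights, and the two classes ("litter" vs. "not litter") are purely illustrative assumptions, not the project's actual model.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - np.max(x))  # shift for numerical stability
    return e / e.sum()

# Hypothetical 8x8 grayscale camera frame and a 3x3 edge-detection kernel.
frame = np.random.rand(8, 8)
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)

feature_map = relu(conv2d(frame, kernel))   # one convolutional layer
pooled = feature_map.mean()                 # global average pooling
logits = np.array([1.5 * pooled, -pooled])  # hypothetical classifier weights
probs = softmax(logits)                     # e.g. [P(litter), P(not litter)]
print(probs)
```

A real network stacks many such convolution layers and learns the kernel values from the labeled training images, which is why the diversity of the training data matters so much.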
We anticipate that the robot will soon be able to estimate for itself how certain it is that the object in front of it is what it thinks it is. The probabilistic algorithm outputs a probability. For example, if the robot sees with 80% certainty that there is a can in front of it, we program it to pick the can up.
As long as it is, say, only 60% certain, with a 40% chance that the object is something else, the robot leaves it where it is and takes a photo with a GPS tag. At that moment we engage people, the public, who advise the robot online. We ask them "What is this?" and have them annotate the photo. This information is sent back to the robot.
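The decision rule described above amounts to a simple threshold policy: act when confident, otherwise defer to human annotators. A minimal sketch of that logic is below; the class names, the review queue, and the exact threshold handling are illustrative assumptions based on the 80% figure in the text, not the project's actual code.

```python
from dataclasses import dataclass, field

PICKUP_THRESHOLD = 0.80  # from the text: 80% certainty -> pick it up

@dataclass
class Detection:
    label: str         # best guess, e.g. "can"
    confidence: float  # probability assigned by the classifier
    gps: tuple         # (latitude, longitude) attached to the photo

@dataclass
class ReviewQueue:
    """Photos waiting for the public to answer 'What is this?' online."""
    items: list = field(default_factory=list)

    def submit(self, detection):
        self.items.append(detection)

def handle_detection(det, queue):
    """Pick up confident detections; send uncertain ones to human reviewers."""
    if det.confidence >= PICKUP_THRESHOLD:
        return "pick_up"
    queue.submit(det)  # GPS-tagged photo goes online for annotation
    return "ask_public"

queue = ReviewQueue()
print(handle_detection(Detection("can", 0.80, (52.37, 4.90)), queue))   # pick_up
print(handle_detection(Detection("can?", 0.60, (52.37, 4.90)), queue))  # ask_public
```

The answers collected from the public can then be fed back as new labeled training data, closing the loop between the robot and its human helpers.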
In addition to cleaning up litter, the added value of the robot therefore also lies in the collection of data. By giving the robot a function in public space and letting it interact with people, the robot will also contribute to public discussion and awareness about litter.