Initial Testing

We just completed the majority of the tests we specified in our last post. We collected images of ourselves walking towards a trash bin from 20, 15, 10, 5, and 2 feet away, carrying 5 different types of waste. We then ran Darknet on all of these images and recorded what it detected at each distance, repeating the process for three different camera angles. The video above shows the camera position that performed best, which is also the lowest camera angle we tested.
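For anyone curious how a run like this can be scripted, here is a minimal sketch that batch-runs Darknet's standard command-line detector over a folder of test images and scrapes the per-class confidences it prints. The directory name, config paths, and output parsing are assumptions for illustration, not an exact copy of our pipeline.

```python
import re
import subprocess
from pathlib import Path

# Hypothetical layout: test frames collected as JPEGs in one folder,
# e.g. named like "banana_10ft_angle2.jpg".
IMAGE_DIR = Path("test_images")
DARKNET = "./darknet"
DATA, CFG, WEIGHTS = "cfg/coco.data", "cfg/yolov3.cfg", "yolov3.weights"

# Darknet's "detector test" prints detections like "banana: 62%".
DETECTION = re.compile(r"^(\w[\w ]*): (\d+)%$")

results = {}
for image in sorted(IMAGE_DIR.glob("*.jpg")):
    out = subprocess.run(
        [DARKNET, "detector", "test", DATA, CFG, WEIGHTS,
         str(image), "-thresh", "0.25"],
        capture_output=True, text=True,
    ).stdout
    # Keep every (class, confidence) pair reported for this frame.
    results[image.name] = [
        (m.group(1), int(m.group(2)))
        for line in out.splitlines()
        if (m := DETECTION.match(line.strip()))
    ]

for name, detections in results.items():
    print(name, detections or "no detections")
```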

Our system did not perform as well as we expected. For each image, we recorded the model’s prediction confidence for the class we hoped to see for the respective object. The results were sparser than we anticipated, and Darknet had a lot of trouble recognizing that we were holding an object at all. In past tests we stood still while holding the object, which greatly improved performance. This test has given us a baseline, and in the coming weeks we will retrain Darknet with images we create ourselves along with images from sources like ImageNet and GitHub repositories like TrashNet. This should provide a significant performance boost to our system.
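As a rough sketch of what that retraining setup involves: Darknet expects plain-text files listing the training and validation image paths, with a YOLO-format label file alongside each image. The directory layout and 90/10 split below are hypothetical.

```python
import random
from pathlib import Path

# Hypothetical dataset directory; each .jpg needs a matching .txt label
# file in YOLO format (class x_center y_center width height, normalized).
images = sorted(Path("data/obj").glob("*.jpg"))
random.seed(0)
random.shuffle(images)

# Hold out roughly 10% of the images for validation.
split = int(0.9 * len(images))
Path("data/train.txt").write_text("\n".join(str(p) for p in images[:split]))
Path("data/valid.txt").write_text("\n".join(str(p) for p in images[split:]))

# Training is then launched with the standard Darknet CLI, e.g.:
#   ./darknet detector train data/obj.data cfg/yolo-obj.cfg darknet53.conv.74
```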