Incentivised Multi-Target, Multi-Camera Tracking with Untrusted Cameras (Part 2)


Grassland node-lite

Some friends of mine (Bitaccess) asked if they could run a Grassland node, so I'm building them this "lite" node. They couldn't dedicate an NVIDIA GPU just to running TensorFlow's Object Detection API, so I had to find a way to streamline the node. There are no other nodes in their area, so it's not as if they're competing for rank (yet).

The photo is of a Raspberry Pi 3 Model B with a tiny camera on it. It's powered from an outlet (top cord), but you could easily attach a battery or solar panel. The bigger cord is just an HDMI cable I temporarily attached to program it from my TV.

I changed the architecture to get it running on regular computers without powerful graphics cards.

What's going to happen is that this Pi will just do motion detection (OpenCV), tracking, and predicting. When it detects motion, it'll send a "detection request" (an image frame) to a cheap Digital Ocean server running the 'faster_rcnn_inception_v2_coco_2018_01_28' model from TensorFlow's Detection API, which helps it detect and identify what it's seeing.
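To make that concrete, here's a minimal sketch of the Pi-side loop: background-subtraction motion detection with OpenCV, and an HTTP "detection request" to the remote server. The `/detect` endpoint, thresholds, and response shape are my own assumptions for illustration, not the actual Grassland code.

```python
# Sketch only: motion detection on the Pi, detection offloaded to a remote server.
import cv2
import requests

DETECT_URL = "http://<droplet-ip>:5000/detect"   # hypothetical endpoint
MIN_MOTION_AREA = 500                            # ignore tiny contours (tune for your camera)

cap = cv2.VideoCapture(0)                        # Pi camera / webcam
bg = cv2.createBackgroundSubtractorMOG2()        # simple background model


def send_detection_request(frame):
    """POST a JPEG of the frame to the server running Faster R-CNN."""
    ok, jpg = cv2.imencode(".jpg", frame)
    if not ok:
        return None
    resp = requests.post(DETECT_URL, files={"image": jpg.tobytes()}, timeout=10)
    return resp.json()  # assumed: list of {label, score, box} dicts


while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    # [-2] keeps this working on both OpenCV 3 and 4 return signatures
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if any(cv2.contourArea(c) > MIN_MOTION_AREA for c in contours):
        detections = send_detection_request(frame)
        # ... hand detections off to the tracking thread (sketched further down)
```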

While waiting for the server to respond, the Pi will record and track the objects in motion. When it receives the precise bounding boxes of the objects in the frame from the server, another thread will rewind the video and track the objects across it until it loses them, at which point another "detection request" will be sent. If there are no more objects to track, it will just record idly in a loop until there are.
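A rough sketch of that "rewind and track" step is below. It assumes the server returns boxes as (x, y, w, h) pixel tuples and that the frames recorded while the request was in flight are buffered in memory. It needs opencv-contrib-python, and the tracker factory name varies across OpenCV versions (e.g. `cv2.legacy.TrackerKCF_create` on 4.5.1+).

```python
# Sketch only: follow server-supplied boxes across the buffered frames.
import cv2


def track_buffered(frames, boxes):
    """Track each detected box across the frames recorded while the
    detection request was in flight; return the last known boxes,
    or [] once everything is lost (caller then sends a new request)."""
    trackers = []
    first = frames[0]
    for box in boxes:
        t = cv2.TrackerKCF_create()              # name differs on newer OpenCV builds
        t.init(first, tuple(map(int, box)))      # expects (x, y, w, h) ints
        trackers.append(t)

    last_boxes = list(boxes)
    for frame in frames[1:]:
        still_tracking = []
        for t in trackers:
            ok, new_box = t.update(frame)
            if ok:
                still_tracking.append((t, new_box))
        if not still_tracking:
            return []                            # lost every object
        trackers = [t for t, _ in still_tracking]
        last_boxes = [b for _, b in still_tracking]
    return last_boxes
```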

I'll be putting all the code up on GitHub shortly so anyone can set up their own node.

For this Raspberry Pi implementation, everything (the 1080p camera, Wi-Fi, Ethernet, and four USB ports) is inside that little plastic case. The Pi and camera cost me $160 CAD with tax, but I paid local retail price. You could still do a decent job with a ten-year-old PC and a webcam; at most it'd cost you $50 if you didn't already have them.

In this case, however, I wanted components powerful enough to run Grassland's protocol effectively until I know the minimum resources it can get by on, but I also thought it'd be nice if they had something small, sleek, and unassuming. The Mickey Rooney of nodes, if you will.

Just in case you were wondering, here’s the type of data that a full implementation would give you…

The type of data a full node would provide
