Can you recognize what train you’re in from the inside better than a computer?
The two train classes that I take most often in London are the Class 387 and the Class 700.
Class 387 trains operate between Cambridge and King’s Cross, which is the line that my daily commute is on. (They started running here about four months ago. Until about October of last year, I caught a Class 365 — known as the “happy train” — every morning.) You may also know the Class 387s from the inside if you’ve used the Gatwick Express.
The Class 700 runs a number of routes, including Bedford-Brighton, which also passes Gatwick. I take this line to get from St. Pancras to the Three Bridges Traincare Facility (“depot”), one of the places where the Class 700s are maintained.
Once you know what to look for, it’s not too difficult to tell trains apart from the outside. But for a passenger the inside counts too.
We’ll use fast.ai’s benchmark deep learning setup to train a classifier that can distinguish between Class 387 and Class 700 interiors.
To start, download a few hundred pictures from Google Images, filtering for “class 387 interior” and “class 700 interior.” Here are two clear examples.
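Before training, the downloaded images need to be organized the way fast.ai's folder-based loaders expect: one sub-folder per class, split into training and validation sets. Here's a minimal sketch using only the standard library; the filenames and the `data/` root are hypothetical stand-ins for wherever your Google Images downloads actually live.

```python
import random
from pathlib import Path

def split_into_folders(image_paths, label, root, valid_pct=0.2, seed=42):
    """Plan where each downloaded image should go: train/ or valid/,
    under its class label -- the layout fast.ai's from_folder loaders read.
    Returns (source, destination) pairs rather than moving files."""
    random.seed(seed)
    paths = sorted(image_paths)
    random.shuffle(paths)
    n_valid = int(len(paths) * valid_pct)
    split = {"valid": paths[:n_valid], "train": paths[n_valid:]}
    plan = []
    for subset, items in split.items():
        for p in items:
            plan.append((p, Path(root) / subset / label / Path(p).name))
    return plan

# Hypothetical filenames standing in for the downloaded pictures.
plan = split_into_folders([f"dl/387_{i}.jpg" for i in range(10)], "class_387", "data")
```

With a 20% validation split, two of the ten images land under `data/valid/class_387/` and the rest under `data/train/class_387/`; repeat for the Class 700 downloads.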
Copying what we did for high-speed, here’s what we get for a start:
73% accuracy isn’t bad, but lower than what we were able to get for high-speed exteriors. One reason we’re struggling a bit is that the data is mixed: not all pictures show train interiors.
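For the record, accuracy here is just the fraction of validation images whose predicted class matches the label. A tiny sketch, with made-up predictions chosen so that 8 of 11 are right, which lands near the 73% figure:

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match their labels."""
    assert len(predictions) == len(labels)
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

# Toy validation run (invented labels): 8 of 11 correct, roughly 0.73.
preds = ["700", "700", "387", "387", "700", "387", "700", "387", "387", "700", "700"]
truth = ["700", "387", "387", "387", "700", "700", "700", "387", "700", "700", "700"]
print(round(accuracy(preds, truth), 2))
```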
Let’s test the classifier on some pictures and see what it comes up with.
Think of the number above a picture as the probability it’s a Class 700 rather than a Class 387.
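For a two-class problem like this one, a single raw model score can be squashed into that probability with a sigmoid; P(Class 387) is then just the complement. A sketch with the standard library only; the scores below are invented for illustration, not taken from the actual model.

```python
import math

def p_class_700(score):
    """Map a raw score to P(Class 700); P(Class 387) is 1 minus this."""
    return 1.0 / (1.0 + math.exp(-score))

# Invented scores: confidently 700-ish, uncertain, confidently 387-ish.
for score in (4.0, 0.1, -3.0):
    p700 = p_class_700(score)
    label = "Class 700" if p700 >= 0.5 else "Class 387"
    print(f"score {score:+.1f} -> P(700) = {p700:.2f}, predicted {label}")
```

A score near zero gives a probability near 0.5, which is the "very uncertain" territory we'll see later with one of the pictures.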
The first picture on the left shows an airline cabin. Our classifier thinks it looks like the inside of a Class 700. Picking up that sort of association feels like a matter of taste rather than a matter of right or wrong, right? Next in the sequence we have two exterior pictures, which are correctly identified as a Class 700 and a Class 387 respectively.
Finally, two interiors. I recognize the second from the left as a Class 700 interior, but wasn’t sure about the last one on the right. So I went to look for some evidence to help decide the matter.
The machine is doing reasonably well, but here are some pictures that are mis-classified.
None of the mistakes look outrageous. In fact, there's a joker in the set: a picture that only counts as a mistake because its label is wrong, not its classification. Can you spot it?
Two of the pictures include a toilet, and we didn't give our machine many toilet examples to work with, so that's forgivable. The third picture from the right doesn't look like an interior from either of our two classes. In fact, it's a Class 800. An 800 is closer to a 700 than it is to a 387, so let's go along with the machine's association here.
The middle picture is a Class 387 backed up against another vehicle, which we can forgive our machine for not understanding. The other exterior picture, of the Class 700, is an unfortunate mistake, but we wanted to focus on interiors, so again we can accept the error. If we added more exterior pictures to the training set, that mistake would go away.
The picture on the right turns out to be a picture of Charles Horton who is the CEO of GTR, the company that operates Class 700 trains on the Thameslink route. The algorithm gives this picture a very uncertain score, which seems right given the content.
So what about the joker? The second picture from the left is one that Google wrongly returned for the search term "Class 387 interior." But our machine has rightly got it down as a Class 700. So in one small way (identifying the interiors of certain trains), our classifier is now better than Google.
Apart from correcting the labeling mistake, we can try to improve our results by increasing the size of our dataset: we take the images we have, modify each of them a bit, and add the modified copies back into the set as new images. This process is called image augmentation.
Here’s an example of how you can take one image and turn it into several new images.
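The idea can be sketched in a few lines of numpy: start from one image array and generate transformed variants such as a mirror flip, a random crop, and a brightness jitter. This is an illustration of the concept, not fast.ai's actual transform pipeline, and the "photo" below is a random array standing in for a real picture.

```python
import numpy as np

def augment(img, rng):
    """Return a few transformed copies of one H x W x 3 image:
    a horizontal flip, a random crop, and a brightness jitter."""
    variants = [np.fliplr(img)]  # mirror left-right
    h, w = img.shape[:2]
    top = rng.integers(0, h // 4)
    left = rng.integers(0, w // 4)
    # Random crop to three-quarters of the original height and width.
    variants.append(img[top: top + 3 * h // 4, left: left + 3 * w // 4])
    factor = rng.uniform(0.8, 1.2)  # brightness jitter
    variants.append(np.clip(img * factor, 0, 255).astype(img.dtype))
    return variants

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 96, 3), dtype=np.uint8)  # stand-in "photo"
new_images = augment(img, rng)
```

Each original picture now contributes several slightly different training examples, which is exactly what the paragraph above describes.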
Image augmentation didn’t help much when we were looking at high-speed train exteriors. This makes sense intuitively: the exterior of a high-speed train has symmetric smooth surfaces that are relatively invariant to transformation so you wouldn’t expect the augmented images to help much.
For interiors, my expectation is that image augmentation will improve the classifier significantly. An interior contains more distinct shapes, so perspective, zoom, and the like matter more. Training with more variants should sharpen the results.
And so it turns out to be: we eventually arrive at an accuracy of 81%. That doesn’t seem bad for a first effort. Here are the ones it is still getting wrong:
Source: Deep Learning on Medium