I don’t get machine learning papers.

🤬 Unordered rants series.

I’m a machine learning practitioner. Or is it deep learning researcher?… Err, AI guy? I see a lot of labels out there for sure. Researcher, scientist, whatevs… aren’t machine learning, AI, and deep learning kinda the same thing anyway? I digress.

Clearly I am a machine learning something. I started about two years ago (on the academic side), and the field is deep and wide. It feels like a league. You’ve got the G.O.A.T.s like Hinton, Bengio, LeCun, Ng (how do you pronounce it?) et tutti quanti. Then there are the ones making the headlines, like Fei-Fei, Sutskever, Karpathy, Chollet, Dean, Goodfellow, etc. At least for me, that’s how I see the landscape.

Of course there are a bunch of other people I didn’t mention or haven’t heard of (please don’t scream at me). And there’s me, the one finding his way out the hard way… so why do I have to understand what the Herfindahl-Hirschman index is? Why does the Hellinger distance look so much like the Euclidean distance? Can’t we just call it that? Where does it say it’s a MUST to use equal-size square images for training a Haar-like classifier? Why does the specific part of my code that implements an algorithm take so many arguments, most of them already initialized (everybody saying “you don’t have to worry, the defaults should work just fine”, or, my fav, “just use it as is, the rest is taken care of”)? At what point does a frozen model stay frozen, like, for good?… Wait, what? Yeah!… The other day I was watching a demo and this guy literally tore a model apart, talking about autograd like it’s nothing. When is PyTorch 1.0 being released anyway?
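For what it’s worth, here’s the answer I eventually pieced together on the Hellinger thing, as a sketch rather than gospel: for discrete distributions, it really is just a Euclidean distance, taken between the element-wise square roots of the two probability vectors and scaled by 1/√2 so it lands in [0, 1].

```python
# Hellinger distance for discrete distributions: literally a Euclidean
# distance between sqrt(p) and sqrt(q), scaled by 1/sqrt(2) to stay in [0, 1].
import numpy as np

def hellinger(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2)

print(hellinger([0.1, 0.4, 0.5], [0.3, 0.3, 0.4]))  # ~0.18
print(hellinger([0.5, 0.5], [0.5, 0.5]))            # 0.0: identical distributions
print(hellinger([1.0, 0.0], [0.0, 1.0]))            # 1.0: disjoint supports, the max
```

And the frozen-model question, as far as I can tell (a PyTorch sketch with made-up layer sizes, not anyone’s official recipe): a parameter stays frozen for good when its `requires_grad` is off *and* it’s kept out of the optimizer. Forget the second part, and an optimizer with momentum can keep nudging a “frozen” weight with a stale gradient.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # pretend it's pretrained
head = nn.Linear(64, 10)                                  # the part we actually train

for p in backbone.parameters():
    p.requires_grad = False  # step 1: stop autograd from tracking the backbone

optimizer = torch.optim.SGD(head.parameters(),  # step 2: hand the optimizer ONLY the head
                            lr=0.01, momentum=0.9)
```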

And this is just the surface, mind you. Because then there are the research papers. Fast way to learn, they say. Read two per week, and do the experiments, they say. First of all, no scientific proof or mathematical derivation has ever gotten anybody into the mind of the author(s), none! It’s just lies.

I have gotten interested in GANs lately (here we go again). I. Don’t. Get. It. But then again, I see the trick. See, my mind picks up concepts in a different way, not through scientific proofs. I get an idea or concept better if I can see and listen to its author explain it, and watch out for the parts where they start with “let me explain it this way”, or “it’s like”. Then I go: “Ooh”… The best way I have come to explain it in clear terms is that I should literally pick the author’s mind before reading their material. Talk about borrowing a leaf? I need the whole damn tree.

Plus, let’s be honest: writing a research paper is simply sticking to centuries-old practices of adding a tiny bead to a long, long strand of knowledge. Reviewers only care about their format template and scientific lexicon. And the references. Does anyone ever read (completely) any of the material in the references section? Really? Authors and readers alike, give us a break. That was fine two centuries ago, when all the references fit in 5 or 10 bullet points. This is 2018, guys, easy with the references already. It turns out the paper that introduced us to the magic of Dropout was rejected! Do you get my point?
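So, for whoever else is stuck, here’s the “let me explain it this way” version I wish GAN papers opened with. A toy GAN, end to end, in the kind of PyTorch the demo guy was throwing around. Everything here is made up for illustration (the target distribution, the sizes, the learning rates); it’s the two-player game in miniature, not any particular paper’s model.

```python
# Toy GAN: G learns to fake samples from N(4, 1.25); D learns to call them out.
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(3000):
    real = 4 + 1.25 * torch.randn(64, 1)  # samples from the "true" data
    fake = G(torch.randn(64, 8))          # the forgeries

    # D's turn: push real toward "real" (1), forgeries toward "fake" (0).
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # G's turn: make D call the forgeries real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should end up near 4
```

The whole paper, for me, is hiding in `fake.detach()`: the discriminator trains on the forgeries without its gradients leaking back into the forger. The first time somebody said that out loud, I went: “Ooh”.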

I did get my first paper published, though. I remember that when I stepped forward to present it, I was nervous I was going to be called out for such poor-quality work. All I got was a “good subject” mention from a Turkish professor who thought I was a PhD student. Is that a good thing? Did I get it right? Anybody, somebody… please tell me how I did, so I can move on and learn more. Is there a place I can go to show my work and get an honest opinion on what my mind made up? Like a good student, I used a lot of jargon, just like I see others do. It made me realize I’m the only person who understands what I wrote. Is that how G.O.A.T.s are made? Is it possible to reverse-engineer a paper just by reading it? I saw a video by @GeorgeHotz, the #jailbreakbae (should we call him that?), but it’s like he turns his OBS stream on and forgets about it. I can’t sit through 5 hours, bro!

Thank you, arxiv sanity. You complete me.