Defining Success in a Deep Learning world

Curious things these are — systems that use Deep Learning to inform their operations. Oh, they’ll mostly work, and very well at that, but every now and then they’ll do something that you weren’t expecting.

The key word in the sentence above is “you”: as they say in the world of UX, you are not the user. Oh, you might be the designer, or even a user, but remember, there are probably many, many other users out there, each with their own (subjective) wants, needs, desires, and expectations.

So yeah, the ML system you are using has been designed to tease out the commonalities across all of these users. This is not, however, about settling for the lowest common denominator. It is, instead, about internalizing:
• the “fuzziness” associated with predictions: “I usually want a cup of tea at 4pm, but every now and then I’d like a cup of coffee”
• our lack of knowledge about ourselves: “I thought I wanted a cup of tea, but I actually prefer this cup of coffee I was given by accident”
• the subjective nature of “success”: “I’d rather have had tea, but I’m also OK with the coffee (and, in retrospect, I’ll appreciate the tea more tomorrow!)”

Predicting and understanding our wants and needs (especially what we should want and need) is a really, really big unknown: philosophers have been working at this for millennia, and agreement is, well, still pretty lacking. Expecting ML to magically solve this is a sucker’s bet. Instead, to paraphrase the folks at Google Design,

“It’s precisely this fuzziness that makes ML so useful! It’s what helps us craft dramatically more robust and dynamic ‘if’ statements, where we can design something to the effect of ‘when something looks sort of like x, do y.’ And…it’s about co-learning, and adaptation over time.”
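To make that “fuzzy if statement” idea a bit more concrete, here is a minimal sketch in Python of what “when something looks sort of like x, do y” can look like in practice. Everything in it (the scikit-learn classifier, the toy tea-drinking history, the 0.7 confidence threshold, and the stub actions) is an illustrative assumption of mine, not something prescribed by the Google Design piece.

```python
# A classic "if" statement is brittle: it only fires on an exact match.
# The "fuzzy" version below asks a trained model how much the current
# situation *looks like* the target case, then acts on that probability.
# Assumes scikit-learn is available; the data and threshold are toy values.

from sklearn.linear_model import LogisticRegression

def serve_tea():
    print("Serving tea.")

def ask_user():
    print("Not confident enough; asking the user instead.")

# Toy history: hour of day -> did the user want tea? (1 = yes, 0 = no)
hours = [[8], [9], [10], [11], [15], [16], [17], [18]]
wanted_tea = [0, 0, 0, 0, 1, 1, 1, 1]
model = LogisticRegression().fit(hours, wanted_tea)

def fuzzy_handler(features, model, threshold=0.7):
    """'When something looks sort of like x, do y', expressed as code."""
    p_wants_tea = model.predict_proba([features])[0][1]
    if p_wants_tea >= threshold:   # looks sort of like "time for tea"...
        serve_tea()                # ...so do y
    else:
        ask_user()                 # otherwise hedge, and learn from the answer

fuzzy_handler([16], model)  # 4pm: high probability, so tea gets served
fuzzy_handler([9], model)   # 9am: low probability, so the system asks
```

The point is not this particular model or threshold; it is that the branching condition becomes a learned, probabilistic judgement that can keep adapting as new outcomes come in, which is the “co-learning, and adaptation over time” part of the quote.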

(Comic via http://existentialcomics.com/comic/197)
