Voice Assistants in the Black Lives Matter Movement and Artificial Intelligence Course Programs

Original article was published by David Yakobovitch on Artificial Intelligence on Medium


Venture Capital Investments in AI 2020

Tech companies are programming voice assistants to understand the phrase ‘Black Lives Matter’ in the wake of the police killing of George Floyd and racial discrimination in the USA. Amazon and Apple are leading the pack, training Alexa and Siri respectively on the current protests to make them responsive to human needs.

Voice assistants have come a long way, and these latest developments will add value to human interactions. For instance, Siri can differentiate between ‘Black Lives Matter’ and ‘Do all lives matter’ and offer sensible answers on social justice.

Credit: CNBC.com

Colleges are offering artificial intelligence programs, and students looking for an exciting field should consider data science and AI courses. Digital transformation around AI, machine learning, and data science shows that demand for these disciplines will rise in 2020 and the coming years. Students should take advantage of this opportunity and enter the field. A course in artificial intelligence requires students who excel in mathematics, given coursework requirements such as approximation, matrices, linear transformations, and regression.

The acceleration of artificial intelligence and machine-learning investments is motivating start-ups to develop innovative solutions and build their business plans around AI. As the race for artificial intelligence domination heats up, Europe, China and the United States are funding start-ups at a high rate. This begs the question: Are these venture capital investments realistic or out of control?

These and more insights in our weekly AI update.

Artificial Intelligence Course Programs

Given the increasing role of artificial intelligence in the world, students intent on majoring in engineering or computer science may want to seriously consider specializing in AI.

To earn a degree in Artificial Intelligence, you can expect to take a core of computer science classes covering topics such as: imperative computation, functional programming, sequential data structures and algorithms¹, and computer systems.

Credit: Sift AG

Math requirements will involve coursework in areas such as: differential and integral calculus, matrices and linear transformations, modern regression, probability theory, and integration and approximation.
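To see how those math topics connect in practice, here is a minimal sketch (synthetic data, not any particular course's assignment) of ordinary least-squares regression solved with matrices — one exercise that touches matrices, linear transformations, regression, and approximation all at once.

```python
import numpy as np

# Toy dataset: 100 samples, 3 features, known "true" weights plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

# Add an intercept column, then solve the least-squares problem
# min_w ||X1 w - y||^2 — equivalently the normal equations (X1^T X1) w = X1^T y.
X1 = np.hstack([np.ones((100, 1)), X])
w, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(w)  # intercept near 0, remaining weights near [2, -1, 0.5]
```

With only mild noise, the recovered weights approximate the true ones closely — the kind of result a first regression course builds toward.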

For starters, high schoolers eyeing entry into a top #artificialintelligence program should take a full load of honors/AP courses in physics, chemistry, biology, calculus, trigonometry, geometry, and statistics. Additionally, students should take discrete mathematics, since this is the foundation of modern-day computer science and includes topics such as combinatorics, probability, number theory, logic, and graph theory.

While discrete math is a staple of most high school math competitions, it is not always offered by schools because its content is not the primary focus of high-stakes state standardized tests or the SAT.

Some would argue against over-specializing during your undergraduate study given how rapidly technology develops. While this is, in many cases, a wise course of action, AI programs² still provide students with a sufficient dose of general engineering and computer science coursework to prepare them for any type of career in the tech industry.

Venture Capital Investments in AI

Artificial intelligence and machine learning have been getting lots of attention for the past few years. It goes without saying that startups are playing into this trend and raising more money than ever, as long as they have cognitive technologies³ in their business plans.

Not only are startups raising increasingly eye-opening amounts of money, but venture capital (VC) funds themselves are raising skyrocketing levels of new capital if they focus their portfolios on AI and related areas.

But are we in a bubble? Are these VC investments in AI realistic or out of control?

AI is not new. In fact, AI is as old as the history of computing. Each wave of AI interest and decline has been both enabled and precipitated by funding. In the first wave, it was mostly government funding that pushed AI interest and research forward. In the second wave, it was combined corporate and venture capital interest. In this latest wave, AI funding seems to be coming from every corner of the market.

Governments, especially in China, are funding companies at increasingly eye-watering levels, corporations are pumping billions of dollars of investment into their own AI efforts and development of AI-related products, and VC funds are growing to heights not seen since the last VC bubble.

Deepfakes Rising at Alarming Levels

Deepfakes have recently emerged as a legitimate and scary fraud vector. A deepfake today uses artificial intelligence to combine existing imagery to replace someone’s likeness, closely replicating both their face and voice. Essentially, a deepfake can impersonate a real person, making them appear to say words they have never even spoken.

Worryingly, the number of deepfake videos online doubled in less than a year, from 7,964 in December 2018 to more than 14,000 just nine months later. Deepfake technology⁴ is something that organisations need to be aware of as it’s likely that fraudsters will weaponise this technology to commit cybercrime, adding yet another string to their bow.

With the help of the internet, today we see realistic #deepfakes become more commercial, from use in pornography to infiltration of popular culture and other nefarious practices.

We all remember the viral deepfake of President Obama and the mass realisation it triggered that deepfakes can spread misinformation on a substantial scale. Going beyond a hoax, and verging on corruption, the use of the technology came under scrutiny as a result. Looking at the proliferation of fake political news on social media today, we can see parallels between Stalin’s photo-doctoring methods and more modern deception methods.

Machine Learning Algorithms Aid Robots

Robots have got pretty good at picking up objects. But give them something shiny or clear, and the poor droids will likely lose their grip. Not ideal if you want a kitchen robot that can slice you a piece of pie.

Their confusion often stems from their depth camera systems. These cameras shine infrared light on an object to detect its shape, which works pretty well on opaque items. But put them in front of a transparent object, and the light will go straight through and scatter off reflective surfaces, making it tricky to calculate the item’s shape.

Researchers from Carnegie Mellon University have discovered a pretty simple solution: adding consumer color cameras to the mix. Their system combines the cameras with machine learning algorithms⁵ to recognize shapes based on their colors.

The team trained the system on a combination of depth camera images of opaque objects and color images of the same items. This allowed it to infer different 3D shapes from the images — and the best spots to grip. The robots can now pick up individual shiny and clear objects, even if the items are in a pile of clutter.
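The core idea — fuse color features with unreliable depth features and let a learned model sort it out — can be illustrated with a toy sketch. This is not the CMU system; the features, labels, and model below are invented stand-ins for a much larger pipeline.

```python
import numpy as np

# Toy "graspable?" classifier trained on fused color + depth features.
# For transparent objects depth is noisy, but the color channel still
# carries signal, so a jointly trained model can recover the shape cue.
rng = np.random.default_rng(1)
n = 400
color = rng.normal(size=(n, 4))   # placeholder color descriptors
depth = rng.normal(size=(n, 4))   # placeholder depth descriptors
labels = (color[:, 0] + depth[:, 0] > 0).astype(float)  # synthetic target

X = np.hstack([color, depth])     # early fusion: concatenate features
w = np.zeros(X.shape[1])
for _ in range(500):              # plain logistic-regression gradient descent
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - labels) / n

acc = ((X @ w > 0) == labels.astype(bool)).mean()
print(f"training accuracy: {acc:.2f}")
```

The design point mirrors the article: neither sensor alone is enough, but concatenating both modalities before training lets the model lean on whichever channel is informative for a given object.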

How Weird Is Artificial Intelligence?

These days, it can be very hard to determine where to draw the boundaries around artificial intelligence. What it can and can’t do is often unclear, as is where its future is headed.

In fact, there’s also a lot of confusion surrounding what AI really is. Marketing departments⁶ have a tendency to somehow fit AI into their messaging and rebrand old products as “AI and #machinelearning.” The box office is filled with movies about sentient AI systems and killer #robots that plan to conquer the universe. Meanwhile, social media is filled with examples of AI systems making stupid (and sometimes offensive) mistakes.

Janelle Shane runs the famous blog AI Weirdness, which, as the name suggests, explores the “weirdness” of AI through practical and humorous examples. In her book, Shane taps into her years-long experience and takes us through many examples that eloquently show what AI — or more specifically deep learning — is and what it isn’t, and how we can make the most out of it without running into the pitfalls.

While the book is written for the layperson, it is definitely a worthy read for people who have a technical background and even machine learning engineers who don’t know how to explain the ins and outs of their craft to less technical people.

AI-Powered App to Assess Quality of Tuna

A Japanese chain of sushi restaurants is using an AI-powered app to assess the quality of tuna — a key step in the preparation of sushi that traditionally requires years of training from experienced human buyers. But can it really replace a human’s fish sense?

The app, named Tuna Scope, was developed by Japanese advertising firm Dentsu Inc¹¹. It uses machine learning #algorithms trained on thousands of images of the cross-sections of tuna tails, a cut of the meat that can reveal much about a fish’s constitution.

Credit: Dentsu

From a single picture, the app grades the tuna on a five-point scale based on visual characteristics like the sheen of the flesh and the layering of fat. For an experienced fish grader, these attributes speak volumes about the sort of life the fish led, what it ate, and how active it was — and thus the resulting flavor.

Dentsu claims that its AI has captured the “unexplainable nuances of the tuna examination craft,” and in tests comparing the app with human buyers, the app issued the same grade more than four times out of five.
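Grading an image on a five-point scale is, at heart, a five-class image classification problem. Here is a hypothetical sketch of that idea — not Dentsu’s actual model — using synthetic feature vectors in place of tail-section photos and a simple nearest-centroid classifier in place of their trained network.

```python
import numpy as np

# Five grades, each with a "true" appearance in a 16-dim feature space.
rng = np.random.default_rng(2)
grades = np.arange(5)
centers = rng.normal(size=(5, 16)) * 3

# Synthetic training set: 40 noisy examples per grade.
X = np.vstack([c + rng.normal(size=(40, 16)) for c in centers])
y = np.repeat(grades, 40)

# "Training" = average the feature vectors labeled with each grade.
centroids = np.array([X[y == g].mean(axis=0) for g in grades])

def grade(img_vec):
    # Predict the grade whose centroid is closest in feature space.
    return int(np.argmin(np.linalg.norm(centroids - img_vec, axis=1)))

print(grade(centers[3]))
```

A production system would replace the hand-rolled centroids with a convolutional network trained on the thousands of labeled tail-section photos the article describes, but the input-to-grade mapping is the same shape.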

But sushi experts and fishmongers are a little more cautious about Tuna Scope’s ability to replace fish graders, especially those buying meat for high-end sushi and sashimi.

How Apple Is Deploying Artificial Intelligence

AI has become an integral part of every tech company’s pitch to consumers. Fail to hype up machine learning or neural networks when unveiling a new product, and you might as well be hawking hand-cranked calculators. But judging by its recent WWDC performance, Apple has adopted a smarter and quieter approach.

Sprinkled throughout Apple’s announcements about iOS, iPadOS, and macOS were a number of features and updates that have machine learning at their heart. Some weren’t announced onstage, and some features that almost certainly use AI weren’t identified as such.

What these updates do show, though, is Apple’s interest in using machine learning to deliver small conveniences rather than some grand, unifying “AI” project, as some tech companies have promised with their own digital assistants⁷, claiming to seamlessly improve your life by scheduling your calendar, preempting your commute, and so on.

This latter project was always going to be a failure as AI, for all its prowess, is basically just extremely good pattern-matching software. This has myriad uses — and some are incredibly unexpected — but it doesn’t mean computers can parse the very human complexities of something as ordinary as your calendar appointments, a task that relies on numerous unspoken rules about your priorities, routine, likes and dislikes, and more.

The best example of Apple’s approach is the new handwashing feature on the Apple Watch, which uses AI to identify when you’re scrubbing your mitts and starts a timer. It’s a small and silly feature, but one that asks little of the user while delivering a useful function.

AI-Powered Noise Cancellation from Google

Google Meet’s new AI-powered background noise cancellation has started rolling out. Google announced the feature back in April for its G Suite Enterprise and G Suite Enterprise for Education customers. It’s coming to the web first, with iOS and Android following later.

An online video shows the software in action, with G Suite’s director of product management Serge Lachapelle demonstrating how it can pretty seamlessly remove the sound of crackling crisp packets, clicking pens, or glass clinking. Google’s announcement said the tech will also work on dogs barking or the clicking of a keyboard.

Google has been working on the feature for around a year and a half, using thousands of its own meetings to train its AI model. YouTube clips of lots of people talking were also used by the team. However, Lachapelle was keen to emphasize that although the feature will improve over time, the company will not directly use external meetings to train it. Instead, it will use customer support channels to try to identify where the software might be going wrong.

Google’s processing happens in the cloud, meaning it can work consistently on a much broader range of hardware. Eventually, this will include smartphones. Lachapelle emphasizes that the data is encrypted during transport, and it’s never accessible outside of the de-noising process.

Voice Assistants and the Black Lives Matter Movement

Digital assistants from Amazon, Apple, and Google state their support for the Black Lives Matter movement when prompted. As people have protested in all 50 states and across the world over the death of George Floyd and against racism and police brutality, tech companies have responded by putting out statements of solidarity against racial injustice.

Though not all tech companies or their executives have outright said the words “black lives matter” in their public statements, Amazon, Apple, and Google have programmed their voice assistants to state the phrase.

The #voiceassistants also have responses to the question “do all lives matter?”, a right-wing refrain commonly used to criticize the Black Lives Matter movement. Apple’s and Google’s assistants refute the question more emphatically than Amazon’s Alexa, which is surprising given that Amazon CEO Jeff Bezos publicly corrected a customer who angrily used the phrase in an email last week.
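Giving an assistant a fixed answer to a specific phrase is typically a rule layer sitting in front of the general model. The sketch below is illustrative only — the phrases, responses, and fallback are placeholders, not any vendor’s actual behavior.

```python
# Curated responses checked before any general-purpose answer engine.
# All strings here are placeholder examples, not real assistant output.
CURATED = {
    "do black lives matter": "Black lives matter.",
    "do all lives matter": "Saying 'all lives matter' is often used to "
                           "dismiss the Black Lives Matter movement.",
}

def normalize(utterance: str) -> str:
    # Lowercase and strip punctuation so matching is robust to phrasing.
    return "".join(
        ch for ch in utterance.lower() if ch.isalnum() or ch == " "
    ).strip()

def respond(utterance: str) -> str:
    key = normalize(utterance)
    # Curated answers take priority; everything else falls through.
    return CURATED.get(key, "Here's what I found on the web...")

print(respond("Do Black Lives Matter?"))  # curated path
print(respond("What's the weather?"))     # fallback path
```

Exact-match lookup keeps the curated path predictable and auditable, which matters precisely because these are sensitive queries.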

Computer Vision Technology

Sony has announced the world’s first image sensor with integrated AI smarts. The new IMX500 sensor incorporates both processing power and memory, allowing it to perform machine learning-powered #computervision tasks without extra hardware. The result, says Sony, will be faster, cheaper, and more secure AI cameras.

Over the past few years, devices ranging from smartphones to surveillance cameras have benefited from the integration of AI⁹. Machine learning can be used to not only improve the quality of the pictures we take, but also understand video like a human would; identifying people and objects in frame. The applications of this technology are huge (and sometimes worrying), enabling everything from self-driving cars to automated surveillance.

But many applications rely on sending images and videos to the cloud to be analyzed. This can be a slow and insecure journey, exposing data to hackers. In other scenarios, manufacturers have to install specialized processing cores on devices to handle the extra computational demand, as with new high-end phones from Apple, Google, and Huawei.

This first-generation AI image sensor, though, is unlikely to end up in consumer devices like smartphones and tablets, at least to begin with. Instead, Sony will be targeting retailers and industrial clients, which are beginning to use computer vision technology¹⁰ more widely.

Works Cited

¹Sequential Data Structures and Algorithms, ²AI Programs, ³Cognitive Technologies, ⁴Deepfake Technology, ⁵Machine Learning Algorithms, ⁶Marketing Departments, ⁷Digital Assistants, ⁸Voice Assistants, ⁹Integration of AI, ¹⁰Computer Vision Technology

Companies Cited

¹¹Dentsu Inc

More from David Yakobovitch:

Listen to the HumAIn Podcast | Subscribe to my newsletter