DEEP LEARNING IN ALL ITS GLORY

Source: Deep Learning on Medium

Deep learning is a subset of machine learning, which is itself a subset of artificial intelligence. It is a system that uses neural networks, collections of algorithms, to make decisions based on data the machine has gathered over time. Deep learning was crafted from inspiration drawn from the human brain, which is why its core structure is also known as an artificial neural network. In this process the machine categorizes new input based on what it already knows: it takes the information we provide and saves it for the future predictions and decisions it will need to make. In the past couple of years humanity has begun to notice the updates, new advancements, and major changes happening in the technology world, and as technology advances, deep learning has to follow. With artificial intelligence growing every year in robots, cars, and even the workforce, deep learning continues to be involved. The technology we once saw only in movies, and probably never imagined in our real lives, is now upon us.

TRACING THE HISTORY

What most people think is a new idea has actually been advancing for about 60 years. Deep learning dates back to the 1950s; specifically to 1958, when Frank Rosenblatt, a Cornell psychologist, unveiled the perceptron: “a single-layer neural network on a room-size computer,” or, as Rosenblatt called it, “a probabilistic model for information storage and organization in the brain.” In other words, it was a “prototype neural network.” (Parloff) Rosenblatt believed there were three fundamental questions to answer in order to at least begin to understand higher organisms and their capabilities:

  1. How is information about the physical world sensed, or detected, by the biological system?
  2. In what form is information stored, or remembered?
  3. How does information contained in storage, or in memory, influence recognition and behavior?

Thus began his research, and the design of the perceptron came to be.
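The error-driven update at the heart of Rosenblatt's design can be sketched in a few lines of modern code. This is a minimal illustration, not Rosenblatt's original hardware implementation: a single layer of weights is nudged whenever a prediction disagrees with the label, here on invented AND-gate data.

```python
# A minimal perceptron sketch (illustrative, not Rosenblatt's original
# setup): a single layer of weights, adjusted whenever the prediction
# disagrees with the label.

def train_perceptron(samples, labels, epochs=20, lr=1.0):
    """Learn weights and a bias with the classic perceptron rule."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Linearly separable toy data (a logical AND), the kind of problem a
# single-layer perceptron can solve.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
```

On linearly separable data like this, the perceptron rule is guaranteed to converge; Minsky's famous objection was that problems which are not linearly separable (such as XOR) are beyond a single layer's reach.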

The organization of the perceptron was based on how an eye works. It starts at the retina, then leads into the projection area, where connections are made from what was seen; next it enters the association area, where random connections are made. Last but not least come the responses, where, after choosing between the different options and putting together the knowledge it has, the system arrives at a final answer about what was seen. All of these observations foreshadowed what we know today as deep learning. However, in 1969 Marvin Minsky co-wrote a book titled Perceptrons that killed the idea of neural networks and of the perceptron itself.

The idea couldn’t stay quiet for too long, though, because once 1986 rolled around more discoveries were being made. Geoffrey Hinton, who had studied artificial intelligence at the University of Edinburgh, worked alongside others to create a way for neural networks to retrace their processing steps and correct mistakes that occurred along the way. Though much of the field thought they were crazy, the expedition moved forward. Hinton argued that building AI on logic, as had been done in the past, was not the right approach, because neural nets worked more like actual intelligence than logic did. (Parloff) The perceptron worked with a single layer of neurons, which limited its use, but Hinton put forward the idea of a multilayer neural network. To explain it, he used the image of a neural network interpreting photographs through many layers of units, with more and more details of the photograph detected by each layer as the image journeyed through them one after another. (Parloff)

What Hinton was trying to figure out was not a way to pass through each layer, because that was already happening, but a way for signals to travel backwards through the network. That way, if a mistake happened in the process, the neural network could go back and correct the problem. Along with two colleagues, Hinton wrote a paper describing a solution, and according to Yann LeCun, “His paper was basically the foundation of the second wave of neural nets.” (Parloff) LeCun went on to do foundational work at AT&T Bell Labs that is still being used today. Despite all the discoveries and advancements of those few years, neural nets fell back into the deep freeze. Then, in 2006, Geoffrey Hinton returned with a new idea, and the field's name shifted from neural networks to deep learning.
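The backwards error-correcting idea Hinton and his colleagues described, now known as backpropagation, can be sketched on a toy network. The tiny 2-2-1 architecture, the fixed starting weights, and the XOR data below are illustrative assumptions, not details from their paper; the point is only that the error measured at the output flows backwards to adjust every weight.

```python
import math

# Toy backpropagation sketch: a 2-2-1 network runs a forward pass, then
# walks the output error backwards to adjust every weight. Architecture,
# starting weights, and XOR data are illustrative choices.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fixed, slightly asymmetric starting weights keep the example deterministic.
W1 = [[0.5, -0.4], [0.3, 0.8]]   # hidden-layer weights (2 units, 2 inputs)
b1 = [0.1, -0.1]
W2 = [0.7, -0.6]                 # output-layer weights
b2 = 0.05

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

def forward(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    o = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, o

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

def train_step(lr=0.5):
    global W2, b2  # these names are rebound; W1 and b1 are mutated in place
    for x, y in data:
        h, o = forward(x)
        # Backward pass: the output delta is pushed back through the layers.
        d_o = 2 * (o - y) * o * (1 - o)
        d_h = [d_o * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        W2 = [W2[j] - lr * d_o * h[j] for j in range(2)]
        b2 -= lr * d_o
        for j in range(2):
            W1[j] = [W1[j][k] - lr * d_h[j] * x[k] for k in range(2)]
            b1[j] -= lr * d_h[j]

before = loss()
for _ in range(2000):
    train_step()
after = loss()  # training should have reduced the error
```

Notably, XOR is exactly the problem a single-layer perceptron cannot solve; the hidden layer plus the backwards error flow is what makes it learnable.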

It was the year 2012 when the big breakthrough happened. After two of Hinton’s students won Fei-Fei Li’s contest to “incentivize and publish computer-vision breakthroughs” (Parloff, 2016), deep learning became clear to everyone. Earlier that year, Google Brain had released the “cat experiment,” in which a set of neural nets was shown millions of unlabeled images and one specifically trained itself to recognize cats. Thus began the research on unsupervised learning, which remains only partly understood even today. Since then deep learning has been advancing quickly, from Google improving photo search to neural nets outperforming humans and defeating world champions.

Many believe that we are in the midst of The Great AI Awakening, but honestly the main concepts that deep learning derived from have been around since the 80s and 90s. What changed in the past five to seven years is the arrival of massive labeled data sets and graphics processing unit (GPU) computing, which have made major differences in the world of artificial intelligence. The creation of opportunities for machines to compute on their own, make observations, and come to conclusions based on previously gathered data is what we know today as deep learning.

INTENDED & UNINTENDED USES

The idea of neural networking, or deep learning, came from the multilayer neural network concept, mentioned above, that Geoffrey Hinton brought forward: a system in which the technology itself could make more complex decisions through different layers, based on knowledge it has saved from the past. For example, if given a black and white picture and asked to add color to it, the neural network could use past knowledge to work out what color each item in the picture should be. This “small” idea sparked a fire in the technological world we live in today, and is now a major force within it.

The Affordances — Intended Uses

Deep learning is being used in many areas today, from tracking bird migrations to medical machines and so on. The first intention was solely military: a large computer, probably the size of one wall of a room, used neural networks for military purposes. Through the years, deep learning has grown to become much more than that. As Chris Nicholson says in his short newsletter, “Deep learning excels at identifying patterns in unstructured data, which most people know as media such as images, sound, video and text.”

In our homes alone, deep learning has begun to “take over.” A great example is the all too familiar facial and voice recognition: our machines (aka smartphones) remember our faces, our voices, and sometimes our fingerprints so that our information feels more secure. This is a telling example because it shows exactly how neural networks work; once again, deep learning takes information the owner or the world has given it and keeps it for future reference, such as remembering your voice in order to know whom to listen to when given directions. Another example of deep learning at home is the recommendation engine, found on Netflix, YouTube, Amazon, Spotify, and most social media platforms. The machine takes into account what the owner has looked at, watched, listened to, clicked on, posted, and so on, and creates a recommended section where certain posts, blogs, movies, shows, podcasts, and songs are listed based on the owner's apparent interests.
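A bare-bones sketch of that recommendation idea: represent each title by a few traits, average the traits of what the owner has already watched, and surface the closest unwatched matches. The catalog, trait vectors, and cosine-similarity scoring here are all invented for illustration; a real recommendation engine learns such representations from data rather than having them written by hand.

```python
import math

# Invented catalog: each title described by hand-made traits
# (comedy, drama, sci-fi).
catalog = {
    "Space Saga":    (0.1, 0.2, 0.9),
    "Office Laughs": (0.9, 0.1, 0.0),
    "Star Drama":    (0.2, 0.8, 0.7),
    "Romcom Nights": (0.8, 0.4, 0.0),
}

def cosine(a, b):
    """Similarity of two trait vectors, ignoring their overall magnitude."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recommend(watched, k=1):
    """Average the watched titles' traits, then rank unwatched titles."""
    profile = [sum(catalog[t][i] for t in watched) / len(watched) for i in range(3)]
    candidates = [t for t in catalog if t not in watched]
    return sorted(candidates, key=lambda t: cosine(profile, catalog[t]), reverse=True)[:k]

picks = recommend(["Space Saga"])
```

A viewer who watched only the sci-fi-heavy "Space Saga" gets pointed toward the sci-fi drama rather than the comedies, which is the "based on what you watched" behavior described above.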

Home is definitely not the only place deep learning is used; in fact, as most people know, it didn't even start there. It is also found in much of the medical, military, and business worlds, along with many others. Organizations and individuals alike use deep learning for tasks such as text sentiment analysis, in which the machine takes a batch of comments as a whole and infers what is most common across all those people's ideas. Deep learning is also used for marketing research, which is open not only to businesses and companies but also to a random human scrolling through the internet.
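The sentiment-analysis task just mentioned can be illustrated with something far simpler than a neural network: counting hand-picked positive and negative cue words across a batch of comments. The word lists and comments below are invented assumptions; a real deep learning system would learn these cues from labeled examples rather than have them supplied by hand.

```python
# Hand-picked cue words (an invented stand-in for learned features).
POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"bad", "hate", "terrible", "awful", "poor"}

def comment_score(text):
    """Positive-word count minus negative-word count for one comment."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def overall_sentiment(comments):
    """Infer the mood that dominates across the whole batch of comments."""
    total = sum(comment_score(c) for c in comments)
    return "positive" if total > 0 else "negative" if total < 0 else "neutral"

comments = [
    "I love this product, it is great",
    "terrible shipping but good value",
    "excellent support, very happy",
]
mood = overall_sentiment(comments)
```

Even this crude tally captures the article's point: individual comments can be mixed, but aggregating them reveals what is most common across all those people's ideas.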

The Constraints — Technological Limits

With most uses come limits. A human is able to learn what something is or how something works after being shown it just once or twice. A machine, on the other hand, comprehends what something is or looks like only after a good number of examples make it obvious; only then can it save that knowledge for future reference. But even after that, machines have no idea how they come to the decisions they make. The neural networks pick from their wide store of “knowledge” to pair the objective with the right answer; however, they do not understand how they do this.

Deep learning also lacks common sense. It is not able to comprehend things through common sense the way animals and humans, hopefully, do. In an article titled The Power and Limits of Deep Learning, it is stated that two questions still need to be answered in order to reach the “full capacity” of artificial intelligence:

  1. How can machines learn as efficiently as humans and animals do?
  2. How can we train machines to plan and act and reason, not just perceive? (2)

As you read this, scientists are attempting to find ways for neural networks not just to gather information from past data, but to perceive through observation. And if that weren't enough, they are working on machines that carry a predictive world model, so that a machine can anticipate what is most likely to happen next. Consider a human driver who comes upon a curve in a road that runs along a cliff. They know to turn, not only because the road they are following turns, but because if they didn't turn the steering wheel they would go off the cliff. Humans and animals have and use this common sense, which gives us direction, predictions of what happens next, and a rough “guide” to live by. Scientists are trying to give machines this same kind of predictive world model, so that machines can make predictions to get around (or to help humans get around). The problem they keep facing is that much of the time the world is unpredictable, so how does one build a machine that predicts events based only on what we “perceive” might happen? One example of the effort to build machines that can predict the world around them on their own is the self-driving car. I don't know what you believe, but I think our human predictive world models get quite a workout on the road.

Hidden Features

Obviously, based on the above, deep learning is open to anyone, any company, and so on; however, there are instances in which a certain type of deep learning serves only certain objectives and cannot be used by just an individual. For obvious reasons, medical machines, military systems, and certain businesses are among those that use these “hidden” resources of deep learning, available only for certain advanced users and uses.

The big deep learning push happening right now in the medical field is the use of neural networks in medical imagery, such as X-ray, CT, and MRI scans. Deep Learning for Medical Image Processing by Muhammad Imran Razzak, Saeeda Naz, and Ahmad Zaib states, “Now deep learning has got great interest in each and every field and especially in medical image analysis and it is expected that it will hold $300 million medical imaging market by 2021.” (3)

The military has used (and will continue to use) systems built on artificial neural networks, in which the workings of the brain inspire how the neural nets inside the machine operate, with deep learning networks routing information through that artificial “brain.”

EFFECTS, OPPORTUNITIES, AND FEARS

Artificial intelligence has made many advancements in the past couple of years, and these advancements are owed mainly to deep learning. More and more developments are being introduced and improved in the realm of deep learning, and as these growths occur, more ideas emerge. There are amazing opportunities that prove deep learning a great tool. Humanity, however, has developed a fear of these new ideas because, as Uncle Ben once told Peter Parker, “with great power comes great responsibility,” and the question is: do we have the responsibility to know how far these advancements should go?

As more advancements are made, ideas are created, and developments are introduced in the realm of deep learning, fear creeps into the souls of humanity. Numerous debates, proposals, and speeches about what technology is doing and will do to the world, and whether that is good or bad, have occurred around the globe. Now scientists are trying to find ways for technology and artificial intelligence to be just as smart as humans and animals, with the capability to predict what will happen next and how; that is, the closest thing to a human brain that is possible. Sounds cool, right? Not everyone thinks so. This could lead to neural networks not just being part of the professional world, but taking over part, if not most, of the careers in our world today. That creates fear and stress in the lives of working humans, who respond with questions such as: What does this mean for me? What will the world be like if robots take over every human responsibility? How will these machines know basic common sense in the workforce? It creates panic not only for those in the workforce but also for the consumers who take part in the professional realm.

Facts gathered from various resources, research efforts, estimates, and experiments back this up. Oxford Economics published an article titled “How Robots Change the World,” in which they stated, “The rise of the robots …will lead to….existing business models in many sectors …seriously disrupted and millions of existing jobs will be lost.”

It is believed that the careers most susceptible to being taken over by machines are clerks, secretaries, schedulers, and bookkeepers. The World Economic Forum wrote a report about the future of jobs; on page 16 it includes a chart showing different industries and their demand for certain technologies. They state, “Robotic technology is set to be adopted by 37% to 23% of the companies surveyed for this report, depending on industry.” They go on to write that “Skills gaps among the local labour market are among the most cited barriers to appropriate technology adoption for a number of industries.” This offers another perspective: robotic technology and deep learning machines will mainly affect jobs where those certain skills are needed, not jobs where those skills are already being used. One would want to believe this, but is it true?

Skynet Today posted an article arguing that the effect of artificial intelligence on the workforce will not be more disruptive than the changes technology created in the past. A chart in their article supports this point with the loss of agricultural jobs to new machines. The article goes on to mention that when farmers and other agricultural workers lost their jobs, they were helped in transitioning to new ones.

Political economist Joseph Schumpeter coined the term Creative Destruction, which describes “the process of technology disrupting industries and destroying jobs, but ultimately creating new, better ones and growing the economy.” In the article “Job loss due to AI — How bad is it going to be?,” Skynet Today's authors describe the process as follows:

“1. Technology-enabled automation displaces some workers and augments others.

2. Displaced workers transition to new jobs, some of which are created by automation. The government helps to facilitate this transition via investments in training and education.

3. Increased productivity raises incomes, lowers work hours (average work time in the U.S. has fallen more than 50% since the early 1900s), and lowers prices, creating more demand for goods and services, leading to more jobs and broader economic growth.”

It has also been shown that in the past, increases in automation have actually created more jobs. It has also been said that dangerous jobs may eventually be handed to machines, creating a possibly “happier” planet with less harm being done. And with technology doing more of the work, more people will have more time to enjoy life, family, and friends.

These facts don’t change the reality that money is an object in life, and a lot of life's objectives deal with that object. They say we will get more time to “enjoy” life, but will we be able to enjoy it if some of us are stuck living on the street? Will the past really predict the future, with advances in automation most likely creating more jobs? Or will humanity fall off the cliff because it believed the past would predict the future? This is why so many fear what is to come in the AI realm, and why so many have written articles and books and given speeches: to grab the world's attention and warn that we could be stepping into a zone of no return. What is to come, though? Maybe technology will advance for the better. Maybe machines having more knowledge won't be as bad as we make it out to be. Whatever the case, it certainly sets off a fear of technology that has long been a part of us.

PROJECTIONS OF THE FUTURE

For about the past 60 years, deep learning has been advancing and growing. With new ideas and advancements arriving every year (or so it seems), deep learning is creating a new world for us. The history of neural networks has given us insight into where we came from, but can it also show us where we are going, or at least give us an idea of the future?

Rosenblatt believed that in order to understand higher organisms and how they function in the areas of perception and thinking, the three questions mentioned earlier needed to be answered.

One of today’s main focuses for deep learning is to find a way for a machine to hold the key to perceiving and to making decisions based on observation. Those questions from 1958 are thus key to the future scientists want to make of deep learning, especially when looking at it through a “how humans and animals observe” lens.

As we go further into the future, bringing along advancements, new ideas, and discoveries, how will deep learning affect our lives? What will our planet be like? Will there be major changes? Will past effects repeat themselves? We probably won't know for sure, but we can certainly gather conclusions from the past in order to form different hypotheses about the possibilities. Ultimately, the real question becomes: how can we use past effects and outcomes to reason about the possible outcomes of deep learning in society today?

Most people fight against the growth of deep learning because of the possibility of jobs disappearing, since new technologies will probably be able to perform those duties, throwing everyone into disarray and being the very reason many jobs are lost. Some say that, in some way, artificial intelligence was part of every industrial revolution that came about. Guess what happened during every period considered an industrial revolution: certain jobs were no longer needed, people were laid off, and everyone was thrown into disarray. Do you know what happened after that? Those who lost their jobs found new ones, and they ended up pulling themselves (or being pulled) out of that disarray.

In Skynet Today’s article, “Job loss due to AI — How bad is it going to be?,” this exact possibility is discussed: “AI is sometimes characterized as part of the ‘Fourth Industrial Revolution’. Today, most economists agree the prior industrial revolutions ultimately benefited society as a whole, even though they did result in some losing jobs to automation in the process.” (The passage references a figure by the World Economic Forum, seen on Fortune.) They go on to describe how new technologies have affected the job force, mainly stating that after a technology replaces some workers and lessens the work of others, the displaced workers transition to new jobs (sometimes jobs created by the new automation). In all, it leads to higher incomes, lower work hours, more jobs, and more room for economic growth.

Based on the past, there were brief moments of hardship, including the job losses we now fear, but in the long run the changes turned out well, creating more positives than negatives. So yes, there is a chance of harm, particularly for certain jobs, but what if, in the end, it turns out to be a great idea?

The article noted above discusses a barrier blocking the application of advances in certain technologies without the needed human input. There is still so much to be done before a machine is “capable” of doing what a fully functioning human, or even an animal, can do, and most don't even know whether it is possible for a machine to reach that capacity. So the question becomes, “Do we even need to worry about or fear the idea of machines taking over?” Based on past history, most advancements in this realm build on earlier advancements, giving a little boost to discoveries made beforehand. If we do come upon a discovery where a machine can function completely as a human (even in its thoughts), will we have a crisis on our hands? Based on the past, the advancements that have occurred in deep learning have mostly produced outcomes that turned out well for society. From the way deep learning is flowing, one can infer from past observations that if we do lose more jobs it won't be a huge sweep across the world, and the change could, in turn, create more jobs, better economies, and higher incomes with shorter hours. The outcomes and impacts will most likely not be as dangerous as we fear.

CONCLUSION

Deep learning is all around us, and it continues to grow, prosper, and advance in multiple ways as we speak. Scientists around the world continue to explore new ideas and build on them to create new outcomes with deep learning. Right now they are working on advancing what deep learning can do, from simply making decisions based on what a machine has learned to making common-sense predictions based on observation. Where we are now will most likely not be the stopping point for neural networking because, as has been said, we haven't found its full “capacity.”

Citations

Parloff, Roger. (2016). The Deep-Learning Revolution. Fortune, 174(5), 96–106. Retrieved from http://search.ebscohost.com.concordia.idm.oclc.org/login.aspx?direct=true&db=bth&AN=118302290&site=eds-live

LeCun, Yann. (2018, November 1). The Power and Limits of Deep Learning. Retrieved from http://eds.a.ebscohost.com.concordia.idm.oclc.org/eds/pdfviewer/pdfviewer?vid=44&sid=16ef59f1-f63a-4323-a9b8-5e87cc69da3e%40sdc-v-sessmgr01.