Deep Introduction to Artificial Intelligence



1. Why Artificial Intelligence?

2. What is Artificial Intelligence?

3. Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL)

4. Types of Artificial Intelligence learning paradigms

  • Supervised learning
  • Unsupervised learning
  • Reinforcement learning

1. Why Artificial Intelligence?

Artificial intelligence will shape our future more powerfully than any other innovation of this century, and the rate of acceleration is already astounding. Over the past four decades, rapid advances in data storage and computer processing power have dramatically changed the game. AI has the potential to help tackle many of humanity's hardest problems: climate change, renewable energy, food scarcity and poverty in some countries, the search for habitable planets for humans, and the list goes on.

AI has already made its way into almost every field. In medicine it diagnoses cancer and helps create drugs; in finance it predicts stock prices, forecasts product sales, and supports market analysis. It can create art and music, and it can drive your car. So we learn AI to help solve the world's greatest problems. And of course, AI was already a 1.3 trillion dollar industry worldwide in 2018, so there is a great deal of income to be earned from this technology as well.

2. What is Artificial Intelligence?

AI is a broad area of computer science that makes machines seem like they have human intelligence.

The underlying idea is that every aspect of how we humans learn and develop intelligence can be described so precisely that a machine is able to simulate it. Such machines have to form abstractions and concepts the way humans do, solve kinds of problems now reserved for humans, and improve themselves. Machines like that are what we call AI.

AI is not the idea of programming a computer to solve a problem. It is the process of teaching the computer to understand the problem so that it can solve it by itself, without human interaction, and improve its own intelligence while doing so.

For example, if we hard-code a machine to detect a chair in an image, we write rules such as "it has four legs" and "it has a wooden texture". But some chairs have three legs, and some are made of plastic or other materials. We cannot code every possible representation of a chair into the computer.

That is why we instead give the machine an idea of what is and is not a chair; from that concept, the machine has to develop its own intelligence about what makes something a chair, as in the sketch below.
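To make the contrast concrete, here is a hypothetical rule-based chair detector in Python; the rules and example calls are invented for illustration, and the machine-learning examples later in this article show the alternative of learning from labelled examples instead.

```python
# Hard-coded "chair detector": fixed rules instead of learning from examples.
def is_chair(legs: int, material: str) -> bool:
    # Rule: a chair has exactly four legs and a wooden texture
    return legs == 4 and material == "wood"

print(is_chair(4, "wood"))     # True  -- matches the hand-written rule
print(is_chair(3, "plastic"))  # False -- a real chair, but the rules miss it
```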

Applications of AI:

We use AI technologies in our everyday lives without noticing them. The spam filter in our email, the search engines we use daily, and virtual assistants such as Siri, Cortana, and Google Assistant are all powered by artificial intelligence algorithms. These are small examples, but AI can also be used to handle heavier problems: natural language processing, translation and chatbots, computer vision, virtual reality and image processing, game theory and strategic planning, photo and video manipulation, face recognition, handwriting recognition, expert diagnosis, and so on.

Hierarchical view of AI

3. Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL)

  • Artificial Intelligence (AI) and Artificial General Intelligence (AGI)

Anything created by humans that replicates something already present in nature is artificial. Intelligence means the ability to reason, plan, execute, and think. The term "artificial intelligence" was coined by the American computer scientist and cognitive scientist John McCarthy in 1955, in the proposal for the Dartmouth conference. Other scientists had worked on developing AI before him, but the field did not have a name at that time.

The Turing test:

Alan Turing (1912–1954) was an English mathematician and logician, widely considered the father of computer science. Turing was fascinated by intelligence and thinking, and by the possibility of simulating them with machines. His most prominent contribution to AI is his imitation game, which later became known as the Turing test.

In the test, a human interrogator interacts with two players, a human and a machine by exchanging written messages (in a chat). If the interrogator cannot determine which player is a computer and which is a human, the computer is said to pass the test. The argument is that if a computer is indistinguishable from a human in a general natural language conversation, then it must have reached human-level intelligence.

The development of AI began in the 1950s, so why is it only getting hype now? Back in the 1950s we did not have fast computer chips or large amounts of data with which to train our algorithms, and AI algorithms require plenty of both. Because of this, the AI field has experienced several hype cycles, each followed by disappointment and criticism, then funding cuts, then renewed interest years or decades later. Historians of the field call such a period of reduced funding and interest in artificial intelligence research an AI winter.

Five decades after the field began, we now have the computing power needed to train our algorithms, along with petabytes of data. Did you know that 90% of the world's data was created in the last two years alone?

Now we train our algorithms on supercomputers and in the cloud with large datasets. GPUs (graphics processing units) have also been introduced into general computing, and they play a major role in the field of AI.

Companies such as Nvidia and Intel also develop hardware designed specifically for running AI algorithms.

Artificial General Intelligence (AGI)

AGI is the vision of AI with the human-like ability to do any task humans can do, and even things humans cannot. If we develop AGI, it could work in concert with nature and do amazing things. With today's AI we only solve the particular tasks that each system was made for; an AGI, by contrast, would think about what tasks need to be done, decide its own tasks and goals, and act accordingly.

Example of AGI:

For example, an AGI programmed for climate control might use sensors to detect wildfires and automatically launch drones to extinguish them, estimate pollution in certain areas and plant trees where they are needed, and clean plastic from the oceans with autonomous robots.

None of this would need human interaction or control: everything would be monitored by the AGI, which would act on the problems created by humans (we humans are currently dumping our plastic into the oceans; if AGI happens, it could clean the oceans for us).

AI is a very broad area that also covers machine learning and deep learning. To implement our current AI technologies, we use both machine learning and deep learning algorithms.

Machine learning (ML)

Machine learning is a subfield of AI. ML applies statistical tools to large datasets to extract information from the data and build up knowledge for future use.

For example, suppose we have to predict a person's gender from data: we give the algorithm height, weight, and hair length. The ML algorithm then learns automatically, for instance, that people with longer hair tend to be female in that data. An ML model takes large amounts of data as input and is designed so that it can keep learning and developing its knowledge as it is exposed to new data.

In our course, we will learn some ML algorithms such as linear regression, SVMs, and decision trees.
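As an illustration, here is a minimal sketch of the gender-prediction example above using a scikit-learn decision tree; the feature values and labels are made up purely for demonstration.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy training data: [height_cm, weight_kg, hair_length_cm] (hypothetical values)
X_train = [
    [180, 80, 5], [175, 72, 8], [168, 65, 10],    # labelled "male"
    [160, 55, 40], [165, 60, 35], [158, 52, 45],  # labelled "female"
]
y_train = ["male", "male", "male", "female", "female", "female"]

# Train the model on labelled examples
clf = DecisionTreeClassifier(max_depth=2)
clf.fit(X_train, y_train)

# Predict the gender of a new, unseen person
print(clf.predict([[170, 63, 38]]))  # e.g. ['female']
```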

Deep learning (DL)

Deep learning is inspired by the biological structure of the human brain. The brain is an amazing structure made up of billions of neurons. Groups of neurons have their own functions, such as recognizing a human face or sensing heat. Each neuron takes electrical pulses from other neurons as input and generates electrical pulses as output to other neurons. In our brains, all of these neurons are connected to each other and pass information across the network of neurons.

We take this biological structure as inspiration and design artificial neural networks.

We build layers of neurons that are connected to each other; the input data is passed across this network of neurons, and the task is accomplished with the help of all the neurons in the network.

The neurons in deep learning are essentially activation functions applied to a weighted sum of their inputs: each neuron takes some input, applies its activation function, and generates an output. After a neuron generates an output, that output is fed to the next neurons as input. The sketch below illustrates this.
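Here is a minimal NumPy sketch of that idea: a tiny two-layer network whose neurons each compute a weighted sum of their inputs and apply an activation function; the weights are random and purely illustrative.

```python
import numpy as np

def relu(x):
    # Activation function: keeps positive values, zeroes out negative ones
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# A tiny network: 3 inputs -> 4 hidden neurons -> 1 output neuron
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # hidden-layer weights and biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output-layer weights and biases

x = np.array([0.5, -1.2, 3.0])  # one example input

hidden = relu(x @ W1 + b1)   # each hidden neuron: weighted sum + activation
output = hidden @ W2 + b2    # the output neuron consumes the hidden outputs
print(output)
```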

4. Types of Artificial Intelligence learning paradigms

AI learning algorithms are categorized into three types: supervised learning, unsupervised learning, and reinforcement learning.

Supervised learning:

In supervised learning, we give the algorithm both the inputs and the corresponding outputs during training. After the training period we test the algorithm: we give it only the inputs, and it has to generate the outputs.

The algorithm learns from the input-output pairs what output it is supposed to generate for new inputs it has never seen before. That means the algorithm does not learn from the inputs alone; it also develops knowledge from the outputs given alongside the inputs in the training phase. The outputs we provide guide the machine in learning how outputs should be generated for new input data.
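A minimal sketch of this supervised train/test cycle, using scikit-learn linear regression on made-up numbers (hours studied versus exam score).

```python
from sklearn.linear_model import LinearRegression

# Training phase: both inputs (hours studied) and outputs (exam scores) are given.
# These values are invented purely for illustration.
X_train = [[1], [2], [3], [4], [5]]
y_train = [52, 58, 65, 71, 78]

model = LinearRegression()
model.fit(X_train, y_train)   # learn the input -> output mapping

# Test phase: only inputs are given; the model must generate the outputs itself.
X_test = [[6], [7]]
print(model.predict(X_test))  # predicted scores for unseen inputs
```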

Unsupervised learning:

In unsupervised learning, we give the algorithm only the inputs, and from those inputs it generates clusters as output. That means it groups the given data based on similarities and returns the clusters.

Unsupervised learning is used when generating data with both inputs and outputs is too expensive for training our model, or when the outputs for the problem are simply not available.
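A minimal sketch of clustering with scikit-learn's KMeans on a handful of made-up 2-D points; only the inputs are given, and the algorithm discovers the groups.

```python
from sklearn.cluster import KMeans

# Only inputs, no labels: a few made-up 2-D points forming two rough groups
X = [[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
     [8.0, 8.2], [7.9, 7.8], [8.1, 8.0]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)           # which cluster each point was assigned to
print(kmeans.cluster_centers_)  # the centre of each discovered group
```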

Reinforcement learning:

This is a trial-and-error type of learning: the machine learns from the errors it has already made and improves its performance.

In this type of algorithm, the machine works for the reward it gets for solving a problem, and it always tries to obtain the maximum amount of reward.

We give the machine a task and it generates an output; we then give the machine some reward. After that, the machine takes the previous outcome into account and learns how to complete the task more effectively, so that it gets a higher reward than before. The sketch below shows the idea on a toy problem.
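As a minimal sketch of reward-driven trial and error, here is a tiny epsilon-greedy bandit in plain Python; the two actions and their reward probabilities are invented for illustration.

```python
import random

random.seed(0)

# Two hypothetical actions with hidden probabilities of paying a reward of 1
true_reward_prob = {"action_a": 0.3, "action_b": 0.7}

estimates = {a: 0.0 for a in true_reward_prob}  # the machine's reward estimates
counts = {a: 0 for a in true_reward_prob}
epsilon = 0.1                                   # chance of trying a random action

for step in range(1000):
    # Explore occasionally, otherwise exploit the action that looks best so far
    if random.random() < epsilon:
        action = random.choice(list(true_reward_prob))
    else:
        action = max(estimates, key=estimates.get)

    reward = 1 if random.random() < true_reward_prob[action] else 0

    # Update the running average reward estimate for the chosen action
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # the machine learns that "action_b" pays off more often
```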

Source: Deep Learning on Medium