Great article, but I am confused on several points. When precisely does the machine/computer ‘learn’ anything? I understand parts 1 and 2 of your article, but in part 3.1 you talk about ‘learning’ curves, which look exactly like the regular error curves that a non-learning machine (the only kind that exists) might generate to show whether bias or variance is inherent in a given data set; for instance, whether it did not have enough input data or had the wrong type of input data. I am also confused by 3.2, where you talk about how to determine whether you need more data and whether it will help to improve the “learning” algorithm. Can algorithms learn now too? I was under the impression that learning was limited to humans and certain ‘intelligent’ non-human animals (though that is debatable), and certainly not machines, which cannot learn.
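To make concrete the kind of curve I mean: here is a minimal sketch (my own, not from your article; the data and numbers are invented for illustration) showing that a plain least-squares fit, repeated on growing subsets of data, produces exactly this sort of error-versus-training-size “learning curve” with nothing anyone would call learning going on, just calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data for illustration: y = 3x + 1 plus noise
X_all = rng.uniform(-1, 1, size=500)
y_all = 3 * X_all + 1 + rng.normal(scale=0.5, size=500)

# Hold out the last 100 points for validation
X_train, y_train = X_all[:400], y_all[:400]
X_val, y_val = X_all[400:], y_all[400:]

def fit_line(x, y):
    # Ordinary least squares for y = a*x + b -- pure arithmetic, no "learning"
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def mse(coef, x, y):
    a, b = coef
    return np.mean((a * x + b - y) ** 2)

# Refit on larger and larger subsets and record both errors:
# plotting these two lists against `sizes` gives a "learning curve"
sizes = [5, 20, 80, 400]
train_err, val_err = [], []
for n in sizes:
    coef = fit_line(X_train[:n], y_train[:n])
    train_err.append(mse(coef, X_train[:n], y_train[:n]))
    val_err.append(mse(coef, X_val, y_val))
```

A large gap between the two curves would suggest variance (more data might help); two high curves close together would suggest bias (more data would not).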

Since this is the first time I have seen the term ‘learning algorithm’, I am not sure whether it is a logical contradiction, logically impossible, or both, so let’s put it to the test. First, let’s look at the definitions of each of the words that make up the term.

learn·ing

ˈlərniNG/

noun

  1. the acquisition of knowledge or skills through experience, study, or by being taught.
  2. knowledge acquired through experience, study, or being taught.

Ok, sounds good, now let’s look at algorithm.

al·go·rithm

ˈalɡəˌriT͟Həm/

noun

  1. a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.

Now we ask whether there is anything inherently illogical about combining those two words into a new term by combining their definitions. Taking the first definition of learning, a learning algorithm would be a process or set of rules to be followed in calculations or other problem-solving operations that acquires knowledge or skills through experience, study, or being taught. On the plus side, I see no logical contradiction between these two things, unlike in the case of machine learning; on the minus side, however, it is absurd, nonsense, and logically impossible. A process or set of rules is not capable of acquiring knowledge any more than a machine that follows those rules is. Nor is said process capable of acquiring skills, and it cannot study. If I am being generous, I might grant that one could consider supplying input data “teaching”, but in no way can a process or set of rules be taught, since to be capable of being taught one must be capable of learning, which a process or set of rules clearly is not, as I have just shown.

I would very much appreciate it if you could clear up my confusion on those points.

Source: Deep Learning on Medium