Artificial Intelligence vs Neuro-Linguistic Programming

Original article can be found here (source): Artificial Intelligence on Medium


“When you talk, you are ‘ONLY’ repeating what you already know, But ‘IF’ you listen, you may LEARN something new” — Dalai Lama

An existential statement, isn’t it?

As the world comes to a standstill and slows down, I wanted to shift my focus away from the news overload and reflect on my writing. I had parked away many notes, jotted down as the thoughts occurred; most of them were sitting in my phone's notes app. So I am utilizing this downtime for some useful sharing and discussion. Here you go….


NLP — Neuro-Linguistic Programming (*read again — no, not AI’s NLP)

AI — Artificial Intelligence

All of us have heard about Artificial Intelligence, but perhaps not many know about Neuro-Linguistic Programming. This post brings out some broad similarities: how these two models complement each other while being so different, and how much there is to learn from each to make the other better.

I come with experience in both. NLP (Neuro-Linguistic Programming): I have studied and practiced it for over 5 years now. As a certified NLP Life Coach, I work with people to reprogram their mental maps and models of the world (a very niche process); along the way I get to study different behaviors and models, including human values, beliefs, limitations, mental maps, language, and behavior patterns. AI (Artificial Intelligence): in my career I have deployed AI models, from traditional modeling to continuous monitoring and performance optimization, addressing customer service, virtual agents, and information processing, all adapting to ever-changing trends in data and processing (especially when users constantly change their interests and click-through rate cannot reflect user retention).

The common factor between these two subjects is 'human behavior patterns'. The patterns are modeled to attain desired results. If one pattern doesn't work, we look for another, better one. It's a continuous evolution of the problem statement and the desired results; it's not as if you build a solution once and it's solved. There are data inputs, and there are data outputs based on those inputs. Let us look at both these models independently:

Neuro-linguistic Programming:

Dr. Richard Bandler, co-creator of NLP, defines it as: “A model of interpersonal communication chiefly concerned with the relationships between successful patterns of behavior and the subjective experiences (esp. patterns of thought) underlying them.”

The NLP modeling process involves finding out how the brain (“neuro”) operates by analyzing language patterns (“linguistic”) and non-verbal communication. The result of this analysis is then put into step-by-step strategies or programs (“programming”) that can be used to transfer the skill to other people and other areas of application.

The three central components that make up NLP's core framework (though it is not limited to these) are:

  1. Subjectivity — our experiences of the world lead us to form subjective models of how things are. These experiences are constituted in terms of our five senses (visual, auditory, kinaesthetic, olfactory, and gustatory) and the language we use to think and talk about them, which in turn shapes human behavior. Changing these sense-based representations and that language yields a different or new behavior.
  2. Consciousness — NLP distinguishes two basic components: the conscious and the unconscious. All of us experience things in the unconscious mind, and these unconscious representations then affect our conscious behavior.
  3. Learning — the deeper we get into the subject, the more we recognize how important learning is. It is an imitative behavior, called modeling. The theory states that imitative learning can codify and reproduce any desired behavior (and modeling is inherent to human nature).

Artificial Intelligence:

I don’t have to define AI for this audience, but in a gist for this discussion: it is computer programming that imitates human thought and action by analyzing data and surroundings, solving or anticipating problems, and learning or self-teaching to adapt to a variety of tasks, all toward desired results. We are exposed to AI every day. AI in its capacity has different maturity levels:

Algorithm — processing data through computations, from the simplest to the most complex.

Neural Networks — mathematical constructs that mimic the structure of the human brain, summarizing complex information into simple, tangible results, together with the computing power to train these networks.

Machine Learning — a programming paradigm in which systems learn from sufficient data to produce the expected outputs and beyond.

Explainability — techniques, tools, and frameworks that help develop interpretable and inclusive models understandable by human experts.

Deep Learning — a subfield of ML; algorithms inspired by the structure and function of the brain, typically trained on real-world examples (unstructured and unsupervised data).

Supervised, Unsupervised, and Reinforcement Learning — the three underlying categories of machine learning.

  1. Supervised learning — the model learns from labeled data, producing outputs based on previous experience.
  2. Unsupervised learning — true to its literal meaning, one doesn't need to supervise the model; it allows complex processing and surfaces all kinds of unknown patterns in the data.
  3. Reinforcement learning — builds a mathematical framework in which an agent takes actions toward an objective, receives feedback on those actions, and learns through trial and error.
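As a rough sketch of these three categories, here is a minimal, self-contained Python illustration. The data, cluster split, payout probabilities, and update rules are all invented for illustration, not drawn from any real system:

```python
import random

# Supervised: learn y = w*x from labeled pairs (one-weight least squares).
data = [(x, 2.0 * x) for x in range(1, 6)]           # labels are provided
w = sum(x * y for x, y in data) / sum(x * x for x, _ in data)

# Unsupervised: split unlabeled points into two groups around their mean.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]              # no labels at all
mid = sum(points) / len(points)
clusters = [[p for p in points if p < mid], [p for p in points if p >= mid]]

# Reinforcement: trial and error on a two-armed bandit (epsilon-greedy).
random.seed(0)
true_payout = [0.3, 0.7]                             # hidden from the agent
value, counts = [0.0, 0.0], [0, 0]
for _ in range(2000):
    # Explore 10% of the time, otherwise exploit the best-known arm.
    arm = random.randrange(2) if random.random() < 0.1 else value.index(max(value))
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    value[arm] += (reward - value[arm]) / counts[arm]  # incremental mean
```

The supervised model recovers the labeling rule exactly, the clustering recovers the two obvious groups, and the bandit agent learns through feedback alone that the second arm pays better.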

Bias — Biases find their way into the AI systems we design, and those systems are used by many to make decisions (scary when governments and businesses rely on them). Bad data used to train AI can carry implicit racial, gender, or ideological biases. Bias in AI systems can erode trust between humans and the machines that learn.

Backpropagation — in simple terms, the backward propagation of errors; a tool for improving the accuracy of predictions.
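A minimal sketch of the idea, assuming a one-parameter model y = w·x and a squared-error loss (both chosen purely for illustration): the prediction error is propagated backward into a gradient, which nudges the weight toward better predictions.

```python
# One training example; the "true" weight the model should discover is 2.
x, y = 3.0, 6.0
w, lr = 0.0, 0.01      # initial weight and learning rate

for _ in range(200):
    y_hat = w * x              # forward pass: make a prediction
    error = y_hat - y          # how wrong was it?
    grad = error * x           # backward pass: d(0.5*error**2)/dw
    w -= lr * grad             # update the weight against the gradient
```

After a few hundred updates the weight converges to 2, i.e. the errors have been "propagated backward" into steadily more accurate predictions.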

NLP (Natural Language Processing) — simply put, it gives machines the ability to read, understand, and derive meaning from human language.
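As a tiny taste of what "reading" human language means in practice, a minimal sketch (the sentence is invented for illustration): tokenizing text and counting word frequencies is one of the very first steps in most NLP pipelines.

```python
from collections import Counter

# Break a sentence into tokens, then count how often each word appears.
text = "The machine reads text so the machine can derive meaning"
tokens = text.lower().split()
freq = Counter(tokens)
```

Real systems go far beyond this, of course, but every step toward "deriving meaning" starts with turning raw text into units a program can count and compare.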


Neuro-linguistic programming uses modeling not just to replicate an existing result in the desired form, but to transform a problem behavior into something significantly better.

On the other hand, AI uses human patterns to build an artificial replication or imitation, and to provide new models that create value-producing opportunities; it is the process of moving from human response to machine response.

Human thinking process vs a computer programming model:

  • Capturing the imagination of the Client vs using Public data to build models
  • Brain vs Hardware
  • Mind vs Software
  • Human Language vs Programming Language
  • The use of human language and other channels of communication reveals the inner architecture of the mind and brain; the same goes for a computer's programming language
  • A solution-focused framework identifies goals and focuses on the resources a person brings to achieving them, using language as the primary tool to reach far and create impact in other areas of the client's life; an AI framework, in turn, uses techniques like ML (machine learning) to build parallelization and obtain predictions that impact the desired areas of a business's or customer's life.

An example to support the comparison of both these models:

A stimulus is the starting point and the trigger of a response system: it forms a 'sense', which is converted into a 'state' built from representations of previous experience; the state then defines the 'action', and synthesizing the action yields a 'response' that manifests in the real world.

[metrics (outcomes/ as is), knowledge (metadata), stimulus (data), state (customer / feature), response (action)]
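The stimulus-to-response chain above can be sketched as a toy pipeline. All function names, the memory structure, and the 'greet'/'wait' actions are hypothetical, invented purely to make the stages concrete:

```python
def sense(stimulus):                 # raw data becomes a perception
    return {"signal": stimulus.lower()}

def to_state(percept, memory):       # combine with previous experience
    memory.append(percept["signal"])
    return {"history": list(memory)}

def decide(state):                   # choose an action from the state
    return "greet" if "hello" in state["history"] else "wait"

def respond(action):                 # synthesize the outward response
    return {"greet": "Hello! How can I help?", "wait": "..."}[action]

memory = []                          # representations of prior experience
reply = respond(decide(to_state(sense("HELLO"), memory)))
```

Each stage maps to one bracketed element: the stimulus (data) is sensed, merged with stored knowledge into a state, and only then turned into an action and a real-world response.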


Gaining clarity on the representation system alone can bring a significant shift for an NLP client; similarly, a clear representation system can bring the differentiation needed to define our business AI models in their truest form. Identifying the stimulus, the state, and then the corresponding representation will let our AI models take better actions and change or adapt to our customers' behaviors. I will end with this: think holistically, not just about the models.