On the Nature of Human Intelligence

Humans

It’s easy to transfer information. I’ve just done that. It is much more difficult to convey understanding between humans. But why?

When we study, we don’t just copy-paste information into our brains as if they were a computer’s memory. We have to employ techniques like spaced repetition, and we have to relate new pieces of information to something we already know, because associations make things easier to remember.
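To make the spaced-repetition idea concrete, here is a toy scheduler in Python. The doubling interval and the reset-on-failure rule are illustrative assumptions for this sketch, not a real scheduling algorithm:

```python
def next_interval(days: int, recalled: bool) -> int:
    """Toy spaced-repetition rule: double the review interval after a
    successful recall; start over from one day after a failure."""
    return days * 2 if recalled else 1

# A flashcard recalled successfully four times in a row:
interval = 1
schedule = []
for _ in range(4):
    schedule.append(interval)
    interval = next_interval(interval, recalled=True)

print(schedule)  # [1, 2, 4, 8] -- reviews spread further and further apart
```

The point of the widening gaps is the same as in human studying: each successful recall strengthens the pattern, so it needs refreshing less often.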

Once someone is taught how to, say, solve a particular type of math problem, the person still fails from time to time. Even when you know what your friend looks like, you can still occasionally mistake someone else for them.

Why do we make such errors? Because learning, by its nature, consists of building particular patterns in our neural network.

As we’ve seen, building those patterns takes time and is error-prone.
Both could be evolutionary disadvantages. How come the species that survived use this approach instead of an alternative?

Pattern matching might take time to build its patterns, but once built, those patterns work fast. In a split second you might recognize a sign of a predator nearby.
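Software shows the same trade-off. A regular expression, for instance, takes work to compile into a matching machine up front, after which each individual match is cheap. A minimal Python illustration (the “predator signs” pattern is, of course, made up):

```python
import re

# Building the pattern is the slow, one-time step:
# the regex is compiled into an internal matching machine.
predator_sign = re.compile(r"\b(paw print|growl|claw marks)\b")

# Once built, recognition is a quick check, repeated as often as needed.
print(bool(predator_sign.search("fresh paw print on the trail")))  # True
print(bool(predator_sign.search("just some leaves")))              # False
```

Slow to build, fast to apply: the same economics that make pattern matching worth its evolutionary cost.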

In terms of evolution, staying alive and procreating has apparently been more important than reasoning.

Reasoning

Reasoning, by contrast, always takes time to reach a result: it has to derive a chain of conclusions.

Evidently, the world is too complex for us to derive the necessary conclusions in time.

Consider the history of scientific advancement. There are many examples where a theorem or a hypothesis was stated long before a proof was found. There is no guaranteed algorithm for arriving at a new discovery; it is usually an application of one field’s (quite abstract) pattern to another field.

Intelligence

Suppose you try to replicate a horse to provide yourself with an artificial means of transport. How many iterations would it take you to arrive at the concept of a wheel? A horse’s legs are nowhere near one. Its digestive system bears no resemblance to an internal combustion engine. Maybe at some point you’ll have your synthetic horse, but it’s still not a car.

Throughout the history of building artificial intelligence, various tasks have been suggested for testing AI agents against. At first, each task seems to require intelligence to be solved. But that illusion dissolves once you’ve got the algorithm working. This is called the AI effect.

These observations lead me to a counter-intuitive conclusion. And yet it is the one that doesn’t contradict the observable facts.

Humans are not intelligent.

We are pattern matching machines. That’s why replicating our brains to the best extent currently possible only gives us pattern matching algorithms.

That explains our self-contradictory behavior (different patterns may fire simultaneously). That explains the power of analogies and examples in teaching, negotiations, and communication in general. That’s why humans can disagree while observing the same evidence. That’s why good stories have the same structure. That’s why discoveries are made by accident. That’s why you forget your great ideas unless you write them down — yes, you’ve got the same experience, the same facts, skills, everything, but you didn’t derive the conclusion; you accidentally matched particular patterns.

Take a look at any futuristic painting or sci-fi story from the past. They all have one thing in common — linear extrapolation from a couple of data points. That’s exactly how some machine learning models work. This approach always fails when you try to predict the future state of a nonlinear dynamic system (next month’s weather or stock prices, for example).
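A minimal sketch of that failure mode, using nothing but arithmetic: fit a line through two samples of a nonlinear curve and extrapolate. The quadratic “world” here is a stand-in for any nonlinear system:

```python
# Two observations of a nonlinear "world": y = x**2
def world(x):
    return x ** 2

x1, y1 = 1, world(1)
x2, y2 = 2, world(2)

# Linear extrapolation through those two dots
slope = (y2 - y1) / (x2 - x1)  # (4 - 1) / (2 - 1) = 3.0

def predict(x):
    return y1 + slope * (x - x1)

print(predict(10))  # 28.0 -- the linear guess
print(world(10))    # 100  -- what actually happens
```

Near the observed points the line looks fine; the further you extrapolate, the more the nonlinearity dominates and the worse the guess gets.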

Instead of true reasoning (yet to be invented), we merely apply patterns that sometimes resemble it (and other times lead us to mistakes).

A reasoning agent would be able to apply an algorithm to make scientific advances. Contrast that with our brute-force approach, where we don’t know in advance which pattern to apply each time.

The major downside of this hypothesis is that it doesn’t explain consciousness.
Maybe we need a real intelligence to solve that.