Ep3: The Two Minds
why understanding is optional
In the previous episode we stated the problem — it lies with our capacity to understand knowledge. Casually taken for granted, it is, in fact, a trainable skill that often remains underdeveloped and underutilized precisely because we — our schools — don’t teach it explicitly. We jump straight to teaching the knowledge itself, before ensuring that students have learned to understand it.
This brings up an obvious next question: if understanding is indeed optional, how would a person who does not rely on it navigate through life? Well, we all have an alternative cognitive faculty that can pick up the slack by acting as an autopilot. And who programs this “autopilot”? No one — it’s an AI, and it programs itself.
By now, it should not come as a surprise that a significant part of our psyche could be a self-learning AI. Our brain is but a bunch of neural networks, and in recent years we have seen their artificial varieties learn to drive, to walk, to talk like humans do. Even before ChatGPT there was DeepMind’s AlphaZero, a neural network that, in a matter of hours, taught itself to become not just the strongest, but the greatest chess player of all time, human or otherwise. In doing so, artificial neural networks showed that the most mysterious and hard-to-define human qualities — our creativity, our intuition, even our sense of beauty — can be attributed to a powerful AI “hiding” in what is perhaps the least accessible part of our Reality: our subconscious.
Now, for the longest time we have known that something like this must be going on — something that would explain the apparent duality of our nature. For example, you might have seen this picture before:
An allegory commonly attributed to Sigmund Freud, the iceberg represents the two sides of the human psyche. The tip above the water is our conscious mind, the part that we perceive as ourselves — Freud called it “das Ich”, German for “the I”, which his translators rendered in Latin as “ego”. The underwater part — “das Es”, “the it”, or “id” in translation — is a neural network supercomputer, the AI side of us. And that thing is sure huge. It has enormous computational power and it works non-stop. Its purpose is to learn from our experience, distilling it into habits, or automatic behavior, like walking,¹ and into ideas — “simple” ideas, as the English philosopher John Locke would call them (as opposed to the “complex” ideas of the conscious mind).² Being a supercomputer, our subconscious AI can also notice far more detail about our environment than we could hope to register consciously.
The key insight here is that everything the AI does, all that information gathering and processing, happens under the radar of our conscious awareness. We have zero visibility into the workings of its machinery — we only become aware of the result, the bottom line, and only when the AI decides to inform us of its findings. It can go about this in a few ways, all of which look to us like magic — the result coming out of nowhere or, perhaps, coming from outside, from the Universe itself.
For example, one way the AI communicates with us is by making us feel emotions — or that proverbial “gut feeling”. Or it can make us “feel” the vibe of the room, or the other person’s “energy”. It can even show us their “aura” (frequently depicted in religious iconography as a halo). All this creates an augmented reality — akin to a pair of AR glasses — to help us navigate the real world out there.³
We’ll leave exploring the AI in greater detail for future episodes. For now, suffice it to say that the thing is immensely powerful and can do a reasonable job as an autopilot, guiding us through life on its own. All we have to do is react emotionally or keep “winging it”, taking whatever feels like the best course of action.
And don’t get me wrong — it’s not a bad thing per se to rely on intuition. In fact, it is necessary simply because we don’t have the luxury of thinking through every step in real time. What is less than ideal, however, is leaving it all to the autopilot, abandoning altogether our responsibility to understand and use that understanding to train the autopilot’s AI.
Every time we try to think through what went wrong and how we could have done better, every time we make the effort to override an emotional response instead of acting on it, we give the AI something to learn from — and rest assured that it is always watching, always trying to learn as much as it can from us and our actions, as well as from other people and the environment.
Without our active participation, without our training, the AI autopilot would still drive us somewhere. That “somewhere”, however, could be far from where we want to be. That’s how we ended up, by the millions, first voting for Hitler and then dying for him. That’s how we end up making other costly mistakes — as individuals, in our personal lives, and as a society as a whole. Which is to say that the lack of understanding does come back to bite us — even if not often enough, or badly enough, to turn everything into a complete shitshow.⁴
Wrapping it up for this piece of the puzzle — we have two minds, and they are meant to work as a team, complementing each other’s strengths and compensating for each other’s weaknesses. The fast subconscious AI is in the driver’s seat; its job is to keep our “car” on the road and out of the ditch. The job of the slow and deliberate conscious mind is to piece together a map of Reality and to use this map to navigate. To know, eventually, where we are, where we want to be, and how to get there.
The “map” is a metaphor for understanding. Now, how do we actually do it? How do we understand the world and ourselves? That will be the focus of the next episode. Stay tuned!
¹ We don’t think about how we walk — how we contract and relax dozens of individual muscles with perfect timing and precision to maintain balance. We don’t need to, because the AI in our subconscious has long since learned to do it for us. In other words, we have learned to walk automatically — “on autopilot”. Just as, later in life, we learn to drive a car and do many other things on autopilot.
² An example of a simple idea is your idea of a chair, or that of a woman. They are “simple” because unexplainable. This might come as a surprise — you’d think it should be easy to explain what a chair is… until you actually try :) In reality, what you will be explaining is your complex idea of it, your understanding of what a chair is. Ironically, complex ideas tend to be more rigid and less rich, less nuanced than their simple counterparts (because the latter reflect the wealth of your life experience). Of course, all that is just as true for your idea of a woman.
Speaking of which — this duality also explains why people find it so uncomfortable when they feel that their simple ideas are being challenged. Such a challenge casts doubt on the very basis of those ideas — which, again, is the person’s life experience. It’s like telling them that they never really existed.
Finally, some simple ideas — like your idea of the color red — are truly unexplainable because they have no complex counterparts at all. To sum it up, complex ideas represent knowledge, while simple ideas represent experience (Frank Jackson’s brilliant “Mary’s Room” thought experiment, about Mary the scientist, illustrates the difference between the two).
³ It can also be something trivial — say, you walk into a room and sit down on a chair. Now, wait a second… how did you know that that thing was a chair?
Exactly — “I know a chair when I see one”, and that’s all we can divulge on the subject. What makes this “knowledge” possible, however, is all the work that the AI does under the hood — snapping images from your retina and running them through the filters of your simple ideas (like, yes, your idea of a chair) to find a way of breaking the scene in front of you into its components. Then it proceeds to reconstruct a 3D model of the room from those components — just to make sure that everything fits and the result does indeed look like a room with a chair — otherwise you wouldn’t feel so cavalier about planting your butt in there, would you :) Most of the time, everything checks out — and that’s how we “simply know”, even though, again, it wasn’t “we” who figured it out, nor was there anything simple about it.
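For the code-minded reader, here is a toy sketch of that pipeline in Python. It is purely an illustrative analogy, not a claim about how the brain (or any real vision system) actually works; every name and “feature” in it is invented for the example:

```python
# A toy analogy for the "recognize the chair" pipeline described above.
# Everything here is invented for illustration -- the real subconscious
# machinery is, of course, nothing this simple.

# "Simple ideas" acting as filters: each one maps a set of raw features
# to a familiar label.
SIMPLE_IDEAS = {
    "chair": {"has_seat", "has_legs", "waist_high"},
    "table": {"has_flat_top", "has_legs", "waist_high"},
}

def classify(component: set) -> str:
    """Run one component of the scene through the filters of our simple ideas."""
    for label, required_features in SIMPLE_IDEAS.items():
        if required_features <= component:  # all required features present?
            return label
    return "unknown"

def make_sense_of(scene: list) -> list:
    """Label every component of the scene, then check that 'everything fits'."""
    labels = [classify(component) for component in scene]
    if "unknown" in labels:
        # The model of the room doesn't add up -- time for a conscious look.
        raise ValueError("scene does not fit any familiar model")
    return labels

# A "snapshot from the retina", already broken into components (toy data).
scene = [
    {"has_seat", "has_legs", "waist_high"},      # that thing by the desk
    {"has_flat_top", "has_legs", "waist_high"},  # the desk itself
]
print(make_sense_of(scene))  # ['chair', 'table'] -- and so we "simply know"
```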
⁴ In fact, when we look at the big picture, it is hard not to sense the presence of something out there, something that keeps humanity in this precarious balance — not letting things deteriorate to the point where we no longer have the luxury of being concerned with anything beyond immediate survival, and yet keeping us from getting too comfortable, as if to remind us that we still haven’t done our homework. This, however, is definitely a topic for another episode :)