In partnership with Techcast. For more information, sign up at https://BillHalal.com
This brief focuses on other cognitive functions that should be included in exploring the role AI will continue to play as the technology expands: dreams, curiosity, logic, and future framing. Some commentary may seem a bit too casual in assuming that AI will replace all forms of human intelligence (HI). A few sentiments also make the crucial point that AI and HI will merge, a common theme that is already happening.
Revised List of Cognitive Functions

The list of cognitive functions has been condensed to make it more intuitive and manageable. We added the suggested functions and combined similar ones into small clusters, yielding the following nine functions:
1. Perception, Awareness: Sensory experience through touch, sight, sound, smell, and taste.
2. Learning, Memory: Information, knowledge, or skill acquired through instruction or study.
3. Information, Knowledge, Understanding: Information and knowledge processed, encoded, and stored for future action.
4. Decision, Logic: A determination arrived at after consideration.
5. Emotion, Empathy: Mental reactions of strong feeling, such as anger, fear, and the vicarious emotions of others.
6. Purpose, Will, Choice: The ability to set a purpose and choose actions to attain it.
7. Values and Beliefs: Ideas held in relative importance and considered true.
8. Imagination, Curiosity, Creativity, Intuition: Novel ideas and knowledge gained without sensory input.
9. Vision, Dreams, Peak Experience, Future Framing: Guiding thoughts and altered states of consciousness formed without sensory input.
In this framework, an application (like a GPS navigation driving system) is formed by drawing on needed functions and integrating them into a workable AI system. A complete collection of such applications would make up General AI (GAI or AGI), an artificial equivalent of the entire human mind.
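This framework can be sketched in code: an application selects the cognitive functions it needs from the list of nine, and "General AI" would be the complete collection covering all of them. A minimal illustration follows; every name here is hypothetical, not an actual Techcast system.

```python
# The nine condensed cognitive-function clusters from the list above,
# reduced to one keyword each for illustration.
COGNITIVE_FUNCTIONS = [
    "perception", "learning", "knowledge", "decision", "emotion",
    "purpose", "values", "imagination", "vision",
]

class Application:
    """An AI application assembled from a subset of cognitive functions."""
    def __init__(self, name, functions):
        unknown = set(functions) - set(COGNITIVE_FUNCTIONS)
        if unknown:
            raise ValueError(f"Unknown functions: {unknown}")
        self.name = name
        self.functions = set(functions)

# A GPS navigator integrates perception (the satellite fix), knowledge
# (stored road maps), and decision (route guidance) into one system.
gps = Application("GPS navigation", ["perception", "knowledge", "decision"])

def is_general_ai(apps):
    """True only if the collection of applications covers every function,
    i.e. an artificial equivalent of the entire human mind."""
    covered = set().union(*(a.functions for a in apps)) if apps else set()
    return covered == set(COGNITIVE_FUNCTIONS)

print(is_general_ai([gps]))  # False: most functions remain uncovered
```

The point of the sketch is that any single application, however useful, draws on only a few functions; AGI is defined here by coverage of the whole list.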
This list of cognitive functions may not be exactly right, but that is a minor issue. This study is mainly interested in estimating the relative profiles of AI and HI and their integration, rather than in taxonomic precision.
Objective vs. Subjective Consciousness
We need to make a crucial distinction between objective and subjective forms of intelligence. Let us define the hierarchy of consciousness more precisely, using the figure below to illustrate the differences between two general types of human thought.
The “objective” functions include perception, knowledge, decisions, and other forms of factual information. The “subjective” functions, in contrast, cover tasks that are inherently personal, involving what cognitive scientists call “qualia”: emotions, choice, beliefs, vision, and other ethereal functions. The subjective functions are also more powerful because they shape the objective level; that is why religions and belief systems form the ideological foundation of societies and even of scientific paradigms.
Consider how the functions of consciousness are drawn upon to manage everyday tasks, such as using your car’s GPS navigation system. The GPS satellites provide the car’s location, so you no longer need to read road signs; the AI in the GPS system has automated that perceptual task. It has also stored the locations of roads and other knowledge in the system’s memory. The system can then compare your location to your destination, determine how they differ, and make decisions that tell you what to do.
Although this illustrates how AI can automate the objective functions, it also shows that AI cannot do the same for the subjective functions: it cannot choose a destination. The choice of where you wish to go is inherently subjective; it is an act of purpose and will. Therein lies the crucial distinction between what AI can and cannot do. In short, an AI simulation is not the same as life. We may not understand what is unique about HI or the source of its special power, but there seems to be an important difference between AI and HI.
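The GPS distinction can be made concrete in a toy sketch: the objective steps (locating the car, comparing location to destination, issuing directions) are automated, while the destination itself must be supplied from outside the system. The fixed position and the grid geometry below are hypothetical simplifications.

```python
def locate():
    """Perception, automated: a satellite fix replaces reading road signs."""
    return (0, 0)  # toy fixed position on a simple grid

def next_step(position, destination):
    """Decision and logic, automated: compare where you are to where you
    are going and tell the driver what to do next."""
    dx = destination[0] - position[0]
    dy = destination[1] - position[1]
    if dx:
        return "drive east" if dx > 0 else "drive west"
    if dy:
        return "drive north" if dy > 0 else "drive south"
    return "you have arrived"

def navigate(destination):
    # Note what is missing: the system never computes `destination`.
    # That choice is an act of purpose and will, supplied by the human.
    return next_step(locate(), destination)

print(navigate((3, 0)))  # -> "drive east"
```

Everything inside `navigate` is objective and automatable; the one argument it cannot generate for itself is the subjective input the essay is pointing at.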
It is precisely these subjective aspects of consciousness that are rising in importance. The most obvious example is today’s “post-factual” wave of those who do not believe in evolution, climate change, vaccination, and other forms of established science. This is occurring because smartphones and social media have flooded us with such overwhelming data that we can’t sort out the truth from endless claims of fake news, conspiracy theories, and other forms of disinformation. The result is that people increasingly rely on their subjective values and beliefs to find a way through a sea of nonsense. And as AI automates the objective work, humans are moving further into the subjective realm. In fact, the US and other advanced nations are passing beyond the Knowledge Age and entering an Age of Consciousness even now, though we may not like its current form. For instance, Trump gains his power by being a master at shaping consciousness.
The rise of subjective consciousness is also driven by global threats like pandemics, climate change, economic collapse, mass automation of jobs, gross inequality, and other crises we have called the Global MegaCrisis. To state the obvious, these are existential challenges that are not going to be resolved by AI alone. The MegaCrisis will require many decades of hard, creative human work to reconcile the conflicting interests of 8 billion people, if it can be done at all.
In short, the future will certainly benefit from powerful forms of AI that automate objective work, and AI may simulate subjective functions for various purposes. But the bulk of the labor force is likely to find itself managing a world of such complex subjectivity that only HI will be up to the challenge. AI may never be able to provide those subtle but crucial subjective inputs that determine what should be done and how, and that ensure it is done properly. This becomes even more important if we hope to control AI and keep it safe.
The limits of AI were stressed by Isaac Asimov’s Three Laws of Robotics, which made clear that humans should remain in charge by providing these subjective functions. In Asimov’s well-ordered world, robots remain safe as long as they are not given the freedom of agency; they are not to act like people.
Now let’s examine the evidence. Some HI is being done by machines now. For instance, see this example showing that a third generation of AI is emerging, one that goes beyond deep learning to simulate human empathy. In this example, an avatar listens to a soldier talk about his PTSD experiences and coaches him toward a resolution of the trauma. The applications could be huge, from automated psychotherapy to teaching to virtual sex.
We often ask audiences if they think there is a substantial difference between AI and HI. A few brave individuals usually say “No, there is no difference,” but the vast majority (90% or more) insist there is a substantial difference. They may not be able to put their finger on it, but it seems intuitively obvious to most people that humans are unique. It may be that we are tuned into the higher wisdom of the cosmos in some Jungian way. Of course, we could be proven wrong as AI matures. That is the nature of this great experiment now underway as science advances. This little study is our attempt to anticipate the outcome.
Another good example is the study on “AI and Future Work.” We found that the threat of mass unemployment due to automation is likely to be resolved by pioneering a new frontier of “creative work” that can’t be done by intelligent machines.
This raises profound questions: How much of HI is likely to be automated, and how much will HI continue to do? Even if AI can simulate some aspect of human consciousness, what does that really mean? Would it be the same as what people do, or just a rough cut at the real thing? How would AI and HI work together? How would AI be controlled? These are difficult questions that bear on the future relationship between AI and HI. Let’s see what we can learn.