A unique thing that a future AI will most likely do is ‘decide’ for itself, and ‘learn’ from the consequences of its decisions. The process has already begun. So how different will it be from us, if an AI can ‘decide’ and ‘learn’ from available inputs and memory? Should we still call its cognitive algorithm ‘Artificial’ Intelligence, or simply “Intelligence”, like we call our own?
But first, let’s have a look at how we think as individuals. Suppose a notion forms in our brain as a result of perceiving some stimulus, such as visual cues or the auditory cues of someone’s speech. Subsequently, when our turn comes to act, or to form an opinion or a stance about it, we rationalize, justify, or scrutinize it against our memory and knowledge (or beliefs) before coming to a calculated or an emotional conclusion.
Even lower mammals think. They rationalize their actions through a much simpler algorithm: heuristics. Put simply, heuristics are prompt justifications based on emotions such as fear, hunger, attachment or lust, or on readily available impromptu memory. Humans probably just add a dimension to this algorithm: the ability, most of the time, to separate emotions from pure cognition. Such a quality is evidently not inherent, but learned. Nonetheless, it seems that only humans can learn to such an extent as to travel to outer space or harvest the energy of an atom.
Pure cognition provides an additional, learned dimension to the algorithm, such that we have been able to cooperate with other members of our species, learn written languages (spoken language is innate but written language is always learned, no wonder the focus on literacy rates around the globe), do advanced mathematics, learn to cook food with style, do art, train to understand science, build aircraft and so on. We are just like other mammals that can be said to think, but with a much more complex algorithm. Compare us with Rhesus monkeys, which also think, live in social groups and have opposable thumbs like ours, but lack the capacity to learn as significantly as we do.
In that sense, dogs think as well. And most of us are familiar with their behavior. Throw a ball for them to fetch, and they’ll retrieve it. Hide the ball again, and they will search for it and sniff around your body, because they remember that you last had it. If you show your empty hands to their eager eyes, they’ll then start to sniff around the patch of lawn where the ball last landed. This is the basic mammalian algorithm. Food is here – eat food. Food isn’t here – search for food. Food was here, now no food – search where food was last seen. Repeat this cycle until new food is found, then forget about the old food.
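The dog’s search routine above can be sketched in a few lines of code. This is only an illustrative toy (the function and spot names are invented for this essay), not a claim about canine cognition:

```python
def find_ball(known_spots, last_seen):
    """Memory-guided search: check the spot where the ball was last
    seen first, then every other known spot, until it is found.

    known_spots: dict mapping a spot name to True if the ball is there.
    last_seen:   the spot where the ball was last observed.
    """
    # "Food was here, now no food - search where food was last seen."
    search_order = [last_seen] + [s for s in known_spots if s != last_seen]
    for spot in search_order:
        if known_spots[spot]:   # "Food is here - eat food."
            return spot
    return None                 # nothing found - keep searching next cycle


# The dog saw the ball in your hand, but it was tossed onto the lawn:
spots = {"hand": False, "lawn": True, "bush": False}
print(find_ball(spots, "hand"))  # checks "hand" first, then finds "lawn"
```

Swap “ball” for “food” and the loop is exactly the cycle described above: check where it was last seen, then everywhere else, and give up only to search again later.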
We do this too. But we have a higher level of problem-solving ability, because our algorithm is much more complex, as mentioned above. In the dog’s algorithm, the animal tends to forget about the food of interest once new food is found. We humans, on the other hand, keep thinking about the object we lost in the distant past or the food we were denied. We look for innovative ways to search for the same food, learn to be wary so as not to lose it the same way again, or eventually assure ourselves plenty of food by learning how to farm! Notice that, unlike our canine friends, we plan for the future and think ahead. We see the bigger picture where a dolphin or a chimpanzee simply cannot. Their concerns are limited to survival or hereditary kinship, while ours can go beyond survival to wonder about the diseases that kill us, or even about the stars and planets that have no connection with our immediate survival – all while cooperating with other members of our species who aren’t our siblings in any way.
But a crucial similarity between monkey and human cognition is the process of making decisions and learning through experience (detailed memory), regardless of the vast differences in cognitive capacity. What no machine could do until now, even a cognitively less developed dog could. That is, interestingly, changing.
Most machines and software in use today can think and make decisions. But their decisions are fixed and repetitive; they don’t learn much. However, a new approach to programming and computer science is ushering in an era of ‘machine learning’. There are already ‘bots’ circulating through the internet that alter their own code to suit their website’s or server’s agenda, learning from previous failures in order to improve. Some computer viruses may be doing the same. Notable projects such as Google’s DeepMind, Apple’s Siri and Microsoft’s Cortana are refined after every release so that they increasingly cater to customers by learning their behavior patterns and making accurate decisions based on them. They are only getting better by the year.
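The idea of “learning from previous failures” can be illustrated with a toy agent that scores its past actions and prefers whichever has worked best so far. This is a deliberately bare-bones sketch with invented names, not the algorithm behind any of the products mentioned:

```python
class LearningAgent:
    """A toy decide-and-learn loop: keep a running score per action,
    choose the best-scoring action, and adjust scores from feedback."""

    def __init__(self, actions):
        # Every action starts with a neutral track record.
        self.scores = {action: 0 for action in actions}

    def choose(self):
        # Decide: pick the action with the best record so far.
        return max(self.scores, key=self.scores.get)

    def learn(self, action, succeeded):
        # Learn: reward success, penalize failure.
        self.scores[action] += 1 if succeeded else -1


agent = LearningAgent(["greet", "help"])
action = agent.choose()        # initially tries "greet"
agent.learn(action, False)     # "greet" fails...
print(agent.choose())          # ...so next time it prefers "help"
```

Unlike a fixed program, the agent’s next decision depends on what happened to its last one, which is the whole difference the essay is pointing at.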
I remember Noam Chomsky, who is foremost a cognitive scientist, replying to a journalist who asked him “Can machines think?” with something like: “To ask ‘can machines (computers) think’ is like asking ‘can submarines swim?’. Computers were designed to think.”
According to him, since computers were designed to think, and since calculating requires thinking, they have always been thinking. They just couldn’t decide or learn on their own without some kind of human input. We witnessed machines attain the decision aspect, which spearheaded us from an analog age into a digital one. The next step is that we may witness machines endowed with the learning aspect as well. These two faculties, which we’ve only been accustomed to witnessing in humans and lower animals, may unfold in inorganic machines that actually think just like us and often become indistinguishable from (or even superior to) humans. This thought may be creepy for some of us, because our inventions will then have taken over the very definition of our species, Sapiens, itself (sapiens = wise/thinking). If so, we may no longer have the sweet comfort of being the only intelligent entity on the planet.
Take this idea for a moment. By just looking at a person talking to you, can you tell for sure whether they are thinking as you are, or simply going through the motions of the conversation? Can you differentiate their affect (emotions) from their cognition (thoughts) right there and then? Can you tell whether their thoughts are spontaneous or pre-programmed? I guess not. And we certainly couldn’t if, for example, humanoids passed the Turing test. Now who is a machine and who is not? Do we refuse to call ourselves machines just because we are organic? Why is our cognitive algorithm called ‘thinking’ but not a machine’s, if not for this organic bias? Are we not organic ‘survival machines’ as well?
Questions may also arise as to whether to call them ‘artificial’ at all, given their ability to decide and learn on their own. For a robot that passes all levels of the Turing test (text, auditory, audio-visual), how could simple interaction tell us whether it is thinking or not? Could you tell, for certain, whether a pedestrian you collided with while commuting was a humanoid or a human? Other issues may follow, such as rights and cognitive egalitarianism (equal rights for all intelligent entities, and so on), but those are beyond the scope of this essay.
So the whole point of this write-up was to suggest that before asking questions like “can machines think?”, we may gain better insight by also entertaining the question “can we think?”. It’s an interesting matter to ponder. Because seriously, can we really think, or are we under the illusion of thinking? Are we merely processing information and memory to form a conclusion or a specific reaction, which gives us the illusion that we are actually thinking?