Futurism, Personal Opinion, Philosophy, Technology

Can machines think? 

A unique thing that a future AI will most likely do is ‘decide’ for itself and ‘learn’ from the consequences of its decisions. The process has already begun. So how different will it be from humans, if an AI can ‘decide’ and ‘learn’ from available inputs and memory? Should we still call its cognitive algorithm ‘artificial’ intelligence, or simply ‘intelligence’, like we call our own?

But first, let’s have a look at how we think as individuals. Suppose we have a notion in our brain as a result of perceiving some stimulus, such as visual cues or the auditory cues of someone’s speech. Subsequently, when our turn comes to act or to form an opinion or stance about it, we try to rationalize, justify or scrutinize it against our memory and knowledge (or beliefs) before coming to a calculated or an emotional conclusion.

[Image: still from Ex Machina]
If you haven’t watched this movie, leave everything and watch it now!

Even lower mammals think. They rationalize their actions through a much simpler algorithm: heuristics. Put simply, heuristics are prompt justifications based on emotions such as fear, hunger, attachment or lust, or on readily available, impromptu memory. Humans probably just have an added dimension to this algorithm: the ability to separate emotions from pure cognition most of the time. Evidently, such a quality is not inherent but learned. Nonetheless, it seems that only humans have the capacity to learn to such an extent as to be able to travel to outer space or to harvest the energy of an atom.

Pure cognition provides that additional, learned dimension to the algorithm, such that we have been able to cooperate with other members of our species, learn written languages (spoken language is innate, but writing is always learned; no wonder the focus on literacy rates around the globe), do advanced mathematics, cook food with style, make art, train ourselves to understand science, build aircraft and so on. We are just like other mammals that can be said to think, but with a much more complex algorithm; compare us with Rhesus monkeys, which also think, live in social groups and have opposable thumbs like ours, but do not have the capacity to learn as extensively as we do.

In that sense, dogs think as well. And most of us are familiar with their behavior. Throw a ball for them to fetch, and they’ll bring it back. Hide the ball again, and they will search for it and sniff around your body, because they remember that you last had it. If you show your empty hands to their eager eyes, they’ll then start to sniff around the patch of lawn closest to where the ball last landed. This is the basic mammalian algorithm. Food is here – eat food. Food isn’t here – search for food. Food was here, now no food – search where food was last seen. Then repeat this cycle until new food is found. Forget about the old food.
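Since the paragraph above spells the dog’s behavior out as a step-by-step loop, here is a minimal, purely illustrative sketch of it in Python. The function name, the `spots` list and the printed messages are my own hypothetical choices for the sake of the analogy, not anything from this essay and not a real model of canine cognition.

```python
import random

def mammalian_search(locations, steps=6):
    """Toy loop: eat food if it's here; otherwise search where food was last seen."""
    last_seen = None
    for _ in range(steps):
        here = random.choice(locations)   # the dog wanders to some spot
        if here["food"]:
            print(f"Food at {here['name']} -> eat food")
            here["food"] = False          # the food is eaten
            last_seen = here              # remember where food was last found
        elif last_seen is not None:
            print(f"No food here -> sniff around {last_seen['name']} (last place food was seen)")
        else:
            print("No food and no memory of food -> keep wandering")

# Hypothetical spots; the names are illustrative only.
spots = [{"name": "lawn", "food": True}, {"name": "porch", "food": False}]
mammalian_search(spots)
```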

We do this too. But we have a higher level of problem-solving ability, because our algorithm is much more complex, as I’ve mentioned above. In the dog’s algorithm, the dog tends to forget about the food of interest once new food is found. We humans, on the other hand, keep thinking about the object we lost in the distant past or the food we were denied. We keep looking for innovative ways to search for the same food, or we learn to be wary so as not to lose the food or object in the same way again, or we eventually learn to assure ourselves of plenty of food by learning how to farm! You may have noticed here that, unlike our canine friends, we plan for the future and think ahead. We see the bigger picture, where a dolphin or a chimpanzee simply cannot. Their solipsism is limited to survival or hereditary kinship, while ours can go beyond survival to wonder about the diseases that kill us, or even further, to wonder about stars and planets that have no connection with our immediate survival; all while cooperating with other individuals of our species who aren’t even our siblings in any way.

[Image: still from Her]
Another great movie, portraying a successful outcome of a textual-auditory Turing test. A must watch.

But a crucial similarity between monkey and human cognition is the process of decision-making and learning through experience (detailed memory), regardless of the vast differences in cognitive capacity. What a machine could not do until now, even a cognitively less developed dog could. That is, interestingly, changing.

Most machines and software in use today can think and make decisions. But their decisions are largely repetitive; they don’t learn much. However, a new approach to programming and computer science is ushering in an era of ‘machine learning’. There are already ‘bots’ circulating throughout the internet that alter their own code to suit their website’s or server’s agenda, and they learn from previous failures in order to improve in the future. Some computer viruses may be doing this as well. Notable projects such as Google’s DeepMind, Apple’s Siri and Microsoft’s Cortana are all developed further after every launch so that they increasingly cater to their users by learning their relevant behavior patterns and making more accurate decisions based on them. They are only getting better by the year.

I remember Noam Chomsky, who is foremost a cognitive scientist, replying to a journalist who asked him “Can machines think?” with something like: “I think to ask ‘can machines (computers) think’ is like asking ‘can submarines swim?’. Computers were designed to think.”

According to him, since computers were designed to think, and since to calculate one has to think, they have always been thinking. It’s just that they couldn’t decide on their own or learn on their own without some kind of human input. We witnessed machines attain the decision aspect, which spearheaded us from an analog age into a digital one. The next step is that we may witness machines equipped with the learning aspect as well. These two faculties, which we’ve only been accustomed to witnessing in humans and lower animals, we may see unfold in inorganic machines that can actually think just like us and often become indistinguishable from (or even superior to) humans. And this thought may be creepy for some of us, because after this, our inventions will have taken over the very definition of our species, Sapiens, itself (Sapiens = wise/thinking). If so, we may no longer have the sweet comfort of being the only intelligent entity on the planet.

Take this idea for a moment. By just looking at a person talking to you, can you tell for sure whether they are thinking the way you are, or simply going through the motions of the conversation? Can you differentiate their affect (emotions) from their cognition (thoughts) right there and then? Can you tell whether their thoughts are spontaneous or pre-programmed? I guess not. And we certainly couldn’t do so if, for example, humanoids passed the Turing test. Then who is a machine and who is not? Are we refusing to call ourselves machines just because we are organic? Why is our cognitive algorithm called ‘thinking’ but not that of machines, if not because of this organic bias? Are we not organic ‘survival machines’ as well?

Questions may also arise as to whether we should call them ‘artificial’ at all anymore, given their ability to decide and learn on their own. For a robot that passes all levels of the Turing test (textual, auditory, audio-visual), how could we tell by simple interaction whether it is thinking or not? Could you tell, for certain, whether a pedestrian you collided with while commuting was a humanoid or a human? Other issues may follow from that, such as rights and cognitive egalitarianism (equal rights for all intelligent entities and so on), but those are beyond the scope of this essay.

So the whole point behind this write-up was to highlight the idea that, before asking questions like “can machines think?”, it may give us better insight to also entertain the question “can we think?”. It’s an interesting matter to ponder. Because seriously, can we really think, or are we under the illusion of thinking? Are we merely processing information and memory to form a conclusion or a specific reaction, and does that give us the illusion that we are actually thinking?

Futurism, Personal Opinion, Philosophy, Science

Death of the Biosphere? 

(Transformation of the Biosphere)

Why do we consider inorganic or organic materials made by humans “unnatural”, when we do not apply the same term to mountains created by tectonic collisions, to elegant rocks eroded for ages by river water, or to parasites such as Toxoplasma that infect the neurons of lower mammals and alter their behavior?

A typical answer to this question is that what we consider “unnatural” are those entities formed as a result of ‘sentient intention’ (a sentient being doing something intentionally). But if so, then even a courting swallow is sentient enough to manipulate twigs and leaves to build itself a nest, and not just that, but also to decorate it with the intention of attracting a mate. Yet is a swallow’s nest “unnatural” to us? And if swallows were, say, as cognitively enhanced as we are, would their nest be “unnatural” to them, from within their solipsism? So perhaps the only option left for answering the question is an anthropocentric (human-centric) viewpoint?

If we are evidently products of a natural process, then isn’t it logical to assume that our tool-making and our tampering with ecosystems are most likely a subset of that natural process (i.e. the universe itself)? Unless, of course, some omnipotent third entity is puppeteering our actions, for which there seems to be no justification whatsoever. The former notion fits more appropriately once we also learn that free will most probably doesn’t exist, and that we all behave and make choices as a result of our conditioned heuristics (if not trained to do otherwise). That is to say, in simpler terms, that our brains, and with them our personalities and preferences, are shaped by the random events of our environment.

For instance, think about this: take a tissue sample from a certain person, Shyam, and clone him. Will the clone bear exactly the same personality as Shyam himself? Will the clone become the same Shyam or a different Shyam? The same can be observed in the personality differences between two monozygotic (identical) twins with exactly the same genetic makeup. How is this, again, not part of nature? And how can these conditioned individuals’ future decisions, moral codes, ethics, judgements and emotions not be part of nature either? If humans decide to construct a tree-house, is it natural or unnatural?

[Image: Man-made vs. Natural]
We’ve been taught from a young age about the dichotomy of the Man-made and the Natural. But I think we are finally mature enough to question this idea. If Man himself is natural, how are man-made things technically ‘unnatural’?

It is perhaps the case that we frequently view matters concerning the milieu in which we exist only from within our solipsism. Because in the blind eyes of reality, a plastic water bottle is as much a component of nature as the keratinous fur on a wild bear. Both come from molecules found within the universe, are made from the elements of this universe, and are released back into the universe itself. It just seems that, from within our own solipsism, we perceive a plastic water bottle as “unnatural” and grizzly bear fur as “natural”.

Another example: we consider paper plates “unnatural”, yet we do not often say the same of the makeshift banana-leaf plates used by communities in Kerala. When both utensils are used for holding our food, why is one considered natural and the other unnatural? When bacteria protect themselves from viruses with their Cas9 (CRISPR) immune system, that is considered natural; yet when we inject vaccines to protect ourselves from the harms of measles or polio viruses, it is somehow unnatural. But again, are they?

Moving on to the bigger picture, why do we consider even human-caused near-extinction phenomena such as climate change to be “unnatural”, when we consider the Ordovician-Silurian extinction (which wiped out more than 85% of marine species) some 440-odd million years ago to be simply ‘natural’? If the extinction of the trilobites or the dinosaurs was natural, why isn’t the possible extinction at the end of this human era considered the same? It most obviously has something to do with our survival. Were it not about our survival, would we still look at climate change with grave eyes, as many do today? (Even so, many others still deny climate change despite the fact that it may spell a radical change in human civilization, or even its end.)

Furthermore, would bringing back species of flora and fauna driven extinct by human-caused events, and reintroducing them into an ecosystem that has moved on without them, be considered natural or unnatural? Are we merely trying to bring back the Tasmanian tiger or the Dodo for our own solipsistic satisfaction, or as moral repentance for our “unnatural” manipulation of our surroundings? Or is it simply because we can, and we want to see what it’s like to bring back a part of the biosphere that perished from existence quite a while ago? In our general moral code, the natural is considered ‘good’ and anything made by humans, being “unnatural”, seems bad. We live with this dichotomy, probably thanks to our slowly evolving psyche, which is clearly outpaced by exponential technological and scientific innovations that most of us simply cannot perceive from beyond our anthropocentric views.

[Image: quote by Alvin Toffler – “By challenging anthropocentricism and temporal provincialism, science fiction throws…”]
Alvin Toffler was an American writer and futurist.

The whole point of this series of questions is not to call for the withdrawal of ongoing conservation efforts or of the global push to curb the harshest effects of climate change. This essay is not about those issues, but about something else. From our species’ perspective, going green is utilitarian: an effort to survive and grow. My effort here is to point out the flaws and paradoxes in the rhetoric surrounding our existence within nature, our survival and our conservation efforts; an attempt to present a supra-human perspective on the very idea of human progress itself. I simply question whether we need the natural-unnatural dichotomy in the first place.

While contemplating such issues, I can’t help but remember George Carlin. Especially the joke where he said that our rhetoric of ‘saving the planet’ is out of touch with reality because the planet will still be here for millions of years; it’s us humans who are fucked! We would do better to say ‘save our species and other species’ instead of ‘the planet’. Because in my opinion, as much as our manipulation of nature is for our benefit and survival, it is also part of the process of nature (though of course we are affecting other species and ecosystems along the way), and the planet will stand even if we perish as a species. Something else may replace us, but that will obviously not involve us.

My ultimate point is that we may already be at a turning point, give or take a few decades or, at most, centuries. Perhaps 50 or 200 years from now; but in our 70,000 years of walking upon this planet, we are closer to that turning point than ever before, and we are approaching it exponentially!

The turning point I am talking about is not one that relates only to us humans, but one that concerns the entire biosphere. There may be a multilateral pattern of effects after this turning point (or rather turning ‘period’), but I’d like to talk specifically about the two projections that interest me the most.

The first is the projection that leads to our species’ extinction. As simple as that. Whether a failure to cope with climate change, or a deterrent-bypassing nuclear armageddon in the face of possible wars over basic resources like water and the failure of nation states. Or simply an asteroid strike (one just 300 meters in diameter could, were it to strike our planet, set off devastating chain reactions in the atmosphere). Another stream of extinction could be extra-human: a superior intellectual uprising after the technological singularity, i.e. machines that outsmart us (Skynet from Terminator) and suddenly decide they are better off without us, treating us the way we treat less intelligent beings such as ants when they get in the way of cooking food or building houses. This is a possible projection, but it is the more dystopian and depressing one, and surely humans, as survival machines, would fight to prevent it from happening. Only time will tell.

The second projection could also be considered an extinction, but I, for one, would like to consider it a ‘transformation’ of sorts. The singularity would most probably happen in this one too, but rather than the complete loss of our species, we may transcend into the digital world or cognitively merge in part with AIs that compute more creatively than the smartest of us currently do (cyborgs; this has already begun: how many of us can actually live smoothly without an essential machine such as our smartphone or PC?). This particular projection is, in my imagination, the one most compatible with the Kardashev scale of the progression of civilizations. Since we are currently somewhere between type 0 and type 1, this transcendent singularity may lead us to type 1, where we could effectively and completely manipulate and control all of the Earth’s energy sources, including the weather itself (or even beyond, to type 2, where we might enclose the Sun within a Dyson sphere to harvest its energy).

[Image: the Kardashev Scale]
Human civilization is currently between type 0 and type 1. We are not quite masters of the planet… yet!

This transformation may in part assimilate with the biosphere as we see it now, as what we call “artificial” or “unnatural” selection becomes more dominant than “natural” selection itself. In simpler terms, literally everything could then be controlled with precision by a certain class of sentient, intelligent beings. The dominant “force of nature” would most probably no longer be a series of random events, but carefully thought-up ones, engineered by thinking beings for their own benefit. The previously fully organic biosphere may begin to intersperse inseparably with inorganic or semi-organic entities such as silicon or carbon buckyballs, or even with impalpable entities such as source code and binary programming languages, rather than remaining a system achieved only through purely organic DNA. This might be a good time to interject my lingering question: will this too be considered “unnatural” by talking, sentient beings like us? In this instance, what aspect is part of nature and what aspect is not?

For those of you who are anime/pop-culture fans, here’s an analogy with a thought experiment: would the sentient Autobots and Decepticons from the Transformers universe be considered “unnatural” just because they are inorganic beings? Could they not be as much a part of the universe they exist in as the human characters they befriend or wish to destroy, respectively?

[Image: Transformers: The Last Knight trailer]
An Autobot and a human from ‘Transformers: The Last Knight’.

So I think, in the end, the word “unnatural” fails outside of our human solipsism. Beyond us, it probably bears no logical significance. It might be a semantically useful tool in rhetoric and motivation, or for individuals who think that going out for a hike brings them “close to nature”, which may also lend a transient sense of meaning to their busy lives. But they are unaware that the metropolitan concrete apartment in which they reside is as much a part of the universe and of nature as the rocky hills of a certain national park they enjoy hiking upon. The only difference is that we shaped resources one way to build an apartment, while tectonic plates shaped the same resources another way to build the rocky hills. We do it with intention; the seismic movements of the Earth have none. But in the end, we are both accidents upon this planet.

In the end, “we are made of star-stuff”, to quote Carl Sagan. Such realizations and thoughts give us an immensely broader, supra-human perspective on existence and reality. So it’s always worth asking such questions about the daily things we do and the ideas we ponder: what do they mean to us, and what would they mean without us?

In this context, I’d like to sign off with two open-ended questions: Are we nearing the ‘death’ (or transformation) of the biosphere? How do we justify environmental conservation efforts, if even human progress is technically ‘natural’?