Personal Opinion, Philosophy, Technology

A vision for my city

I’m not an engineer by profession, but I find cities of any kind fascinating. Exploring videos and maps of cities and playing city-building simulation games used to be my favorite pastime (before I got caught up in career work).

I’m especially a fan of cities with less horizontal sprawl and more vertical growth – such cities take up less space, cut transportation time significantly, and drastically reduce the costs of distributing energy and water. And of course, I particularly have a crush on well-designed cities with an organized mass-transit system. My personal favorites: the city of Kobe in Japan (trains everywhere!!!!) and the city of Medellín in Colombia (cable cars!!!!).

Cities are multi-dimensional – despite being established mostly in a top-down manner (like the planned cities of Barcelona, Manchester, Seoul or Singapore), they can also be improved via a bottom-up approach (Rio de Janeiro in Brazil, or Medellín).

By a bottom-up approach I mean improving, with minimum demolition, cities that are already overcrowded and seemingly unmanageable, giving them their own unique identity while improving their citizens’ overall well-being. Rio and Medellín have done it in a uniquely Latin-American way. They improved upon the slums (favelas) that were already there without destroying them (some are now crime-free and serve as tourist attractions); they regulated housing in other areas and made it affordable, so newer immigrants need not further expand the slums; and they established a mass-transit system suited to the topography that carries their citizens efficiently and cheaply, connecting the thriving city center to the developing favelas on the hills: via spacious cable cars!!!

When I look at my own city, the Kathmandu valley, I see no alternative to the bottom-up approach. We simply cannot afford the top-down approach anymore – there are too many heritage sites and monuments, and people live everywhere in a haphazard manner. Everything is jam-packed, and we surely don’t want more dust. We simply cannot afford to press the ‘reset’ button on this city. Innovation is key to bottom-up approaches, and we all know it can get us around traditional problems just as in Latin America – without destroying anything of significant value.

It’s a shame that in my city they had to destroy historical landmarks such as “Sohrakhutte” just for the simple task of widening a road by a few meters. There is also talk of building satellite cities outside the ring road by destroying historical Newari villages and towns. These are examples of a top-down approach – where an authority figure (a government or a company) has complete control over development projects, with little regard for the citizens themselves. Top-down approaches are better suited to building newer cities such as Navi Mumbai, Singapore or Songdo City outside Seoul, all of them built from scratch on reclaimed land or wasteland. In older cities with historical significance or overcrowding, only the bottom-up approach makes sense. And let’s be honest: regardless of the federalization of the country, the Kathmandu valley will still see a significant influx of people for many years to come.

So one example of a bottom-up approach to improving the aesthetics of Kathmandu would be to completely do away with overhead cables and wires (many cities around the world have shifted to wireless or CDMA, and we can get rid of wires in due time as well). Just getting rid of obsolete data-transfer lines would free up much-needed space, the only exception being electric cables. Another method would be to employ smart transit systems requiring minimal infrastructure – such as cable cars, relatively cheap to build (compared to an underground metro) yet large enough to carry commuters from dense areas like Sitapaila or Budhanilkantha to somewhere near Line-chaur or Ratna Park. To ease rush-hour traffic, we can turn to local ride-sharing technologies (Tootle is one example), which would allot an idle vehicle not just to one person but to anyone going in the same direction for a small price, as long as there’s space inside. We don’t always have to widen the roads; as we can learn from old cities in Europe, we can restrict vehicles within set areas to promote alternative modes of transport.

Even if we just improve the sidewalks and crossings, many people would opt to walk short distances instead of using vehicles (which we currently do with micro-buses, even to travel a distance of only a kilometer). We can replace the clutter of small buses and micro-buses with bigger scheduled buses, the way Sajha Yatayat is doing. Big buses free up traffic by fitting more people per square meter of road. Instead of allotting a massive budget and energy to constructing underground metro systems (which would also require a lot of demolition), we could opt for elevated systems such as high-capacity monorails, which occupy less space and can cut across dense areas of the city with minimal disruption (like those being modeled or tested in Mumbai, Bangalore and Guangzhou). And these are to be connected with each other – such that a person living in Koteshwor could ride a monorail up to Lagankhel and then switch to either a bus into the city or a ropeway to Lamatar. We need loops of transit systems. And all of this can only be achieved by means of a considerate bottom-up approach, not a top-down one. A bottom-up approach also saves us money and construction time compared to top-down ones. The philosophy should be to turn Kathmandu not into New York or Osaka (because we are never going to achieve that in any reasonable way), but into a livable, more efficient Kathmandu.

Of course, these are just my amateurish assumptions; it will be harder to implement these changes in practice, and there are people in our country who know more about this than I do. But we ought to encourage ourselves to think outside the box once in a while. We could also benefit from sending our technical people to train in Latin America or China, where innovative concepts for both new and old cities are being explored on a regular basis. We need to learn from the people who got it right, so that like them, we can pull our city out of the dust and into the 21st century.

And last but not least, I think we need to participate ourselves, as citizens, in the betterment of this promising city.

Personal Opinion, Philosophy

On ends and their means…

“If two people arrive at the same conclusion from two different sources of knowledge, would it be necessary to differentiate or discern the validity of these sources?”

For example, if one person becomes a vegetarian after reading Buddhist scriptures and another becomes a vegetarian after thinking through utilitarian ethics and the rationality and morality behind the suffering of animals for dietary gains, would it still warrant skeptics to be skeptical of Buddhist values, or vice versa? Would the ends justify the means?

To those who say we shouldn’t, and that all schools of thought should be given equal importance in terms of values and outcomes: what if, a few lines later, some Buddhist scripture mentioned that only men can attain enlightenment and not women, because women are not higher up the spiritual hierarchy? (This is just an example I made up, but many, though not all, traditional Buddhist scriptures do limit women’s enlightenment status and proclaim that a woman can never truly become the enlightened Chakravartin or the Buddha.)

I would assume that defendants would come to the rescue of Buddhism by saying those are “not the true words of the Buddha but later interpretations by his numerous disciples through the ages” – a textbook ‘No True Scotsman’ fallacy. Others may add that “we ought to accept the good values and reject the old and redundant ones”. I would perfectly agree with the latter line of defense, but a question would definitely come to my mind: if we are indeed to cherry-pick what we deem ‘good’ and filter out what we deem ‘bad’ from the established documents of an idea, what is the point of adopting the identity of the whole doctrine in the first place? Haven’t you clearly contradicted the original doctrine yourself? Are you unaware of your doublethink? Are you not uncomfortable having to live with the evident cognitive dissonance you’re displaying?

This was an effort to highlight one fundamental problem with the eclecticism or syncretism prevalent in the current globalized world, thanks to John Lennon’s Imagine and the 60’s hippie movement. In short, these are schools of thought that hold every human idea or philosophy to be of similar value and importance. But the fact of the matter is that this cannot be consistently true. In that sense, can we rightly equate the core tenets of Nazism to those of the Quakers? Can we equate the fundamental principles of Islam to those of Buddhism? Can we equate Advaita Vedanta to Hindutva? Can we equate superstition to science? Can we equate the values of democracy to Maoism, or freedom of speech to fascism? No, we cannot!

For a careful thinker, there are flawed ideas and there are sound and valid ones. The conclusions of the latter follow from cogent and valid premises. Not all ideas can be given equal weight, even if we do consider going through them to broaden our perspectives. (You cannot logically equate the core ideas of Adolf Hitler’s Mein Kampf with those of John Stuart Mill’s On Liberty.) This much should be evident and well thought through, not confused over.

It is important to give every idea a chance, but more importantly, there is a dire necessity (and perhaps also a great responsibility) these days for us to be able to distinguish between good ideas and bad ones. We need to ‘train’ ourselves to do so, because we aren’t born good thinkers. All in all, we must learn not to confine ourselves, as much as possible, within boxes of unchecked biases.

Futurism, Personal Opinion, Philosophy, Technology

Can machines think? 

A unique thing that a future AI will most likely do is ‘decide’ for itself, and ‘learn’ from the consequences of its decisions. The process has already begun. So how different will it be from humans, if an AI can ‘decide’ and ‘learn’ from available inputs and memory? Should we still call their cognitive algorithm ‘artificial’ intelligence, or simply “intelligence”, like we call our own?

But first, let’s have a look at how we think as individuals. Suppose we form a notion in our brain as a result of perceiving some stimuli, such as visual cues or the auditory cues of someone’s speech. Subsequently, when our turn comes to act or to form an opinion or stance about it, we rationalize, justify or scrutinize it against our memory and knowledge (or beliefs) before coming to a calculated or an emotional conclusion.


Even lower mammals think. They rationalize their actions through a much simpler algorithm: heuristics. Put simply, heuristics are prompt justifications based on emotions such as fear, hunger, attachment or lust, or on readily available impromptu memory. Humans probably just have an added dimension to this algorithm: the ability to separate emotion from pure cognition most of the time. It’s evident that such a quality is not inherent but learned. Nonetheless, it seems that only humans can learn to such an extent as to be able to travel to outer space or harvest the energy of an atom.

Pure cognition provides an additional, learned dimension to the algorithm, such that we have been able to cooperate with other members of our species, learn written languages (spoken language is innate, but writing is always learned – no wonder the global focus on literacy rates), do advanced mathematics, cook food with style, make art, train ourselves to understand science, build aircraft and so on. We are just like other mammals that can be said to think, but with a much more complex algorithm – compared, say, to rhesus monkeys, which also think, live in social groups and have opposable thumbs like ours, but cannot learn as significantly as we do.

In that sense, dogs think as well. And most of us are familiar with their behavior. Throw a ball for them to fetch, and they’ll retrieve it. Hide the ball again, and they will search for it and sniff around your body because they remember you last had it. If you show your empty hands to their eager eyes, they’ll then start to sniff around the lawn closest to where the ball last landed. This is the basic mammalian algorithm. Food is here – eat food. Food isn’t here – search for food. Food was here, now no food – search where food was last seen. Then repeat this cycle until new food is found. Forget about the old food.
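The food-search cycle above can even be written out as a toy program. This is just a playful sketch of the heuristic loop I described – the function and state names are purely illustrative, not a model from any real animal-cognition literature:

```python
# A toy version of the "basic mammalian algorithm":
# eat food if it's here, else check where it was last seen, else wander.

def mammal_search(world, last_seen=None):
    """Decide the next action from simple heuristics."""
    if world.get("here") == "food":
        return "eat"                  # Food is here - eat food
    if last_seen and world.get(last_seen) == "food":
        return f"go to {last_seen}"   # Food was here - search where last seen
    return "wander"                   # No food anywhere - keep searching

# The dog's ball-fetching behavior follows the same pattern:
print(mammal_search({"here": "food"}))                    # -> eat
print(mammal_search({"lawn": "food"}, last_seen="lawn"))  # -> go to lawn
print(mammal_search({}, last_seen="hand"))                # -> wander
```

Repeating that decision in a loop, and forgetting `last_seen` once new food turns up, gives the whole cycle.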

We do this too, but with a higher level of problem-solving ability, because our algorithm is much more complex, as I’ve mentioned above. In the dog’s algorithm, the dog tends to forget about the food of interest once new food is found. We humans, on the other hand, keep thinking about the object we lost in the distant past or the food we were denied: we look for innovative ways to search for the same food, or stay wary so as not to lose food in the same way again, or eventually secure plenty of food by learning how to farm! You may have noticed here that, unlike our canine friends, we plan for the future and think ahead. We see the bigger picture, where a dolphin or a chimpanzee simply cannot. Their solipsism is limited to survival and hereditary kinship, while ours can go beyond survival to wonder about the diseases that kill us, or even about the stars and planets that have no connection to our immediate survival – all while cooperating with other individuals of our species who aren’t our siblings in any way.


But a crucial similarity between monkey and human cognition is the process of making decisions and learning through experience (detailed memory), regardless of the vast differences in cognitive capacity. What a machine could not do until now, even a cognitively less developed dog could. But that is, interestingly, changing.

Most machines and software in use today can think and make decisions, but their decisions are essentially fixed – they don’t learn much. However, a new approach to programming and computer science is ushering in an era of ‘machine learning’. There are already bots circulating on the internet that alter their own behavior to suit their website’s or server’s agenda, learning from previous failures in order to improve. Some computer viruses may be doing this as well. Notable projects such as Google’s DeepMind, Apple’s Siri and Microsoft’s Cortana are all refined after every launch so that they increasingly cater to their users, learning relevant behavior patterns and making accurate decisions based on them. They are only getting better by the year.
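To make the phrase “learning from previous failures” concrete, here is a minimal sketch of the idea: an agent keeps a record of how each action turned out and gradually prefers whatever worked before. Everything here (the action names, the 0/1 outcomes) is illustrative and not the code of any real bot or assistant:

```python
# A minimal "learn from experience" sketch: pick the action with the
# best average outcome observed so far.

def learn(history, actions):
    """history: list of (action, outcome) pairs; outcome 1 = success, 0 = failure."""
    def avg_outcome(action):
        outcomes = [r for a, r in history if a == action]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0
    return max(actions, key=avg_outcome)

# Action "A" failed twice, "B" succeeded twice - the agent now prefers "B":
history = [("A", 0), ("A", 0), ("B", 1), ("B", 1)]
print(learn(history, ["A", "B"]))  # -> B
```

Real machine-learning systems are of course vastly more sophisticated, but the core loop – act, observe the outcome, adjust future decisions – is the same.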

I remember Noam Chomsky, who is foremost a cognitive scientist, replying to a journalist who asked him “Can machines think?” with something like: “To ask ‘can machines (computers) think’ is like asking ‘can submarines swim?’. Computers were designed to think.”

According to him, since computers were designed to think, and since to calculate one has to think, they’ve always been thinking. It’s just that they couldn’t decide on their own, or learn on their own, without some kind of human input. We witnessed machines attain the decision aspect, which propelled us from an analog age into a digital one. The next step is that we may witness machines endowed with the learning aspect as well. These two faculties, which we’ve only been accustomed to witnessing in humans and lower animals, we may see unfold in inorganic machines that actually think just like us and become indistinguishable from (or even superior to) humans. And this thought may be creepy for some of us, because after this, our inventions will have taken over the very definition of our species, Sapiens, itself (sapiens = wise/thinking). If so, we may no longer have the sweet comfort of being the only intelligent entity on the planet.

Take this idea for a moment. By just looking at a person talking to you, can you tell for sure whether they are thinking as you are, or simply going through the motions of the conversation? Can you separate their affect (emotions) from their cognition (thoughts) right there and then? Can you tell whether their thoughts are spontaneous or pre-programmed? I guess not. And we certainly couldn’t do so if, for example, humanoids passed the Turing test. Now who is a machine and who is not? Are we refusing to call ourselves machines just because we are organic? Why is our cognitive algorithm called ‘thinking’ but not that of machines, because of this organic bias? Are we not organic ‘survival machines’ as well?

Questions may also arise as to whether we should call them ‘artificial’ at all anymore, given their ability to decide and learn on their own. For a robot that passes all levels of the Turing test (textual, auditory, audio-visual), how can we tell by simple interaction whether it is thinking or not? Could you then tell, for certain, whether a pedestrian you collided with while commuting was a humanoid or a human? Other issues may come up after that, such as rights and cognitive egalitarianism (equal rights for all intelligent entities and such), but those are beyond the scope of this essay.

So the whole point of this write-up was to highlight the idea that before asking “can machines think?”, it may give us better insight to also entertain the question “can we think?”. It’s an interesting matter to ponder. Because seriously, can we really think, or are we under the illusion of thinking? Are we merely processing information and memory to form a conclusion or a specific reaction, which gives us the illusion that we are actually thinking?

Futurism, Personal Opinion, Philosophy, Science

Death of the Biosphere? 

(Transformation of the Biosphere)

Why do we consider inorganic or organic materials made by humans “unnatural”, when we do not apply the same term to mountains created by tectonic collisions, to elegant rocks eroded for ages by river water, or to parasites such as Toxoplasma infecting the neurons of lower mammals and altering their behavior?

A typical answer is that we consider “unnatural” those entities formed as a result of ‘sentient intention’ (a sentient being doing something intentionally). But if so, then even a courting swallow is sentient enough to manipulate twigs and leaves to build itself a nest – and not just that, but also to decorate it with the intention of attracting a mate. Is a swallow’s nest “unnatural” to us? And if swallows were as cognitively enhanced as we are, would their nest, from within their solipsism, be “unnatural” to them? So perhaps the only option left is to answer the question from an anthropocentric (human-centric) viewpoint.

If we are evidently products of a natural process, then isn’t it logical to assume that our tool-making and tampering with ecosystems is most likely a subset of that natural process (i.e. the universe itself)? Unless, of course, a third omnipotent entity is puppeteering our actions – for which there seems to be no justification whatsoever. The former notion fits more appropriately when we also learn that free will most probably doesn’t exist and that we all behave and make choices as a result of our conditioned heuristics (if not trained to do otherwise). That is to say, in simpler terms, that our brains – and with them our personalities and preferences – are shaped by the random events of our environment.

For instance, think about this: take a tissue sample from a certain person, Shyam, and clone him. Will the clone bear exactly the same personality as Shyam himself? Will the clone become the same Shyam or a different Shyam? The same can be observed in the differences in personality between two monozygotic (identical) twins with exactly the same genetic makeup. How is this, again, not part of nature? And how would these conditioned individuals’ future decisions, moral codes, ethics, judgments and emotions not be part of nature? If humans decide to construct a tree house, is it natural or unnatural?

Man-made things vs. natural things
We’ve been taught from an early age about the dichotomy of the man-made and the natural. But I think we are finally mature enough to question this idea. If Man himself is natural, how are man-made things technically ‘unnatural’?

It is perhaps possible that we view matters concerning the milieu in which we exist only from within our solipsism. Because in the blind eyes of reality, a plastic water bottle is as much a component of nature as the keratinous fur of a wild bear. Both come from molecules found within the universe, are made from the elements of this universe, and are released back into the universe itself. It just seems that, from within our own solipsism, we perceive a plastic water bottle as “unnatural” and grizzly-bear fur as “natural”.

Another example: we consider paper plates “unnatural”, yet we do not often say the same of the makeshift banana-leaf plates used by communities in Kerala. When both utensils are in fact used for holding our food, why is one considered natural and the other unnatural? When bacteria protect themselves from viruses with the Cas9 immune system, that is considered natural; when we inject vaccines to survive the harms of measles or polio viruses, it is somehow unnatural. But again, is it?

Moving on to the bigger picture, why do we consider even human-caused near-extinction phenomena such as climate change “unnatural”, when we consider the Ordovician–Silurian extinction (which wiped out more than 85% of species on the planet) some 440-odd million years ago simply ‘natural’? If the extinction of the trilobites or the dinosaurs was natural, why isn’t the possible extinction at the end of this human era considered the same? It most obviously has something to do with our survival. Were it not about our survival, would we still look at climate change with the grave eyes many do today? (Even so, many others still deny climate change, despite the fact that it may spell a radical change in human civilization, or even spell its end.)

Furthermore, would bringing back species of flora and fauna driven extinct by human-caused events, and reintroducing them to an ecosystem that has moved on without them, be considered natural or unnatural? Are we merely trying to bring back the Tasmanian tiger or the dodo for our own solipsistic satisfaction, or as moral repentance for our “unnatural” manipulation of our surroundings? Or is it simply because we can, and we want to see what it’s like to restore a part of the biosphere that perished from existence a while ago? In our general moral code, the natural is considered ‘good’ and anything human-made, hence “unnatural”, seems bad. We live with this dichotomy, probably thanks to our slowly evolving psyche, which is clearly outpaced by exponential technological and scientific innovations that most of us simply cannot perceive from beyond our anthropocentric views.

Alvin Toffler is an American writer and futurist.

The whole point of my series of questions is not to call for the withdrawal of ongoing conservation efforts or the global effort to curb the harshest effects of climate change. This essay is not about addressing such issues, but about something else. From our species’ perspective, going green seems utilitarian – and it is, but as an effort to survive and grow. My effort here is to point out the flaws and paradoxes in the rhetoric surrounding our existence within nature, our survival, and our conservation efforts – an attempt to present a supra-human insight into the very idea of human progress itself. I simply question whether we need the natural–unnatural dichotomy in the first place.

While contemplating such issues, I can’t help but remember George Carlin – especially that joke where he said our rhetoric of ‘saving the planet’ was so out of touch with reality, because the planet will still be here for millions of years; it’s us humans who are fucked! So we’d better say ‘save our species and other species’ instead of ‘save the planet’. Because in my opinion, as much as our manipulation of nature is for our benefit and survival, it is also a part of the process of nature (of course we are affecting other species and ecosystems because of it), but the planet will stand even if we perish as a species. Something else may replace us, but that will obviously and definitely not involve us.

My ultimate point is that we may already be at a turning point, give or take a few decades – or, for a safe margin, centuries. Perhaps 50 years from now, perhaps 200, but in our 70,000 years of walking upon this planet, we are closer to the turning point than ever before. And closing in on it exponentially!

The turning point I am talking about does not relate only to us humans, but concerns the entire biosphere. There may be many possible patterns of effects after this turning point (or rather, turning ‘period’), but I’d specifically like to talk about the two that interest me the most.

The first is the projection that leads to our species’ extinction. As simple as that. Whether through a failure to cope with climate change, or a deterrent-bypassing nuclear armageddon in the face of possible wars over basic resources like water and the failure of nation-states. Or simply an asteroid strike (one with a diameter of just 300 meters could start extinction-level chain reactions in the atmosphere if it were to strike our planet). Another stream of extinction could be extra-human: a superior intellectual uprising after the technological singularity, i.e. machines that outsmart us (Skynet from Terminator) and suddenly decide they are better off without us, treating us the way we treat less intelligent beings like ants when they get in the way of cooking food or building houses. This projection is possible, but it is dystopian and depressing, and surely humans, as survival machines, will fight to prevent it from happening. Only time will tell.

The second projection could also be considered an extinction, but I, for one, would like to consider it a ‘transformation’ of sorts. The singularity will most probably happen in this one too, but rather than the complete loss of our species, we may transcend into the digital world or cognitively merge in part with AI that computes more creatively than the smartest of us currently do (cyborgs – this has already begun; how many of us can actually function smoothly without an essential machine such as our smartphone or PC?). This particular projection is, in my imagination, the one most compatible with the Kardashev scale of the progression of civilizations. Since we are currently somewhere between Type 0 and Type 1, a transcendent singularity may lead us to Type 1, where we can effectively and completely manipulate and control all of the earth’s energy sources, including the weather itself (or even beyond, to Type 2, where we may enclose the Sun within a Dyson sphere to harvest its energy).

The Kardashev scale. Human civilization is currently between Type 0 and Type 1. We are not quite masters of the planet………yet!

This transformation may in part assimilate with the biosphere as we see it now, as what we call “artificial” or “unnatural” selection becomes more dominant than “natural” selection itself. In simpler terms, literally “everything” could then be controlled with precision by a certain class of sentient, intelligent beings. The dominant “force of nature” would most probably no longer be a series of random events but of carefully thought-up ones, devised by thinking beings for their own benefit. The previously fully organic biosphere may begin to intersperse inseparably with inorganic or semi-organic entities such as silicon or carbon buckyballs, or even impalpable entities such as source code and the binary languages of programming, rather than the system previously achieved only through purely organic DNA. Now, this might be a good time to interject my lingering question: will this too be considered “unnatural” by talking, sentient beings like us? In this instance, which aspect is part of nature and which is not?

For those of you who are anime/pop-culture fans, here’s an analogy with a thought experiment: would the sentient Autobots and Decepticons from the Transformers universe be considered “unnatural” just because they are inorganic beings? Could they not be as much a part of the universe they exist in as the human characters they befriend or wish to destroy, respectively?

An Autobot and a human from ‘Transformers: The Last Knight’

So I think, in the end, the word “unnatural” fails outside of our human solipsism. Beyond us, it probably bears no logical significance. It might be a semantically useful tool in rhetoric and motivation, or for individuals who think going out for a hike brings them “close to nature”, which may also lend a transient sense of meaning to their brief lives. But they are unaware that the metropolitan concrete apartment in which they reside is as much a part of the universe and of nature as the rocky hills of the national park they enjoy hiking upon. The only difference is that we shaped resources one way to build an apartment, and tectonic plates utilized the same resources another way to build the rocky hills. We do it with intention; the seismic movement of the earth has none. But eventually, we’re both accidents upon the planet.

In the end, “we are all stardust”, to quote Carl Sagan. Such realizations and thoughts give us an immensely broader, supra-human perspective on existence and reality. So it’s always worth asking such questions about the daily things we do and the ideas we ponder: what do they mean to us, and what do they mean without us?

In this context, I’d like to sign off with two open-ended questions: are we nearing the ‘death’ (or transformation) of the biosphere? And how do we justify environmental conservation efforts if even human progress is technically ‘natural’?

Egalitarianism, Personal Opinion, Philosophy, Rationalism, Secular Humanism

“But it’s their culture!”

As a Humanist, I believe ethics and morality should be consequential – judged by the outcomes of collective human actions rather than from a virtuous standing.

So preserving a particular faith-based, cultural, ritual or political practice in place of reason, freedom of speech and fundamental human rights certainly seems very inconsequential.

This isn’t just a personal hunch. History teaches us that in doing so (preferring harmful cultures and traditions over reason), more harm than good can be brought upon humanity, as seen across many different cultures and societies.

Sati pratha, the caste system, slavery, colonialism, religion, political fundamentalism, female hysteria, witch hunts, the Spanish Inquisition, xenophobia, the Rwandan Genocide, ethnocentrism, ethnic cleansing, cult worship, capital punishment, bans on abortion, bans on contraceptives and what not! If all of these teach us one thing, it is that adopting reason over the lack thereof is far more beneficial for everybody. I admit that the practice of reason is hard for everyone, but it is nonetheless worth a shot.

Modifying our cultural practices to suit the progressive and liberal zeitgeist seems like the best option. For instance, had we not done so in some way, we would still be burning widows at Pashupatinath and beating Kamaiyyas for ruining a batch of maize. Even if we are in denial, sooner or later our societies will be subject to that change regardless of our conservative sentiments.

If irrational practices can change to suit such values, good; but if they refuse to change, they will have to go sooner or later. People like me think sooner is much better than later. So why stop voicing against them, even if the majority has no problem with them?

“But it’s their culture” is a perfect example of a serious kind of genetic fallacy: a logical fallacy that may appeal to our emotions through historical sentiment in the short term, but loses its rational significance in the long run.

This is why I consider Voltaire a great champion of farsightedness; my opinion resonates with some of his in his Letters Concerning the English Nation. History has shown us that Voltaire was right about many aspects of the collective human condition.

And finally, I’d want to sign off with my all-time favorite slogan: No idea is above scrutiny, no human life is below dignity!

Personal Opinion, Philosophy

Life advice and Personality 

Life advice, no matter how well intended it seems, may not apply to us universally.

The reason I don’t bother to give life advice to anyone unless they ask for it first is that doing so is utterly futile: they’re not going to listen, and even if they do, it may not apply to them.

What I think we fail to understand about human behaviour when approaching someone to advise them on life is that, in terms of achieving our goals, people fall into three broad categories.

  1. People who like to constantly stay in their comfort zones.
  2. People who can sacrifice their comfort zones with ease and tend to avoid them while trying to attain a goal.
  3. People who like to stay in their comfort zones most of the time but can sacrifice them when the need arises.

None of these is a ‘good’ or ‘bad’ quality; they are simply qualities of personality that exist. (And note that this has nothing to do with hard work, emergencies or urgencies.)

We can find satisfied as well as unsatisfied people carrying any of these qualities in every walk of life. So one piece of advice may not work for all, because first we need to determine the person’s type of personality.

And how do we do that? By a deep understanding of their nature, possibly through an open and an honest conversation.

Perhaps this is one reason why people who understand each other, in any relationship or bonding process, are less likely to fall apart. I would also like to assume this is one reason why some people are satisfied with their lives in more than one possible way: simply because they received suitable life advice from someone, somewhere, who understood basic human nature. On the other hand, it is also possible that such individuals understood themselves and made their life choices on their own, regardless of the people around them.

So identifying individual personality types seems pretty important, for ourselves as well as for those around us, when suggesting something about living. The best life advice, to me, is holistic without being too general.


(But at the end of the day, this is just my own philosophical postulation, not authentic behavioural science or an established philosophy of life.)

Philosophy, Rationalism

Is Dualism Dead?

Spoiler: I think we can confidently say today that the basic idea of Cartesian Dualism (dichotomy of mind and body) is effectively dead and debunked, if it is used to explain the nature of reality.

So, in addition, is metaphysical solipsism: effectively dead. Methodological solipsism may not be (there’s a difference), so hang on.

Cartesian Dualism covers a mind-body dichotomy similar to the concepts posed in Adhyatma or Vedanta in the Vedas, or in Buddhism (indeed, in almost every spiritual faith system). So we do not have to give each theological variety special consideration or higher ground in philosophy, as the core idea is the same as in Cartesian Dualism itself.


The mind (consciousness), as we know today, is a product of observable, material phenomena involving sodium, calcium, chloride, neurotransmitters, action potentials, neurons, nerves, their intricate arrangements, and their interplay with myriad kinds of external stimuli. The evidence for this is too heavy for it not to be considered in explaining the origin of the mind (though unfortunately not yet enough to explain how it works). Let’s try to find consciousness without these, shall we? Or injure a brain-stem region and not fall unconscious? Or, alternatively, let’s try to code a supposedly self-aware AI without silicon chips, circuits, photons and electricity. We cannot even fathom such feats, can we?

Back in the time of Descartes, or even before that, during the inception of the great spiritual faiths, this much was not known, so their idea of dualism is understandable and intuitive. Empirical evidence, however, is counter-intuitive, which is possibly why dualism still lingers around much of the philosophical community. I admit, sadly, that a critical understanding of philosophy free of confirmation bias is hard, and the idea of dualism, despite being fallacious, is pleasing.

Now, metaphysical solipsists and idealists will argue that observable evidence is itself a result of our conditioning and experience (subjectivity) and so cannot be relied upon. But that is a circular argument, and it does not explain why, if reality were a product of our minds, others around us experience similar subjective things not much differently than we as ‘self’ do. Cognitive science studies subjectivity better with each passing day and has been producing observable, reproducible results with a considerable degree of universality. So the idea that subjectivity cannot be studied rationally has long expired.

In this respect, Idealism, Dualism and metaphysical solipsism do not carry much ground for the purpose of explaining the nature of our consciousness. I repeat: useless for the purpose of explaining, but useful for the purpose of questioning, since philosophy can be considered the art of questioning. It is not, as a whole, completely dead, as Stephen Hawking and some notable physicists have declared. Philosophy may not be able to answer the questions it asks, but it is important to remind ourselves that the formulation of every hypothesis follows questioning born of curiosity. It is vital to understand, again, that curiosity cannot by itself answer the questions it asks. So a system or a method is required to rid the observer of the subjective biases and conditioning that could skew empirical observation.

Now, some philosophers like to interpret Dualism through today’s scientific understanding, relating the mind-body dichotomy to the bridge between subjectivity and objectivity. That may be understandable for many intellectuals, but I think it falsely justifies an ingenious yet expired idea. We can find better ways to ask questions about the nature of qualia, our experiences born of subjective perception, without obscuring the already clear link between the brain and the mind.

To sum up, I think that instead of dedicating our time and effort to the circular arguments posed by dated concepts like dualism, philosophers and scientists might better utilize their resources by asking important questions about the very details of the origins of consciousness and its functions instead.

Altruism, Philosophy, Psychology, Rationalism, Secular Humanism

Effective Altruism

When you see a beggar or a homeless person on the street asking for some tip, what do you do?

Most of you reading this tend to hand them some change as you pass by, without giving the action much thought. Others are undecided and, perhaps depending on mood, sometimes give and at other times don’t. Then there are those who never give out change at all, for a variety of reasons known only to themselves: whether they are themselves broke, don’t want to lose hard-earned money, are emotionally indifferent, or think it’s not an effective move for solving the beggar’s problems once and for all.


Let’s imagine for a while that a neutral onlooker is observing one person from each of these groups as they pass a beggar. The first will be judged as benevolent (and rightly so), the second as hesitant, and the last as a miser, a ‘kanjoos’ or ‘daya nabhayeko’.

This blog actually focuses on the latter, non-giving group of people. In fact, I’ll go even deeper, into a subset within that group: those who do not believe in charity that has no potency for change (especially the last of the reasons above). Hence the term effective altruists.

I’d like to consider myself an effective altruist, even though I haven’t really participated in any major philanthropy so far. I belong to the third group, for I simply do not think that giving a man a fish for a day will solve his problems in any lasting way.

Now, in this age of individualism, you may argue that giving them money for a day will make you “feel better”. Feel better you may, but the short-sightedness in this way of thinking will not reduce the number of beggars on the street; in fact, it may even make matters worse for them by encouraging begging. You create a vicious loop of begging instead.

This analogy was my effort to help readers grasp the concept. It would surely help if you all were to briefly learn about the very psychology behind philanthropy.

What is altruism and why do we indulge in charity?

Altruism is not anthropocentric, as most people tend to believe. The meaning of being human is not defined solely by the joy we find in giving. To give is not only to be human; to give is to be an animal, as altruism can be observed in hundreds if not thousands of species, vertebrate and invertebrate alike.

Perhaps the best explanation of biological altruism has been provided by the evolutionary ethologist Richard Dawkins in his book The Selfish Gene. He explains that we are all survival machines for the genes that code for our bodies, and for the genes to survive, the survival machines must be kind, empathetic and protective, even at the cost of one or two individuals, so long as their genes are safely passed on to their offspring. This explains why parents rush into a burning building to save their child, why animals give out warning calls when they spot a predator, and why we feel empathy for the plight of other humans.

All major and minor acts of philanthropy throughout human history rest on this single fact: our urge to survive. We act kindly because we want the human race to survive. The same principle applies when we talk about the ‘collective good’ or ‘greater good’, whether born of religion or otherwise. Our psychology has been shaped in much the same way, catering to the survival of our genes, when it comes to donation.

So why think while giving? Give away then! Right? Not entirely.

Bring in reason and evidence and we have effective altruism

Like I said before, the meaning of being human is not solely defined by our capacity to empathize. It is also defined by our ability to think and reason, and to make things work when manipulating the nature around us for our benefit. This is what separates us from other species (a fact often wrongly used by anthropocentrists to glorify our deluded sense of superiority). So there is a reason why the word effective is emphasized.

Compared to the act of simply giving away money, giving effectively can matter a lot. First, it ensures that the money you spend provides the maximum good or benefit for that sum: a utilitarian mindset. Second, in this age of information overflow, fact-checking and empiricism ensure that you are not hoodwinked by fraudulent or corrupt organizations. And lastly, you gain the satisfaction that your work is actually helping to change people’s lives for the better, because you were smart enough to think responsibly before setting out to donate.

Effective Altruism (or Effective Philanthropy), a means to meet charitable ends spearheaded by the moral philosopher Peter Singer through his two books The Life You Can Save and The Most Good You Can Do, is gaining popularity, especially among self-aware, conscious and responsible people, and is embraced by reputable organizations such as Oxfam, UNICEF and GiveWell. Some core aspects of this new philosophical movement are discussed briefly below.

Evidence Based Philanthropy

Effective philanthropists, whether individuals or organizations, opt for an empirical approach to giving. It is imperative to research thoroughly and adhere to randomized controlled trials, meta-analyses, research evidence and the general scientific consensus.

This is to prioritize the area of charity so that the sum you spend is likely to bring about the maximum benefit. Some notable examples are Bill Gates and Elon Musk.

Bill Gates

Bill and Melinda Gates, through their foundation, have delivered billions of dollars’ worth of effective charity to fund vaccines, infectious disease prevention programs and research in developing nations, as a result of which millions of children worldwide receive essential vaccines for free or at reduced cost. The end result: lower infant and child mortality rates and greater national productivity.

Elon Musk

I’ve brought up Elon Musk as another example because, unlike Bill Gates, his philanthropy is mostly focused on individual research, primarily in technology, so as to inspire pioneering innovations among enthusiastic scientists, science-entrepreneurs and researchers. This is to make the point that effective altruism is not limited to delivering responsible, empathetic charity to poor people; its scope extends to any activity that helps better human (or animal) lives.

Consequential Approach

Effective altruists are consequentialists, i.e. those who hold that the consequences of their actions are the only basis for judging whether those actions are right or wrong. That is to say, if you donate for a particular cause and the end result bears the desired benefits, your action can rightly be deemed effective or successful. In short, their ethics being consequential means they are judged by the results of their actions. And since in most effective philanthropy the means is scientific and fact-based, the end is often successful. I’ll again cite the Bill and Melinda Gates Foundation, perhaps one of the most influential and ethical effective charity foundations, one that has made significant positive changes in people’s lives.

Egalitarian Mindset

For an effective altruist, no human is above another. In practice this may not be consistent, but most tend to consider people in a developing nation to have equal value to people in their own community. While most of their effort is focused on reducing human suffering in a selfless but thoughtful manner, some altruists also argue for extending their moral compass to the ethical treatment of animals.


Cost-Effectiveness

Since money is hard-earned and doesn’t come easily, it is common sense to be strategic and careful when spending it, even for a noble cause. Taking a utilitarian approach, most effective altruists go for the cheapest commodities and materials that bring the most benefit to their cause. Many nowadays even think in terms of QALYs (Quality-Adjusted Life-Years) saved per dollar and DALYs (Disability-Adjusted Life-Years) averted per dollar; these are useful indices for assessing the improvement in the quality of people’s lives. Whatever saves a dollar while still maximizing the benefit, effective altruists tend to go for after much calibration. This allows money to be literally ‘well spent’.
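As a back-of-the-envelope illustration of this per-dollar reasoning, here is a minimal sketch in Python. The interventions and all the numbers below are entirely made up for illustration; they are not real charity data, and a real evaluator would use far more careful estimates.

```python
# A toy cost-effectiveness comparison: dollars spent per DALY averted.
# All figures are invented for illustration only.

def cost_per_daly(total_cost, dalys_averted):
    """Dollars spent per DALY averted -- lower is better."""
    return total_cost / dalys_averted

# Suppose the same $100,000 budget could fund one of three programs:
interventions = {
    "bednets": cost_per_daly(100_000, 2_000),      # 50.0 $/DALY
    "deworming": cost_per_daly(100_000, 1_250),    # 80.0 $/DALY
    "scholarships": cost_per_daly(100_000, 200),   # 500.0 $/DALY
}

# The effective altruist picks whichever averts the most suffering per dollar.
best = min(interventions, key=interventions.get)
print(best, interventions[best])   # bednets 50.0
```

The point of the exercise is not the precision of any single number, but that making the comparison explicit forces the giver to confront trade-offs that intuition alone would hide.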

Cause Prioritization

The cause is prioritized, and usually a single cause is taken into consideration. This allows room for proper planning of logistics, makes the end result easier to assess and measure, and lets one work step by step to deliver the best services or programs.

For example, instead of simply donating money to poor people, effective altruists focus on the core aspects a certain community most needs (such as vaccination or family planning) and deliver accordingly, improving that sector first before moving on to others.


Criticisms

The most vocal philosophical criticisms of Singer’s Effective Altruism dig at its utilitarian aspects, while commending the motive it carries. As in John Stuart Mill’s utilitarianism, mentioned above, this is the act of doing the maximum amount of good. Critics argue that utilitarian views in philanthropy may seem strategically beneficial, but in the process of weighing out options they may miss quite a lot of important sectors that require more attention, even if it doesn’t look so on paper.

One important area of criticism concerns the over-reliance of self-described effective altruists on third-party institutions (or ‘evaluators’, such as Charity Navigator) who do their research for them, rather than the altruists doing it themselves. This can at times run contrary to the core principles of effective altruism, and the reliance is in itself a weakness of this otherwise noble concept.

A Lesson To Be Learnt

So let’s come back to the initial question: When you see a beggar or a homeless person on the street asking for some tip, what do you do?

Reference and Further Reading…..

If you want to learn more about effective altruism, start with some of the links provided below. And if you are not satisfied, there are a number of links on some valid and some invalid criticisms of effective altruism that you can go through.

  1. Biological Altruism, Stanford Encyclopedia of Philosophy
  2. Effective Altruism, Wikipedia
  3. The Life You Can Save: How To Do Your Part To End World Poverty – Peter Singer
  4. The Most Good You Can Do: How Effective Altruism Is Changing Ideas About Living Ethically – Peter Singer
  5. Altruism, Wikipedia
  6. Effective Altruism, Website
  7. Basics of Altruism, Psychology Today
  8. Bill and Melinda Gates Foundation, General Information
  9. Combining Empathy with Evidence, Center For Effective Altruism
  10. Ted Talk – Peter Singer: The Why and How of Effective Altruism
  11. Effective Altruism and Its Critics, Journal of Applied Philosophy, 2016
  12. Philosophical Critiques of Effective Altruism, By Prof Jeff McMahan
  13. Effective Altruism Has 5 serious Flaws, Avoid it and be a DIY Philanthropist – Hank Pelliser 
  14. Altruism, The Selfish Gene, Richard Dawkins
Personal Opinion, Philosophy

Authority vs Learning

In the study of logic and in philosophy, a logical fallacy is any argument that is erroneous in its logical structure or its effort at reasoning. That said, this blog of mine will focus on one particular fallacy relevant to the matter at hand: the appeal to authority, or (in Latin) argumentum ad verecundiam.

So what is it? The appeal to authority, simply put, is an erroneous argument made in rhetoric and education of the form ‘something said by a person in authority must be true, because of that person’s authority or expertise in the field’. Sometimes such statements can be reasonable, for instance when invoking expert scientific consensus on something empirical (provided you have a good understanding of the subject matter), but in most other instances the argument is likely to fail logically.

You might be wondering why I’ve chosen this particular fallacy among a myriad of others. As you might already have guessed from the title, my intention today is to address the impact of authority on the very process of learning in the 21st century, the good as well as the bad, with a major focus on the health education sector within South Asia. Being a medical doctor who recently graduated from a medical school in Nepal, I’ve had varying experiences with authority myself. So may I start from the positive side?

The Benefit…….

Especially in the medical sector, if you ask any professional or student at most institutes in most parts of South Asia, they will opine that authority is a must in this discipline. There is a set hierarchy, with the student at the bottommost level and the professors residing at the top. It is imperative for students to strictly obey those above them until they themselves become the authority one day.

That sounds reasonable, doesn’t it, at least for the structural stability of a service system? Imagine what would become of the military were authority to be abolished and every cadet and lieutenant to work on their own: there would be no order, and chaos would ensue in the line of battle.

Since the very inception of the discipline of medicine in antiquity, knowledge has been passed from an experienced and skilled master to an inexperienced and less-skilled apprentice or pupil. This is true of any discipline if we look back in history. What’s a better way to learn than to do so from someone deemed wiser than yourself?

This way is effective for many reasons: one being the maintenance of order, as I mentioned earlier, and another being that learning with a master is faster than learning without one. Plato, Aristotle, Lao Tse, Hippocrates, Chanakya, Sushruta, Ibn Sina, Ibn Battuta, Einstein, Alexander Fleming, Jonas Salk and the like all had teachers from whom they sought wisdom and knowledge. Imagine the rate of human progress had these personalities been forced to learn everything by themselves, without any tutoring.

It is possible to learn something without a teacher or an expert, but the process will be slower. And it is naturally implied in a teacher-student relationship that the teacher has immediate authority over the knowledge the pupil has come to gain, until and unless the pupil reaches the level of the teacher.

So authority is a human construct, and a rather useful one; it cannot be discredited wholesale as only adversely affecting the process of learning. Authority eases the process of learning.

The Problem…….

I will reiterate what I wrote above: it is naturally implied in a teacher-student relationship that the teacher has immediate authority over the knowledge the pupil has come to seek. The problem with this notion, in the 21st century at least, is that it is not always consistent.

Getting back to South Asia, many inconsistencies can be found in the medical education sector. First, respect is demanded from students rather than implied. Second, pupils are assumed to be entirely ignorant of the subject matter. Third, questioning and academic dissent are somewhat frowned upon. And lastly, teachers themselves are not always up to date enough to pass on sound knowledge to their students.


Go back to the military analogy I made to explain the usefulness of authority for maintaining order. While it may be a useful comparison, and while it may work in many instances, it is not always consistent either, because the discipline of medicine consists of both empirical science and art: both rational and creative thinking. Medicine needs more thinkers and philosophers and fewer orderlies. Authority and respect here are better implied than enforced, contrary to what has been normalized in the majority of South Asian medical schools.

Thinking for oneself, while also remaining conscious of the beneficial aspects of authority, is of utmost importance in today’s age of the internet. This is to ensure that we do not sway away from intellectual humility while we simultaneously learn to be creative thinkers.

Anecdote and Evidence…….

Such is the culture out here in the subcontinent that we are taught to be morally obliged to a certain authority from an early age. Respect priests, parents, teachers or relatives no matter how mean their personalities may be, because the ‘collective’ is assumed to be better than the ‘individual’. The enforcement and demand of respect or authority is validated by society, so it is nothing surprising for a teacher in this region to demand the same from their pupils.

Reflecting on my own life as a student, my experience with different types of teachers, with their differing personalities and qualities, may also have fostered my understanding of the impact of authority in my region. With encouraging and enthusiastic teachers, who implied respect and enjoyed teaching, I enjoyed the subject and did rather well, as they were interactive and the classroom environment was less tense. The same was true for my peers.

On the other hand, in sessions with sulky or strict teachers who demanded respect and enforced authority despite their arrogant or bothersome teaching methods, I performed less well; I did not enjoy their teaching for the same reason their demanding personalities did not appeal to me. The atmosphere of stress around them called for self-study, and self-study without an effective teacher is actually slower than expected.

It is not just my anecdotes that suggest teaching methods directly affect students’ performance in a subject; there is plenty of research justifying the scenario above. Implicit respect and authority (teacher-student interactive methods) make for a much better and more effective learning process than enforced or demanded respect and authority (teacher-centered methods).

Viva Voce…….

For every medical student in the region, these two syllables are synonymous with what we can rightly call our worst nightmares. Such is the fear that vivas in my part of the world have earned their own memes.


Such is the nature of reality out here that in most institutions, viva scores are left entirely up to the examiner. If a student is not well dressed, has a habit of making illustrative hand gestures, says the word ‘however’ too much, fumbles, or is in a state of nervous breakdown (which is natural and nothing to be ashamed of), he or she may be liable to fail simply for not meeting some examiner’s standards, all of which is completely irrelevant to the candidate’s academic and curricular potential. And the fact that upsets me most is that there are crude, sadistic examiners who entertain themselves with students’ misery.

While viva voce exams are effective strategies for screening students who display sufficient knowledge of a particular topic, they pose the risk of unnecessary and preventable subjective bias if not designed systematically. This can lead even the most hard-working, talented and knowledgeable students to fail their exams over some subjective variable. Vivas need to be modeled strategically and rid of biases for them to be as effective as they were meant to be.

Old big-shot professors from reputed institutes, big names, respectable people who set an air of dominance and ego in whatever room they occupy. When a student makes a small mistake, many examiners blow it out of proportion and humiliate them in front of their faculty and peers, which only discourages them. When a student coughs too much, examiners get irritated and biased scores are thrown around. When an examiner makes a mistake and a candidate corrects him, marks may be deducted for daring to question authority. If a patient lies about his symptoms in a bedside viva during clinical exams, the poor candidate is liable to fail even if he is well-versed in both theory and practicals.

Like I said before: no standards, no criteria, no objective rules; just plain subjectivity hijacked by authority maintains the thin ice on which a student’s career depends. A very troubling revelation I came to experience during my final year of medicine. Although I was lucky enough to escape (pass) unscathed, it was unsettling to witness close friends and able peers fall victim to this flawed model of examination.

The Take-Home message……..

During my contract as a medical officer in the TUTH Emergency, I happened to come across many foreign elective and exchange students posted there. Talking to them, and from research, I came to learn that failing is not considered as humiliating in their part of the world (Sweden, Australia and New Zealand, to name a few) as it is out here. In their regions, failure is seen by many as a part of life, whereas in our authority-worshiping region, failure is constantly associated with social stigma.

And authority only makes it worse. In most colleges, what an old professor says is often taken as correct even in light of new, updated evidence. When we answer questions, ‘it has to be exactly as written in the book’. When students dare to think for themselves, they are humiliated, especially in vivas, and are sometimes even intentionally failed. Suicide rates among medical students in India alone are among the highest in the world, and I’m not even including data from other SAARC nations. The medical sector is itself a stressful field of service; add to that an ever-stagnant, authority-conforming mentality, and the stressors double in proportion.



Our region may be one of the fastest-growing in the world, with rapidly advancing healthcare facilities and health tourism bringing in foreign wealth at an exponential rate, but what is the value of such growth when we ourselves are falling victim to it?

The take-home message, simply put, is that the sole purpose of authority should be to ease the process of learning. Once authority itself becomes an obstacle or a pain for learning, we face a troubling regression in the quality of all forms of education, let alone that of medicine.


  1. Teaching Methods and Students’ Academic Performance
  2. Your Logical Fallacies
  3. Suicide among medical professionals in India