Hey there, thanks for stopping by. Click the Blog tab for the recent posts.

So what is my take on AI? Before 2016 my knowledge and perspective on the concept of Artificial Intelligence was based almost entirely on science fiction novels, movies and television, with a single exception. The only formal academic insight I had into the field was through the great lecture series by Prof. Patrick Henry Winston on the topic of Artificial Intelligence. Someday I will talk about how that changed my perspective and pushed me further towards AI, but for now let's just say that before 2016 I mostly bought into the picture painted by Hollywood, where AI is depicted as something as cool as S.A.R.A.H. from Eureka or as nightmarish as Skynet from Terminator. Up until I graduated, all I knew of the field was that the world must be pretty close to creating one of those two extremes. Little did I know that we are very far away from the goal of making a program that will become sentient.

It was a childhood dream of mine to build an intelligent robot, which I imagined in the form of an intelligent house like the earlier mentioned S.A.R.A.H. But the interest was lost somewhere amidst college, university and work, until it was rekindled in 2016 when my focus changed from Telecommunication to Machine Learning. Through the year my perspective on AI changed a few times. I am not entirely sure whether that was a good thing or not, but I prefer to believe it's because I am still wetting my toes in the vast ocean of AI, and so my view keeps changing with the different ideas I come across on the subject as I progress.

Up until mid 2016 I saw AI as a reality that was possible through programming. The concepts of superintelligence and, ultimately, the singularity were in the category of possible sciences. This was not based on any practical knowledge of the field, but on the movies and such. So I was an ignorant advocate of the view that someday the end of humanity will come, like the Judgement Day depicted in Terminator. But slowly, as I read more about AI (where we are, what we can do and what we can hope to do), the idea of a sentient machine moved further and further into the future.

By mid 2016 I was pretty sure, maybe moronically, that machines will never become sentient. But one day, as I was thinking, I realized that making a program intelligent and the idea of a robot which is sentient may not be the same thing. The idea came to my mind through my religious background, in which life, or being, is a continuous series of the rise and fall of thoughts. According to what I believe, at death the thought pattern simply goes from the last thought of this life to the first thought of the next life. So why couldn't that thought pattern jump from this body into a sufficiently complex machine body instead of a biological one?

I am still very, very new to the field, thus my opinions might not carry any weight whatsoever. Yet I like to view the reality of AI through three separate questions.

  1. Do I think Armageddon will come through machines?
  2. Do I think we will ever be able to program a machine to the point that it becomes self-conscious?
  3. Do I think there will ever be machines which are sentient?

The 2nd and 3rd questions I will take up in separate articles in the future. But for now the answers are: yes, no, and yes.

So why do I think that machines will bring Armageddon? Of course I am not going for the dramatic ending where humanoid robots walk the streets with machine guns, enslaving mankind. I am talking about the end that can come from the fact that a program doesn't have a sense of right and wrong (morality). Yes, true, we can program it with a huge load of rules on what is right and what is wrong. But firstly, we humans still cannot agree on a universal concept of morality; we are still slaves to the belief systems in which we were raised or the religions we believe in. And secondly, even we humans come to dilemmas now and then when making decisions where concepts like 'greater good' and 'every life matters' come into play. If a human makes a wrong decision, the impact would be more or less local; looking at world history, no single person has ever caused an extinction-level event. But a program has this ability. A command as simple as 'save the world' can have many different outcomes. The program might take it literally and try to save the world (the Earth) by logically deducing that the human race is the reason the Earth is dying, and thus decide through pure logic that the best way to save the world is to eradicate the human race, and that the most efficient way to do so would be to release a biological virus. So the point is, at this stage I believe that, in the future, giving too much control of our infrastructure and other systems such as military arsenals to sufficiently intelligent programs may be efficient, yet it is not without risk. Maybe we will someday come up with the means to give programs more common sense, but even so, we humans are still destroying each other for numerous reasons which, from our own points of view, are morally correct. With that in mind, it is not too far-fetched to fear the danger of amoral programs making decisions about our day-to-day lives.



2 thoughts on “Home”

  1. Studying computers myself, and while I think that AI is very interesting and needs to be researched, I agree with you. I've used the exact same argument as you mentioned about AI seeing humans as the factor in the earth being wrecked (because it's the truth, let's be honest). Therefore, if AI was made to protect the earth, would it not use its logical binary, its boolean yes or no after analysing data, to destroy the thing that created AI and that also destroyed the earth's living resources? After all, it can't have the sentimental attachment that humans have, which makes them hesitate before pulling the trigger and gives people a chance to save themselves. Without emotional bias and judgement getting in the way, a logical computer will not hesitate if it could make decisions without needing approval, if it was clever enough to make decisions for itself on a massive scale, or to realise that humans are the issue. Very interesting to come across this blog post and interested in reading more. Subscribing!


    • It seems that the research into practical AI is progressing at an accelerated pace, while the research into the ethics, safety and existential threats surrounding AI receives less focus. Like the apes, who might have been the most intelligent species once upon a time before the dawn of Homo sapiens, and who now have no say in how they are allowed to live in the world, a superintelligence might take that freedom away from us too. Therefore I believe it is worthwhile to focus more and more on the 'what if' questions of AI right now, before someone actually succeeds in inventing the last invention of the human race.
      And thank you for the nice comment and for subscribing. 🙂

