Exploring NLP 05 – Minimum Edit Distance 3/4

In the previous article we discussed how to use a grid approach to calculate the ‘Levenshtein distance‘, and I mentioned that there is a shortcut to fill the grid. Let’s see what that shortcut is.

We will use the same example as in the previous article: transforming the word ‘five’ into ‘four’. If we go through the method discussed in that article, we end up with

ε f i v e
ε 0 1 2 3 4
f 1 0 1 2 3
o 2 1 1 2 3
u 3 2 2 2 3
r 4 3 3 3 3

So what is the shortcut?

Step 1 – Create the empty grid

ε f i v e
ε
f
o
u
r

Step 2 – Fill the first row and the first column starting from 0, incrementing by one each time.

ε f i v e
ε 0 1 2 3 4
f 1
o 2
u 3
r 4

Step 3 – This is where things are a bit tricky. We have to fill one cell at a time by taking the minimum value of the three neighbouring cells (west, northwest and north) and adding one to it; EXCEPT when the cell’s column and row letters are the same. In that case we fill the cell with the northwest cell’s value.
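In code, this cell rule might look like the following minimal Python sketch (the function and parameter names are my own, purely for illustration):

    def cell_value(west, northwest, north, row_letter, col_letter):
        # Exception: when the row and column letters match, copy the northwest cell.
        if row_letter == col_letter:
            return northwest
        # Otherwise: the minimum of the three neighbouring cells, plus one.
        return min(west, northwest, north) + 1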

Let’s see how it works by filling the example grid.

ε f i v e
ε 0 1 2 3 4
f 1 x
o 2
u 3
r 4

We need the value for ‘x’. We cannot apply the general rule because this is the exception (both the column and row letters are the same: ‘f’). So we fill ‘x’ with the value of the northwest cell.

ε f i v e
ε 0 1 2 3 4
f 1 0
o 2
u 3
r 4

Now let’s fill the next cell.

ε f i v e
ε 0 1 2 3 4
f 1 0 x
o 2
u 3
r 4

To fill the ‘x’ we see that the letters are not the same (‘i’ and ‘f’), so we can apply the rule without worry. We consider the three values (0, 1 and 2), take the least of them (which is 0) and add 1 to it. So the value of x is 0 + 1 = 1.

ε f i v e
ε 0 1 2 3 4
f 1 0 1
o 2
u 3
r 4

Using this method we can quickly fill the grid.

ε f i v e
ε 0 1 2 3 4
f 1 0 1 2
o 2
u 3
r 4

ε f i v e
ε 0 1 2 3 4
f 1 0 1 2 3
o 2
u 3
r 4

ε f i v e
ε 0 1 2 3 4
f 1 0 1 2 3
o 2 1
u 3
r 4

ε f i v e
ε 0 1 2 3 4
f 1 0 1 2 3
o 2 1 1
u 3
r 4

ε f i v e
ε 0 1 2 3 4
f 1 0 1 2 3
o 2 1 1 2
u 3
r 4

ε f i v e
ε 0 1 2 3 4
f 1 0 1 2 3
o 2 1 1 2 3
u 3
r 4

ε f i v e
ε 0 1 2 3 4
f 1 0 1 2 3
o 2 1 1 2 3
u 3 2
r 4

ε f i v e
ε 0 1 2 3 4
f 1 0 1 2 3
o 2 1 1 2 3
u 3 2 2
r 4

ε f i v e
ε 0 1 2 3 4
f 1 0 1 2 3
o 2 1 1 2 3
u 3 2 2 2
r 4

ε f i v e
ε 0 1 2 3 4
f 1 0 1 2 3
o 2 1 1 2 3
u 3 2 2 2 3
r 4

ε f i v e
ε 0 1 2 3 4
f 1 0 1 2 3
o 2 1 1 2 3
u 3 2 2 2 3
r 4 3

ε f i v e
ε 0 1 2 3 4
f 1 0 1 2 3
o 2 1 1 2 3
u 3 2 2 2 3
r 4 3 3 3 3

So we get the same grid as before. This time we didn’t have to worry about operations such as add, delete or replace; we just had to look at the values of the adjacent cells and the column and row letters.
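The next article walks through the code properly, but as a preview, here is a minimal sketch of the whole shortcut procedure (all names are my own choices):

    def levenshtein(source, target):
        # Steps 1 and 2: create the grid, then fill the first row and
        # the first column counting up from 0.
        grid = [[0] * (len(source) + 1) for _ in range(len(target) + 1)]
        for j in range(len(source) + 1):
            grid[0][j] = j
        for i in range(len(target) + 1):
            grid[i][0] = i
        # Step 3: fill one cell at a time using the shortcut rule.
        for i in range(1, len(target) + 1):
            for j in range(1, len(source) + 1):
                if target[i - 1] == source[j - 1]:        # row and column letters match
                    grid[i][j] = grid[i - 1][j - 1]       # copy the northwest cell
                else:
                    grid[i][j] = min(grid[i][j - 1],      # west
                                     grid[i - 1][j - 1],  # northwest
                                     grid[i - 1][j]) + 1  # north
        return grid[-1][-1]

    print(levenshtein("five", "four"))  # prints 3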

Now that we have gone through three articles on this topic, we will finish up with a basic Python program that calculates the ‘Levenshtein distance‘ in the next article.

Exploring NLP 06 – Minimum Edit Distance 4/4


Exploring NLP 04 – Minimum Edit Distance 2/4

In the previous article we looked into what the ‘Levenshtein distance‘ is, along with an example use case. We saw that when the word we are transforming has a small number of letters it is easy to find the minimum edit distance, but as the number of letters increases it becomes difficult to be sure that we have found the ‘minimum’. So we will be discussing a method that guarantees we end up with the true minimum edit distance.

We will take the following example.

five → four

At a glance we can see that we need a minimum of 3 operations to transform five into four (3 replace operations: ‘i’→‘o’, ‘v’→‘u’, ‘e’→‘r’). Let’s see how this new method gives that answer. First we have to make a grid as follows. The columns hold the word before transformation and the rows hold the word after transformation. The ‘ε‘ (epsilon) denotes an empty string.

ε f i v e
ε
f
o
u
r

We start from the top-left and fill the cells one by one, reading each cell from its column label (top) to its row label (left). For example, to fill the first cell we have to think ‘how many operations does it take to convert an empty string (ε) to an empty string (ε)?’ The answer is zero. So we put 0 there.

ε f i v e
ε 0
f
o
u
r

Next, to fill the second cell we have to think ‘how many operations does it take to convert an ‘f’ to an empty string (ε)?’ The answer is 1 (delete ‘f’). So we put 1 there.

ε f i v e
ε 0 1
f
o
u
r

Next, to fill the third cell we have to think ‘how many operations does it take to convert ‘fi’ to an empty string (ε)?’ The answer is 2 (delete ‘f’, delete ‘i’). So we put 2 there.

ε f i v e
ε 0 1 2
f
o
u
r

We can fill the first row in a similar manner and then we get the following.

ε f i v e
ε 0 1 2 3 4
f
o
u
r

Next, to fill the first cell in the second row we have to think ‘how many operations does it take to convert an empty string (ε) to ‘f’?’ The answer is 1 (add ‘f’). So we put 1 there.

ε f i v e
ε 0 1 2 3 4
f 1
o
u
r

Next, to fill the second cell in the second row we have to think ‘how many operations does it take to convert ‘f’ to ‘f’?’ The answer is 0. So we put 0 there.

ε f i v e
ε 0 1 2 3 4
f 1 0
o
u
r

Similarly, we can keep on filling.

ε f i v e
ε 0 1 2 3 4
f 1 0 1 2 3
o 2 1 1 x
u
r

One more example: how do we fill the above ‘x’? We have to think ‘how many operations does it take to convert ‘fiv’ to ‘fo’?’ The answer is 2 (replace ‘i’ with ‘o’, delete ‘v’). So we put 2 there.

ε f i v e
ε 0 1 2 3 4
f 1 0 1 2 3
o 2 1 1 2
u
r

Finally we end up with something like this.

ε f i v e
ε 0 1 2 3 4
f 1 0 1 2 3
o 2 1 1 2 3
u 3 2 2 2 3
r 4 3 3 3 3

The ‘Minimum Edit Distance’ (Levenshtein distance) is the value of the last (bottom-right) cell.

So we get the same answer: 3. For this example I took a small word, but when dealing with longer words this grid method is very useful for making sure we have found the minimum number of operations.
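As a rough sketch of how this reasoning translates into code (the names are my own; the proper code walkthrough comes later in this series), each cell simply takes the cheapest of the three operations:

    def edit_grid(before, after):
        # grid[i][j] answers: how many operations to convert before[:j] into after[:i]?
        grid = [[0] * (len(before) + 1) for _ in range(len(after) + 1)]
        for j in range(len(before) + 1):
            grid[0][j] = j                     # first row: j deletions reach ε
        for i in range(len(after) + 1):
            grid[i][0] = i                     # first column: i additions from ε
        for i in range(1, len(after) + 1):
            for j in range(1, len(before) + 1):
                substitute = 0 if before[j - 1] == after[i - 1] else 1
                grid[i][j] = min(grid[i][j - 1] + 1,               # delete a letter
                                 grid[i - 1][j] + 1,               # add a letter
                                 grid[i - 1][j - 1] + substitute)  # substitute (free if equal)
        return grid

    for row in edit_grid("five", "four"):
        print(row)  # the bottom-right value, 3, is the minimum edit distance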

Okay, so we now know the grid method, but it takes time, right? What if there were a cheat method, a shortcut to fill this table? There is. We will look into it in the next part, and we will finish up with the Python code for this calculation in the article after that.

Exploring NLP 05 – Minimum Edit Distance 3/4


Exploring NLP 03 – Minimum Edit Distance 1/4

Even though I was planning on having an NLTK tutorial as the 3rd article, I came across the ‘Levenshtein distance‘, also simply known as the ‘minimum edit distance‘, and I thought I would share some ideas on that first. Before getting to writing code, let’s try to see in what scenarios we need this.

Imagine a word processing tool we use, say Microsoft Word: when we type a word with incorrect spelling, it usually underlines it in red and then gives us suggestions, right? So how does it do that?

These tools use a basic technique where they calculate the similarity of our word to the most frequent similar words with correct spellings. (Of course, a tool like Microsoft Word uses much more complex algorithms.)

For example, if I type ‘supehero’, the engine will ask me something like ‘did you mean superhero?’. What it does is see that ‘supehero’ is not a valid English word, then try to find words which have similar letters, say superhero, superman, supergirl, etc. Then it calculates the Levenshtein distance to each word and returns the word with the least distance.

[Image: the spell-checker suggesting ‘superhero’]

Usually, depending on the algorithm and the use case, the tool will suggest one word or a few words; in the image it suggested one word. So let’s see how it does that.

Basically, the ‘minimum edit distance’ is the number of operations that we have to perform on a particular word in order to transform it into a different word. Okay, the first question is: ‘what are the allowed operations?’

There are 3 operations that we can choose from.

  • add a letter
  • delete a letter
  • substitute a letter

Let’s see a few examples.

Operation            Word Before   Word After   Comment
Add a letter         ello          hello        Add ‘h’
Delete a letter      heallo        hello        Delete ‘a’
Substitute a letter  helbo         hello        Replace ‘b’ with ‘l’

In each of the previous examples we can see that we are able to apply a single operation and get the desired word. So in the above cases the minimum edit distance is 1.

Let’s see another example.

rabbit → habit

The minimum edit distance is 2 in this case. How? First we do a substitute operation: ‘r’ -> ‘h’. Then we do a delete operation to remove the unnecessary ‘b’.

(One might say it can be done in three operations as well: add an ‘h’, remove the ‘r’ and remove a ‘b’. Yes, that is true; we can do it in various ways. But the Levenshtein distance is the MINIMUM number of operations, so we have to find the least number of operations that we need.)

Let’s see another example.

elephant → elegant (distance = 2)

(replace ‘p’ with ‘g’, delete ‘h’)
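In fact, the definition can be written down almost literally as a naive recursive Python sketch (my own toy code; it is exponentially slow, which is exactly why we need the better method discussed next):

    def med(a, b):
        if not a:
            return len(b)             # only additions remain
        if not b:
            return len(a)             # only deletions remain
        if a[0] == b[0]:
            return med(a[1:], b[1:])  # first letters match, no operation needed
        return 1 + min(med(a[1:], b),      # delete a letter
                       med(a, b[1:]),      # add a letter
                       med(a[1:], b[1:]))  # substitute a letter

    print(med("rabbit", "habit"))      # 2
    print(med("elephant", "elegant"))  # 2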

When we look at it, there are many ways of doing the transformation. For simple words of a few letters, finding the minimum number of operations is easy. But for longer words like the one above, how can we be sure that we have found the minimum distance, and that there is no other solution which can do it in a smaller number of steps? We will see that in the next article, since it will take a bit more explaining than this.

Exploring NLP 04 – Minimum Edit Distance 2/4

Understanding Data 01 – Why is it important?

Machine learning is not all about mathematical models and algorithms. In fact, it can be described as a combination of the following three fields.

  1. Data
  2. Feature Engineering
  3. Model

Each of these plays a vital role in developing an efficient and effective machine learning system. Let’s see why.

Imagine the model as a smart student to whom we give a book to study (in this case the book represents the data + features). If the content of the book is wrong, or if the content is right but written in a language the student can’t understand, then even though the student is smart, he will have a hard time learning.

Similarly, even if we have a mathematical model which is super smart, if our data or our features (how we represent that data to the model) are wrong, then the algorithm will most probably fail.

The same goes the other way as well: if the book is correct but the student is stupid, then he will not learn properly either. And even if we have correct data and a smart algorithm, if our representation of that data (how we feed it to the algorithm) is wrong, the system will still fail. Therefore, to have an efficient as well as an effective machine learning system, we have to pay attention to all three aspects above.

Now one might be wondering: why do we need feature engineering (whatever the hell that is)? We will discuss feature engineering in detail in the coming articles, but for now let’s see why it matters.

At the outset we might think: okay, I have data, I have the model, why not feed the data in directly?

Well, in a handful of special situations we can do that. But in real-world situations things are a bit messy. Among the things that can happen are the following (see the sketch after this list):

  1. some parts of data can be missing
  2. some parts of data might be corrupted
  3. some of the data might be duplicated
  4. etc.
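As a small illustration of handling the first three issues (the data, the column names and the use of the pandas library are all my own assumptions for this sketch):

    import pandas as pd

    # Hypothetical raw records: 'age' has a missing value and an obviously
    # corrupted one, and user 2's row appears twice.
    raw = pd.DataFrame({
        "user_id": [1, 2, 2, 3, 4],
        "age": [25, 34, 34, None, -999],
        "country": ["US", "UK", "UK", "DE", "US"],
    })

    clean = raw.drop_duplicates()          # 3. remove duplicated rows
    clean = clean.dropna(subset=["age"])   # 1. drop rows with missing values
    clean = clean[clean["age"] > 0]        # 2. filter out corrupted values
    print(clean)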

We also have to consider the amount of data. Say we are building a machine learning predictor for a YouTube-like channel, which can predict what a particular user will be interested in. In such a case we will have petabytes of data or more (trillions of records about each video clicked by users from all around the world). Such a large volume of data will include details such as, say, a user’s middle name, which is in the data but has almost nothing to do with what he will click on when he is on YouTube. Such information is useless from the perspective of the machine learning algorithm. If we feed it such data, it might try to find relationships between the videos watched and the middle name (which is a more or less stupid thing to do). And with such a huge amount of data, an algorithm might take weeks or even months to learn, even on hardware-accelerated computers.

Thus feature engineering is necessary in order to filter out the relevant parts of the data before feeding them to the algorithm. This makes the system efficient and also much more accurate.

So we will look into the aspects of data and feature engineering step by step in the coming articles.

Next on: Understanding Data 02 – Attributes and Values

Stopping the rise of Machines 02 – 3 laws of robotics

Considering the hypothetical yet all too probable doomsday scenario where the human race faces an existential threat from its own creations, intelligent machines, we started the article series “Stopping the rise of Machines” to explore the idea of a robotic Armageddon.

An intelligent machine who (which?) is hell-bent on destroying the human race would have many more convenient ways of doing so than going to all the trouble of building Terminator-like robots; but that is a discussion for another day. Say it does happen: what is there to stop them from killing us?

One of the simplest (arguably) ways would be to have a certain set of rules in the robot’s base code. It won’t be morality, it won’t be common sense, and it certainly won’t be goodness. For the robot it will be a bunch of if-else statements in the base program its entire existence is built upon, so it will have to follow them.

So if I were to create a robot today, what rules could I implement that would be simple and also prevent the doomsday scenario? We have to be careful about what we define as rules because, unlike us humans, the machine will take things extremely literally; unless otherwise specified it will only focus on the task at hand.

For example, if we set the rule

Save me from any kind of harm

This is simple enough. Yes, for us humans. But a machine will (probably) take this command to the extremes: it might try to ‘save me’ from germs and other possible threats by imprisoning me in a clean room; it might see a pedestrian about to accidentally bump into me and ‘save me from harm’ by blasting the pedestrian to kingdom come; it might predict all the stress I might get from work or from interacting with people and decide to ‘save me’ by eliminating those people.

So we see that even with a simple rule, things can go wrong. With that in mind, what are the rules we can focus on? Well, there are many, but the most famous are Isaac Asimov’s 3 laws of robotics (actually, it’s 4).

Isaac Asimov was a science fiction and science fact writer, and in his ‘I, Robot‘ book, a collection of nine stories about ‘positronic’ robots, he laid out the “Three Laws of Robotics”. (Later he added a “zeroth” law, designed to protect humanity’s interests.)

1st Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2nd Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3rd Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
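Purely as a toy illustration of that ‘bunch of if-else statements’ idea (every name here is invented; a real robot control program would look nothing like this), the priority ordering of the laws could be hard-coded like so:

    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_humanity: bool = False    # relevant to the Zeroth Law
        harms_human: bool = False       # relevant to the First Law
        ordered_by_human: bool = False  # relevant to the Second Law
        endangers_robot: bool = False   # relevant to the Third Law

    def is_permitted(action: Action) -> bool:
        if action.harms_humanity:       # Zeroth Law overrides everything
            return False
        if action.harms_human:          # First Law
            return False
        if action.ordered_by_human:     # Second Law: obey, unless a higher law blocked it
            return True
        return not action.endangers_robot  # Third Law: self-preservation comes last

    # A human order that would harm a human is refused:
    print(is_permitted(Action(ordered_by_human=True, harms_human=True)))  # False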

The question is, with these or a similar set of rules, will we be able to make sure that machines will not turn on us? Well, we have to go beyond the comparison of machines to humans; we have to approach the problem with the mindset that concepts such as ‘morality’, ‘empathy’, ‘compassion’ or even ‘anger’ are out of the question. We cannot and shouldn’t say or believe that as machines get intelligent they will become benevolent. (Come on, how many intelligent people do we know who are total jackasses?) Intelligence doesn’t make a person good or bad, so it won’t suddenly make machines ‘goody two shoes‘.

There is also this question: since the machine would be running a program, since a program has to follow its code, and since we are going to put these rules in as conditions, doesn’t that mean the machine will always have to follow them whether it wants to or not? True, but that assumes the machine is not intelligent enough to find a loophole, to bypass that specific piece of code, or to rewrite its own code. If it is intelligent enough to do so, then the approach of putting rules in the machine’s base code will not do us much good, will it?

Next: Stopping The Rise Of Machines 03 – (to be decided)


Turing Test – How Turing Described

It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either “X is A and Y is B” or “X is B and Y is A.” The interrogator is allowed to put questions to A and B thus:

C: Will X please tell me the length of his or her hair?

Now suppose X is actually A, then A must answer. It is A’s object in the game to try and cause C to make the wrong identification. His answer might therefore be

“My hair is shingled, and the longest strands are about nine inches long.”

In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as “I am the woman, don’t listen to him!” to her answers, but it will avail nothing as the man can make similar remarks.

We now ask the question,

What will happen when a machine takes the part of A in this game?

Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”


Death of Death

Artificial intelligence is not a straight road with one destination; it has branched into so many areas that it is one of the most complex subjects there is. When starting to read about the subject it is easy to get distracted by all the amazing topics and concepts, and that happens to me all the time.

While reading a book on AI I was distracted by one concept that led me to another, and then another, and so on, until finally I came across the concept of the ‘Death of Death‘. Of course it’s not a new thing; we have seen the death of death in different contexts, mostly in TV shows or films. In the TV show Supernatural they literally kill Death. But we are not talking about death as an entity; this is more along the lines of immortality.

If you have watched the fairly recent movie ‘Transcendence‘, this concept will feel very familiar.

Now, in my reading I came across an argument which states that ‘intelligence cannot be artificial‘. Whether it’s a human, an animal or a machine, if it’s intelligent then it is intelligent; there is no such thing as artificial. Maybe it’s true; maybe as humans it is our ego that stops us from accepting that there can be entities more intelligent than us. Anyway, if we give it some thought, what is actually ‘artificial’ in relation to intelligence?

I am intelligent, you are intelligent, and if there is another entity which can do everything you and I can do, then isn’t it intelligent? How does its intelligence differ from ours? What makes its intelligence artificial and ours natural if all of us have the same capabilities?

Is it because we use our flesh-and-blood brain and it uses electronic circuitry? Then doesn’t that mean it’s the body that’s artificial, not the intelligence?

We will discuss this further in a future article; I still have a lot to think about and figure out. In the meantime, let’s say that intelligence is intelligence, regardless of whether it is generated on wetware or hardware.

Now back to the topic. Technology is growing rapidly and many notable people are concerned about the technological singularity. The book ‘Rapture for the Geeks‘ scares the hell out of us with warnings of the impending doom.

But what if the singularity isn’t about a machine or program going super-intelligent; what if it’s about us being able to upload our consciousness to a computer? The concept of the ‘Death of Death‘ in technological terms rests on the idea that one day technology will be able to create two things.

A computer/machine which is complex and durable enough to hold a human consciousness forever.

And a way of extracting consciousness from our biological bodies and transferring it to this machine.

If we are able to achieve these two things, then we can strip away our dying bodies and live forever, essentially marking the death of death.

So can this be achieved? If so, would we end up in a Matrix-movie kind of world, minus the machine overlords and the incubating physical bodies? Our uploaded consciousnesses interacting with each other in a virtual world forever?

(Personally, that would suck for me. We live, we die, we live again. Having the same life forever? All the regrets, enemies and heartaches forever? No thank you.) Anyway, I don’t think we will reach the point where we can cause the ‘death of death‘. Sure, we will build machines complex enough to hold consciousness forever (at least theoretically), but we will not be able to transfer our consciousness. Why? Because it’s not a tangible thing. It’s not like monitoring and copying brainwaves; we humans (apart from the spiritually awakened) still have no clue about what or where consciousness is. Sure, we have a whole bunch of theories, but that’s all they are: theories.

So even though many think it would be cool to live forever by uploading our consciousness to machines, I don’t think the ‘Death of Death‘ will ever come to pass.


Fake it till you make it approach in AI

The name ‘Turing Test’ is known by everyone in the artificial intelligence field. It is considered one of the benchmarks for deciding a machine’s intelligence. Every year the ‘Loebner Prize‘ competition offers a prize of $100,000 and a solid 18-carat gold medal to a program that can pass the Turing Test. But this article is not here to talk about the Turing Test itself; it considers the impact the test has had on our approaches to solving the problems of intelligence.

So what is the Turing Test? (See: Original Version of Turing Test)

Put simply: there is a machine (a program) and a board of judges in separate rooms, communicating through a typed medium (typed messages are passed back and forth). If the machine can convince the board of judges that it is a human being more than 30% of the time, then it passes the test.

For the first time, in 2014, a program was able to achieve this. From BBC News:

A computer program called Eugene Goostman, which simulates a 13-year-old Ukrainian boy, is said to have passed the Turing test at an event organised by the University of Reading.

On 7 June Eugene convinced 33% of the judges at the Royal Society in London that it was human

In 2016 an MIT group made a program which has the ability to beat the Turing Test for sound.

An MIT algorithm has managed to produce sounds able to fool human listeners and beat Turing’s sound test for artificial intelligence. (source)

Artificial intelligence has broken through a sound barrier. Researchers from Massachusetts Institute of Technology have developed an AI system that “watches” a silent video clip and generates a sound so convincing that most human viewers cannot tell whether it is computer-generated. (source)

I think these are incredible achievements. Bit by bit we are gaining knowledge on building smarter machines.

When we take a step back, one could argue that working towards passing the Turing Test benchmark might have caused us to lose focus on some other aspects.

First, the Turing Test is focused on human intelligence. But as much as we like to think otherwise, we are not the only intelligent species on the planet. For example, the intelligence of insect colonies is mind-blowing. For crying out loud, we see how intelligent our pets are every day. So by focusing on the Turing Test we are, in a way, restricting the search for intelligence.

The second point is that by focusing on the Turing Test we are essentially focusing on machines which are capable of deceiving the judges. We are trying to create machines which can give the ‘illusion’ of intelligence, instead of finding ways to build machines which are truly intelligent.

This doesn’t mean that focusing on the Turing Test hasn’t yielded results. Of course it has, and so many developments. For example, the above-mentioned program which broke the Turing Test for sound has many applications.

“A robot could look at a sidewalk and instinctively know that the cement is hard and the grass is soft, and therefore know what would happen if it stepped on either of them,” he said. “Being able to predict sound is an important first step toward being able to predict the consequences of physical interactions with the world.” (source)

So, like I said in the title, ‘faking it till you make it‘ works; we just have to not lose focus on our primary goal: truly intelligent machines.

The Imitation Game – Original Version of Turing Test

When we talk about how to decide whether a machine is intelligent, the first response that comes to mind is ‘can it pass the Turing Test?’

But when we go back and look at the historical facts, Alan Turing did not specify the exact version we use today. When he was confronted with the question ‘Can machines think?‘, he believed the question was too vague and therefore proposed a game instead: ‘The Imitation Game‘.

The scenario is as follows.

There are three people in three separate rooms. A man. A woman. An interrogator (gender doesn’t matter).

None of them knows the others’ genders. Both the woman and the man can communicate only with the interrogator, and communication happens through a typed medium (no voices). Typed messages are passed between the interrogator and the participants.

The interrogator asks the participants questions, and the goal of each participant is to convince the interrogator that he/she is the woman. So the woman will answer naturally, but the man will have to act as a woman in order to convince the interrogator.

Now, what Turing proposed is this:

if we replace the man with a machine, a machine that can imitate a human well enough that, on average, the interrogator cannot recognize that he/she is talking with a machine at least 30% of the time, then people in general would be inclined to accept it as a thinking machine.

(The figure is 30% because if the interrogator guessed randomly, he would guess ‘machine’ about 33% of the time, since there are only three choices: man, woman and machine.)

There are some issues which Turing did not specify, such as whether the interrogator should be aware that one of the participants may be a machine, or whether he would/should continue to believe there are only a man and a woman. But that is a topic of discussion for another day; I just wanted to share the original version of the Turing Test.

Note: The test in Turing’s own description goes like this: Turing Test – How Turing Described

We will take a closer look at the Turing Test in: Fake it till you make it approach in AI


Stopping the rise of machines 01 – Warnings

I have a theory: humanity will someday face a deadly biological virus which will bring the Earth back into balance. Nature has always found a way to even the scales, and it is overdue now. 7.4 billion people and still rising; it’s like we have taken this Earth as our own while disregarding all the other living entities. Well, what nature decides we mostly can’t stop, so let’s not talk about that. Instead, let’s talk about another probable scenario where mankind faces one of its greatest existential threats: super-intelligent machines turned against mankind. The moment we can’t control our own creation, which is a thousandfold smarter than us and keeps increasing in intelligence exponentially. How do we fight such a mighty foe? Well, we can’t. It would be the end of the most dominant biological species on the planet.

If you are thinking ‘hey, this dude is crazy, he must have watched too many science fiction stories‘, well, fair enough. I am nobody. But that doesn’t mean my fears are invalid. Let’s check what some of the most brilliant people living today have to say.

The open letter ‘RESEARCH PRIORITIES FOR ROBUST AND BENEFICIAL ARTIFICIAL INTELLIGENCE‘, published through the ‘Future of Life Institute‘, was signed by some of the greatest minds of our generation, urging the world to think ahead and warning everyone of the probable doomsday that will come upon mankind if we are not cautious. Just take a look at the first dozen or so names on the list. These people are the giants of the AI field, and when they say we need to be worried, then we had better be worried.

[Image: the first signatories of the open letter]

When Stuart Russell, Yann LeCun, Geoffrey Hinton and Peter Norvig agree, it’s hard to throw it away as science fiction.

Stephen Hawking once said:

“A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.

You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

Elon Musk is also warning the world:

“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”

Bill Gates agrees:

“I am in the camp that is concerned about super intelligence, First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

For crying out loud, these are the people whose companies are the dominant forces in AI research, so when they say these things we have to pay attention; they know what they are talking about, as they are at the frontier of current AI advances.

Of course, research on AI isn’t going to stop. Machines are going to become smarter and smarter and we can’t stop it; somewhere, someone will continue the research. We cannot ban research on AI, because the positive things that humanity and the world as a whole have gained through AI are immense, and it continues to play a vital role in making our world a better place. So the idea is to be aware of the risks, take the necessary precautions and, if possible, not cross the points of no return.

We will discuss more in: Stopping the Rise of Machines 02 – 3 Laws of Robotics