Let's start with deep learning, and look at Ray Kurzweil's perspective.
According to Ray Kurzweil, artificial intelligence is built around two fundamental axioms:
1) Many-layer neural nets
2) The law of accelerating returns.
In other words, you can have neural nets a) do the job for you, and b) do the job better, by taking the output from one neural net and feeding it as input into another.
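As a minimal sketch of point b) - with made-up weights and layer sizes purely for illustration - chaining two tiny nets, where the first net's output becomes the second net's input, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def make_layer(n_in, n_out):
    # Random weights stand in for a trained network in this sketch.
    return rng.standard_normal((n_in, n_out)) * 0.1

# Net A maps raw features to an intermediate representation.
net_a = [make_layer(4, 8), make_layer(8, 3)]
# Net B consumes net A's output and produces the final result.
net_b = [make_layer(3, 5), make_layer(5, 2)]

def forward(net, x):
    for w in net:
        x = relu(x @ w)
    return x

x = rng.standard_normal(4)             # a single input example
intermediate = forward(net_a, x)       # output of the first net...
result = forward(net_b, intermediate)  # ...fed as input to the second
print(result.shape)
```

In a real system each net would be trained separately on its own task; the point here is only the plumbing of feeding one net into another.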
As it currently stands, the estimated intelligence of a machine sits somewhere between simulating one insect brain and one rat brain (we are still not there yet - hard to believe, but it is true!).
Having said that, it won't be long until a machine can simulate not only a rat's brain but a single human brain - in another 10 years or so - and nearly all human brains in another 40.
This is the growth curve, based mainly on the second axiom - the law of accelerating returns.
Now let's talk about the journey here, which is a bit complicated.
Cognitive science - the study of how subjects think - has been of interest to many not just today, but since the late 1800s and early 1900s.
Two things play a key role in ascertaining cognitive behaviour and producing a correct result -
1) Context interpretation - which is derived from learning plus training.
2) Derivational behaviour - which makes decisions about the unknown based on confidence established in the known.
The second is difficult to achieve until the machine is fully capable of interpreting context with close to 98% probable correctness.
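A hypothetical sketch of how those two points could interact - every name, the similarity measure, and the threshold here are invented for illustration - is to gate derivational behaviour behind a confidence score from context interpretation:

```python
# Derivational behaviour only kicks in once context interpretation
# is confident enough (echoing the ~98% figure above).
CONFIDENCE_THRESHOLD = 0.98

def interpret_context(observation, known_contexts):
    # Toy "interpretation": pick the known context most similar to the
    # observation (Jaccard overlap) and report a confidence score.
    best, best_score = None, 0.0
    for label, prototype in known_contexts.items():
        overlap = len(observation & prototype) / len(observation | prototype)
        if overlap > best_score:
            best, best_score = label, overlap
    return best, best_score

def decide(observation, known_contexts):
    label, confidence = interpret_context(observation, known_contexts)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"act on '{label}'"    # derive behaviour from the known
    return "defer: context unclear"   # not confident enough to derive

known = {"card game": {"cards", "table", "players", "chips"}}
print(decide({"cards", "table", "players", "chips"}, known))  # act on 'card game'
print(decide({"table", "chairs"}, known))                     # defer: context unclear
```

The point of the gate is exactly the ordering described above: decisioning (point 2) rests on interpretation (point 1) being trustworthy first.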
Let's take a simple example to understand both of the above points.
Consider an image in which a group of people are playing cards in a circle: two people are smiling at each other, some are thinking, and others are confused.
If I look at the picture as a human, I might be able to gauge that the person next to one player is wearing glasses that give the card details away to the person sitting in front, which makes those two smile because they know what move to make next; the others are thinking about what move will be made, as they can't see the card, and some are looking at these people and are confused.
So a lot is happening here - and the human mind is capable of -
a) analysing the emotional quantum of each person,
b) deciding what kind of emotion is conveyed,
c) connecting the emotions to find deeper insights into the picture, like the glasses worn by the person,
d) interpreting what those glasses mean to the person sitting in front,
e) weighing the collective smiles against the single person smiling, and then
f) deriving a state out of the picture.
This is just a small example of how simple images can be interpreted correctly - an image is nothing but a glimpse of a state at one point in time.
As of today, a) and b) might be possible to some extent, but c), d), e) and f) are a whole other gamut that has yet to be explored.
An image of this nature will, at best, lead to a label like 'A group of people playing cards', or, if not at best, 'A bunch of people sitting at a round table'.
Now imagine this being a video instead of a picture, which makes it even harder to examine, because we have put time into the picture and need a vaster neural net to interpret it correctly.
What I just talked about can be summed up as a mix of context interpretation and trained behaviour analysis. The second part - derivational behaviour - is more along the lines of decisioning, for which step 1 is context interpretation. For example, a child learning to walk goes around a path, notices some objects, and before seeing an object again knows it is there and decides to change course.
What I just talked about leads to developing AI architectures that enable cognitive deep learning.
One such architecture can be based on Bayesian probabilistic models of cognition, one of the all-time topics of interest in Josh's deep learning approaches.
In simple terms, these can be broken down into the pipeline below -
Visual stream --> Learning --> Training --> Cognitive analysis --> Behaviour --> Result --> Feedback to learning.
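As a toy, purely illustrative version of the Result-to-learning feedback in that pipeline (the likelihood numbers and the feedback sequence are made up), each pass could update a single Bayesian belief - "my interpretation is right" - from the observed result:

```python
def bayes_update(prior, likelihood_hit, likelihood_miss, observed_hit):
    # Posterior P(hypothesis | evidence) via Bayes' rule for one binary
    # hypothesis: likelihood_hit = P(good result | interpretation right),
    # likelihood_miss = P(good result | interpretation wrong).
    if observed_hit:
        num = prior * likelihood_hit
        den = num + (1 - prior) * likelihood_miss
    else:
        num = prior * (1 - likelihood_hit)
        den = num + (1 - prior) * (1 - likelihood_miss)
    return num / den

belief = 0.5  # start unsure whether the interpretation is correct
feedback = [True, True, False, True]  # results observed after behaving
for hit in feedback:
    # Feedback to learning: each result re-weights the belief.
    belief = bayes_update(belief, likelihood_hit=0.9,
                          likelihood_miss=0.3, observed_hit=hit)
print(round(belief, 3))  # 0.794
```

Good results push the belief up, bad ones pull it down - a minimal stand-in for the loop closing from Result back to Learning.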
We have talked about deep learning and cognition, and discussed architectures around them - so why are we doing this?
Imagine a simple futuristic scenario where I am interested in simple things. For example: a bot that helps me with my mail by going through my messages and surfacing the ones of interest that carry a strong intent of action; or a small robot that adapts to my daily home chores and takes them over when I am not there; or a program that understands a functional need and derives the best possible code based on that need and the probability of it arising in the near future.
These are just some very simple examples of what AI can do for us, but we still have a long way to go even to get to this level.
In our next topic, we will talk about the cops - that's AI governance - and the need for a governance model, along with singularity and ethical AI.