A lot of the conversation today swings between two poles: the belief that AI will be the genie delivering new miracles in every field, and the fear that AI will act in controlling and disruptive ways.
At this point though - it seems evident that it will be neither of these, at least when operating alone.
What do I mean by this?
Before we answer that, let's understand why AI is different as an approach.
Problem solving has always been a key part of understanding logic: each problem has some inputs, a result, and a method to reach that result.
Traditional computation reaches the result by following a fixed, explicitly programmed logic - call it logic A.
In human learning, the idea is to understand the concept and find the similarities between problem A and problem B, so that the logic can be restructured into a solution that works for both A and B - and perhaps for another set of problems of a similar nature.
So it is with AI: the inputs and outputs are given, the computer decides the best approach to finding a solution, and then applies that solution when new inputs arrive.
Here comes the interesting part - in order to find a solution, the system needs to work out a method, and working out that method requires data, in the form of training. Real-world scenarios provide the data, and humans provide the confirmation that a given set of results actually answers the problem in question.
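To make that concrete, here is a minimal sketch, assuming scikit-learn is available; the shipping-eligibility problem, the feature values and the threshold are invented purely for illustration. It contrasts a hand-written rule (logic A) with a model that learns the same mapping from human-confirmed examples.

```python
# Minimal sketch, assuming scikit-learn. The "problem" (free-shipping eligibility),
# the numbers and the threshold are made up for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Traditional computation: a human writes logic A explicitly.
def qualifies_rule_based(order_value: float) -> bool:
    return order_value >= 50.0  # the rule is fixed by the programmer

# AI approach: inputs and human-confirmed outputs are given,
# and the system works out the mapping itself.
X_train = [[12.0], [48.0], [55.0], [90.0], [30.0], [75.0]]   # inputs from real-world scenarios
y_train = [0, 0, 1, 1, 0, 1]                                 # human-confirmed outputs

model = DecisionTreeClassifier()
model.fit(X_train, y_train)          # the system works out its own method from the data

print(model.predict([[60.0]]))       # apply the learned solution to a new input
```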
So in short, the intelligence is driven by the capacity of human decision-making. This is an important point: as long as that input is controlled with the right perspective, the results will always be consistent and known. They can be optimised and creative, but they won't be disruptive, unknown or surprising in any way - unless manipulated to be so.
The question is who can do that. It has to start with humans supervising the models at some point, but once trained, the models can themselves start acting in ways that fall outside the realm humans intended them to operate in. Hence there is a need to understand how to build models that can act as guards, identifying the pattern at the source of an issue; the whole supposition is that if you can control it, you can prevent it. Reversing damage in a learning model is a long process - removing the affected neurons, retraining, validating and retrofitting the model back into the intelligence chain - which is why control matters up front. Monitoring goes hand in hand with this, and with how much you want to monitor over a period of time.
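As one hedged illustration of what such a guard could look like - a sketch, not a prescribed method - the snippet below compares a deployed model's live prediction mix against a baseline captured at training time and raises a flag when the two diverge. The threshold and the toy data are assumptions for the example.

```python
# Sketch of a "guard": flag a deployed model when its live behaviour drifts away
# from the baseline recorded at training time. Threshold and data are illustrative.
from collections import Counter

def guard_check(baseline_preds, live_preds, max_shift=0.15):
    """Return True if the live prediction mix has drifted beyond max_shift."""
    base = Counter(baseline_preds)
    live = Counter(live_preds)
    for label in set(base) | set(live):
        base_rate = base[label] / max(len(baseline_preds), 1)
        live_rate = live[label] / max(len(live_preds), 1)
        if abs(base_rate - live_rate) > max_shift:
            return True   # behaviour has moved outside the expected realm
    return False

# If the guard fires, the control loop described above kicks in:
# isolate the affected part of the model, retrain, validate, retrofit.
if guard_check([1, 0, 0, 1, 0, 0], [1, 1, 1, 1, 0, 1]):
    print("flag model for review and retraining")
```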
So, in short, both human and 'AI' have to work together here, at least while this structure is being fully set up - which is a long journey in itself.
We are already seeing plenty of benefits and examples of this in the fashion industry, the retail space, healthcare and space research.
The trends are seeping in very fast - players big and small have at least one model or framework for creating learning sets: Amazon has SageMaker, Google has DeepMind and api.ai, Facebook has FAIR and wit.ai, Apple has its Core ML engine and init.ai, IBM has Watson, Intel has Nervana, Microsoft has Azure ML, and Oracle has its AI Apps along with Palerra and Crosswise.
Learning itself has also been segmented, and many learning models now sit in the supervised, unsupervised and reinforcement learning spaces.
For example, the supervised learning space includes classification and regression, while the unsupervised space includes clustering.
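A minimal sketch of those families follows, again assuming scikit-learn; the tiny datasets are made up for illustration, and reinforcement learning is left out to keep the example short.

```python
# Sketch of classification, regression and clustering, assuming scikit-learn.
# The tiny datasets are invented for illustration only.
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.cluster import KMeans

# Supervised - classification: labelled inputs, discrete outputs.
clf = LogisticRegression().fit([[1], [2], [8], [9]], [0, 0, 1, 1])
print(clf.predict([[7]]))            # predicts the class of a new input

# Supervised - regression: labelled inputs, continuous outputs.
reg = LinearRegression().fit([[1], [2], [3]], [2.0, 4.0, 6.0])
print(reg.predict([[4]]))            # roughly 8.0

# Unsupervised - clustering: no labels, structure is inferred from the data.
km = KMeans(n_clusters=2, n_init=10).fit([[1], [2], [10], [11]])
print(km.labels_)                    # two groups discovered without supervision
```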
The question is that each of these learning methodologies needs adequate methods to distinguish favourable from unfavourable learning - and that is a road which will take some time to travel.
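One possible, simplified reading of "detecting favourable from unfavourable learning" is a validation gate: hold some human-confirmed examples back, score the trained model on them, and only admit it into the intelligence chain if it clears a bar. The threshold and data below are assumptions; real systems would need far richer checks (bias, drift, safety) than a single accuracy number.

```python
# Sketch of a validation gate for "favourable" learning, assuming scikit-learn.
# The data, labels and acceptance threshold are illustrative assumptions.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X = [[i] for i in range(20)]
y = [0] * 10 + [1] * 10                       # human-confirmed labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier().fit(X_tr, y_tr)

score = accuracy_score(y_te, model.predict(X_te))
ACCEPTANCE_THRESHOLD = 0.9                    # assumed bar for "favourable" learning
print("favourable" if score >= ACCEPTANCE_THRESHOLD else "unfavourable - retrain")
```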