Sunday, March 31, 2019

The concept of machines as assistants... should we fear it or embrace it?

A lot of conversation today is driven by the idea that AI will be the genie providing new miracles in every field; then there is the opposite thought, that AI will act in controlling and disruptive ways.

At this point, though, it might be evident that it will be neither of these, at least when operating alone.

What do I mean by this?

Before we answer that let's understand why AI is different - as an approach.

Problem solving has always been a key part of understanding logic: each problem has some inputs, a result, and a method to reach that result.

Traditional computation provides a means to reach a result by following a predefined logic, call it logic A.

In human learning, the idea is to understand the concept and find the similarities between problem A and problem B, so that the logic can be generalised into a solution that works for both problems A and B, and perhaps for another set of problems of a similar nature.

So is the case with AI: the inputs and outputs are present, the computer decides the best approach for finding a solution, and then applies that solution when new inputs are presented.

Well, here comes the interesting part: in order to find a solution, the system needs to work out an approach, which in turn needs data, in the form of training, to get to that solution. Real-world scenarios provide the data, and humans provide the confirmation that a given set of results matches the problem in question.
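The loop described above can be sketched in a few lines: real-world data comes in, a human supplies the confirmed label, and the model adjusts itself. A minimal sketch follows; the toy data points, labels, and learning rate are my own illustrative assumptions, not anything from a real system.

```python
# Minimal sketch: a perceptron whose updates are driven entirely by
# human-confirmed labels (the 'err' term below is zero whenever the
# model already agrees with the human).

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights for a 2-feature linear classifier from confirmed labels."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = y - pred                 # human confirmation drives the update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

# "Real world" inputs paired with human-confirmed outputs (toy data):
samples = [(1, 1), (2, 2), (6, 5), (7, 8)]
labels  = [0, 0, 1, 1]

w, b = train_perceptron(samples, labels)
print(predict(w, b, (1, 2)))   # 0: near the first confirmed group
print(predict(w, b, (7, 7)))   # 1: near the second confirmed group
```

The point of the sketch is simply that nothing here is learned without the human-supplied labels; change the labels and the "intelligence" changes with them.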

So, in short, the intelligence is driven by the capacity of human decisioning. This is an important point: as long as this is controlled with the right perspective, the results will always be consistent and known. They can be optimised and creative, but they won't be disruptive, unknown or surprising in any way, unless manipulated to be so.

The question is who can do that. It has to start with humans supervising the models at some point, but once trained, the models can themselves start acting in ways that might not be in line with human operating realms. Hence the need to understand how to build models that can act as guards, identifying the pattern or the source of an issue; the whole supposition is that if you can control it, you can prevent it. Damage-control reversal in learning models is a long process, hence the need for control: removal of affected neurons, retraining, validation, and retrofitting back into the intelligence chain. Monitoring, again, goes hand in hand with how much you want to monitor over a period of time.
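One simple form such a "guard" could take is a monitor that checks whether a model's outputs stay inside a human-defined operating envelope, and flags the model for retraining once too many recent outputs breach it. Here is a minimal sketch; the envelope bounds, window size, and breach threshold are illustrative assumptions.

```python
# A guard model in the loosest sense: it does not fix the underlying
# model, it only detects when outputs drift outside the human-defined
# realm so that the retrain/validate/retrofit cycle can be triggered.

from collections import deque

class OutputGuard:
    def __init__(self, low, high, window=100, max_breach_rate=0.05):
        self.low, self.high = low, high          # human-defined operating realm
        self.recent = deque(maxlen=window)       # sliding window of checks
        self.max_breach_rate = max_breach_rate

    def check(self, model_output):
        """Record one output; return True while the model stays in bounds."""
        self.recent.append(self.low <= model_output <= self.high)
        breaches = self.recent.count(False)
        return (breaches / len(self.recent)) <= self.max_breach_rate

guard = OutputGuard(low=0.0, high=1.0, window=10, max_breach_rate=0.2)
ok = [guard.check(x) for x in [0.4, 0.6, 1.5, 0.5, 2.0, 0.3, 9.9]]
print(ok[-1])   # False: 3 of the 7 recent outputs breached the envelope
```

The key design choice is that the envelope is set by humans, not learned, which matches the supposition above: control comes first, prevention follows.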

So, in short, both humans and AI have to work together here, at least while the structure is being set up completely, which will take a long time in itself.

Currently we see a lot of benefits and examples of this in the fashion industry, the retail space, healthcare and space research.

The trends are seeping in very fast. Players big and small have at least got a model or framework to create learning sets: Amazon has SageMaker, Google has DeepMind and api.ai, Facebook has FAIR and Wit.ai, Apple has Core ML (and acquired Init.ai), IBM has Watson, Intel has Nervana, Microsoft has Azure ML, and Oracle has its AI Apps along with acquisitions such as Palerra and Crosswise.

The learning itself has also been segmented, and a lot of learning models are evident in the supervised, unsupervised and reinforcement learning spaces.

For example, in the supervised space there are classification and regression; in the unsupervised space there is clustering.
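The difference between the two families is easy to show in plain Python: classification leans on human-provided labels, while clustering discovers groups from the data alone. A toy sketch follows; the data points, the labels, and the two-cluster assumption are mine for illustration.

```python
# Supervised vs unsupervised, side by side, with no external libraries.

def classify(point, labeled):
    """Supervised: 1-nearest-neighbour using human-provided labels."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled, key=lambda item: dist(point, item[0]))[1]

def cluster(points, k=2, rounds=10):
    """Unsupervised: a bare-bones k-means; no labels involved anywhere."""
    centroids = points[:k]
    for _ in range(rounds):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((x - y) ** 2
                                      for x, y in zip(p, centroids[c])))
            groups[i].append(p)
        centroids = [
            tuple(sum(xs) / len(xs) for xs in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return groups

labeled = [((1, 1), "small"), ((9, 9), "large")]
print(classify((2, 2), labeled))    # "small": nearest labelled example wins

points = [(1, 1), (1, 2), (8, 8), (9, 9)]
print(cluster(points))              # two groups found without any labels
```

Regression follows the same supervised pattern as classification, only the confirmed output is a number rather than a label.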

The question is that each of these learning methodologies needs adequate methods to distinguish favourable from unfavourable learning... and that's a road that will take some time to travel.


Sunday, March 24, 2019

Understanding the core function of any product...an opportunity for machines & humans to work together!

It's been a long time now that companies have been developing products and services frequently consumed by a wide gamut of the population.

Some of these firms are leaders in their field of innovation; they set the pace for others to follow by breaking technological myths and setting up new dimensions and perspectives for the general consumer using the product.

It becomes crucial for these firms to make sure that the critical, or core, function of the product still works without any issues; it needs to be completely fail-safe.

However, it has been seen in the past few days that a critical core feature hasn't lived up to expectations. There can be various reasons for this; some of them might be as follows:

1. Core feature identification was flawed or skewed.
2. The fail scenarios were not properly evaluated.
3. Third-party issues led the critical feature to stop responding, or to respond erratically (although this is a virtual scenario, and I will explain why below).

What am I talking about here?
Let's take an example to understand. Say a popular mobile-phone manufacturer makes an awesome mobile phone: tech-trendy, quality-proof, secure. Now the manufacturer replaces a function that was originally mechanically controlled with a digital substitute, say, touch or haptic feedback.
Sometimes firms go further and take bigger risks, like eliminating the mechanically controlled control point completely. This is a perfect scenario for innovation, but the major question here is:

a) Is that mechanically controlled feature a core functionality? How do we answer this? The best way is to invert the question: if the feature were removed completely, could the device still function? If the answer is no, then the feature is core, and without question it 'has to work always'.
 - In the above example, this might be the 'home' button on the mobile device.
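The inversion test above can be written out as a tiny helper, which makes the rule unambiguous. The feature names and the yes/no answers below are hypothetical, purely for illustration.

```python
# The inversion test: a feature is core exactly when the device cannot
# function without it.

def is_core_feature(device_functions_without_it: bool) -> bool:
    """Core exactly when the device cannot function without the feature."""
    return not device_functions_without_it

# Hypothetical answers to "can the device function without it?":
functions_without = {"home button": False, "wallpaper picker": True}
core = [name for name, ok in functions_without.items() if is_core_feature(ok)]
print(core)   # ['home button'] : the one that 'has to work always'
```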

So we have answered the first question (although I am sure there will be well-educated minds challenging this basic response to a common man's question and, more hilariously, justifying the challenge too). Let's move on.

So what happens next? This core feature should always work. In both cases of the upgrade, 1) changing the technology from mechanical to digital and 2) removing the button completely, there should be enough scenarios to evaluate the failure possibilities of each. There should also be third-party integration evaluation, so it is confirmed that the button works all the time.

So here testing plays a major role, and the firm's focus should shift to active testing, not just using automated methods but engaging a vast number of scenarios. The question is, how do we do that?

This brings an interesting concept into focus: experience feeds in a lot of test cases, as long as both 1) and 2) have had sufficient time in existence and have accumulated unique usage patterns.

Then there is testing for future products that are in development but will launch within a year of the feature switch. The concept that helps here is letting humans and computers work together on this part of the product-cycle validation. The possibilities can be newfound: using an AI engine to evaluate scenarios, mix and match them, and test them thoroughly; or having a third-party dumb component, created via AI software, simulate a test that switches the feature into its fallback mode and checks its fallback coherence in that scenario.
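The "mix and match" idea above can be sketched quite simply: enumerate combinations of conditions that an engine might explore, and check that in every combination the feature either responds or drops into its fallback. The condition sets and the two simulated responders below are my assumptions, standing in for a real feature and its fallback path.

```python
# Sketch: exhaustively mix & match scenario conditions and verify that
# the feature-or-fallback pair covers every combination.

import itertools

inputs   = ["tap", "long-press", "voice"]
states   = ["idle", "navigating", "on-call"]
partners = ["none", "carplay", "bluetooth"]   # third-party integrations

def feature_responds(inp, state, partner):
    """Stand-in for the real feature: pretend voice fails while navigating."""
    return not (inp == "voice" and state == "navigating")

def fallback_responds(inp, state, partner):
    """Stand-in fallback path: always available in this simulation."""
    return True

failures = []
for combo in itertools.product(inputs, states, partners):
    if not (feature_responds(*combo) or fallback_responds(*combo)):
        failures.append(combo)

print(len(list(itertools.product(inputs, states, partners))))  # 27 scenarios
print(failures)   # []: every scenario is covered by feature or fallback
```

In practice the scenario sets would be far larger and the "AI engine" would prune or prioritise combinations rather than brute-force them, but the coverage question being asked is the same.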

Third-party issues are the ones where nobody takes the blame. Unfortunately, there is no such thing as a third-party issue: if a product is owned by a firm, it is not owned partially, and every third-party dependency that is core to functionality should have a fallback built in. So the third scenario above is basically a virtual one.

Well, why so much hustle? Because it seems that one fine day an unfortunate customer, who was using the mobile device (via CarPlay) with so much trust in the company making it that he or she spent their savings to get it, couldn't activate the home button while driving to help with navigation. With voice also failing to capture any command, they had no option but to stop and restart the device, which might have cost them an accident, or worse.

Still, many would disagree that this is a core feature, but the essence of the story is that when it comes to core feature identification, more thought needs to be given, especially given current product standards.

In case anyone is interested and might not have guessed: the product in the above example was an iPhone with the latest software version installed (but that's not as important as the detail above is...).