Sunday, April 28, 2019

Can you tell me why you are slow?

Have you ever considered this question in the context of one human talking to another? You probably have at some point in your life's journey, or perhaps you will in the future, but today's discussion is not about that.

So what are we talking about? We are looking at the same question, but asked by a human of a machine, an application or an operating system.

In my earlier posts I pointed out that operating system hardware and software have not undergone any radical change in the way they interact with humans, despite the technologies available today. We will talk about 2 topics -

1. Conversational error detection.
2. Operating system architecture (plausible top-level components).

Imagine you are starting your day with a cup of coffee, fully awake and in top form to get through your pile of work at an accelerated pace, and suddenly you get slowed down because the system is not responding for some reason. Your whole focus now shifts to remediation, and you start closing screens and windows to get it working faster. Imagine if you didn't have to do that and could simply ask the system, 'Could you tell me why you are so slow?' The system would analyse itself and respond with the top 3 reasons for its slowness, along with a corresponding remediation action for each, to which you could reply, 'Okay, let's try #1 or #2 or #3.' Wouldn't that be lovely?

So let's see what is needed to reach the above stage:

  • As of today, most investigative activity is done by humans, who capture data and apply context to it in order to connect flows that make logical sense.
  • Context comes later in the game; first comes data. The data flowing from one application to another, and through the operating system, should leave the right footprint so that it can be investigated.
  • A change in perspective needs to happen. Currently the data footprint is created so that humans can interpret it, so additional logging and accurate labelling are needed, both in the core system and in the applications it supports.
  • Once this is complete, data can be correlated within a given context: the system should be able to fetch the context in question, apply the labelling and correlate the data. For example, 'why is the system slow?' is one aspect; 'why is this application slow?' is another - the two have different contexts for correlation.
  • Once the correlation is complete, an algorithm has to decide what is causal for the given event and forecast the outcomes of the possible actions.
  • This can be done by applying learning algorithms within the operating system to produce the best forecast.
  • This in turn provides the result: the reasons for slowness and a plan of action. A rough sketch of this pipeline is given after this list.
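
To make the steps above concrete, here is a minimal sketch of such a pipeline in TypeScript. Everything in it - the event shape, the label names, the remediation table - is a hypothetical illustration of the idea, not an existing OS interface, and the scoring step stands in for what would really be a trained model.

```typescript
// A minimal sketch of the "why are you slow?" pipeline described above.
// All type and function names here are illustrative, not a real OS API.

interface LabelledEvent {
  timestamp: number;
  source: string;   // e.g. "kernel.scheduler", "app.browser"
  context: string;  // e.g. "system", "app:browser"
  label: string;    // accurate machine-readable label, e.g. "cpu.contention"
  value: number;    // measured impact, e.g. milliseconds of delay
}

interface Diagnosis {
  reason: string;
  remediation: string;
  score: number;
}

// Steps 1-2: fetch the events that belong to the context in question.
function correlate(events: LabelledEvent[], context: string): LabelledEvent[] {
  return events.filter(e => e.context === context || e.context === "system");
}

// Step 3: a stand-in for the learning algorithm - here a simple aggregation
// by label, where a trained model would instead estimate causal contribution.
function rankCauses(events: LabelledEvent[]): Diagnosis[] {
  const totals = new Map<string, number>();
  for (const e of events) {
    totals.set(e.label, (totals.get(e.label) ?? 0) + e.value);
  }
  const remediations: Record<string, string> = {
    "cpu.contention": "Close or throttle the heaviest background process.",
    "memory.pressure": "Free memory by closing unused applications.",
    "disk.io.saturation": "Pause large file transfers or indexing jobs.",
  };
  return [...totals.entries()]
    .map(([label, score]) => ({
      reason: label,
      score,
      remediation: remediations[label] ?? "No remediation known.",
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3); // the top 3 reasons, as in the conversation above
}
```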


So we have the set of steps on the software side, but another challenge is the hardware and framework: current operating system hardware frameworks might not be adequately suited for AI.

Today's chips are based on the traditional concept of a central processing unit, which was the best way to design the nerve centre of a computer system. With the advancement of AI, some thought should be given to creating a separate chip for AI; this would help contain any issue arising from a compromised AI unit acting negatively. Similarly, security should be a separate chip introduced into the hardware, keeping it isolated from being attacked or hacked.

So in short, we are talking about a CPU alongside an AI chip, a security chip and a graphics interface, forming the complete underlying hardware that enables the software to support advanced NLP, machine learning and AI capabilities in an OS. That may only be an initial thought, and there is still a long way to go..
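
As a thought experiment, the decoupling could be modelled in software roughly like this. The names below (`UnitChannel`, `SecurityGate`, and so on) are invented for illustration and do not correspond to any real chip API; the point is only that units interact through a narrow boundary, so a misbehaving AI unit can be quarantined without touching the CPU.

```typescript
// A hypothetical model of the decoupled hardware units described above.

type Unit = "cpu" | "ai" | "security" | "graphics";

interface WorkRequest {
  target: Unit;
  payload: string;
}

interface WorkResult {
  ok: boolean;
  detail: string;
}

// The only way units talk to each other is through this narrow boundary.
interface UnitChannel {
  submit(request: WorkRequest): Promise<WorkResult>;
}

class SecurityGate implements UnitChannel {
  constructor(
    private inner: UnitChannel,
    private quarantined: Set<Unit> = new Set(),
  ) {}

  // e.g. isolate a compromised AI unit without affecting the CPU
  quarantine(unit: Unit): void {
    this.quarantined.add(unit);
  }

  async submit(request: WorkRequest): Promise<WorkResult> {
    if (this.quarantined.has(request.target)) {
      return { ok: false, detail: `${request.target} unit is quarantined` };
    }
    return this.inner.submit(request);
  }
}
```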



Tuesday, April 16, 2019

Some numbers & ways an organisation can pace towards ingraining intelligence into its fabric..

To begin with, there is a lot of talk about the importance of thinking in the direction of AI adoption. Nearly all the major technology disruptions in progress - be it intelligence gathered via big data, the internet of things, blockchain, cloud computing or flow-based pattern recognition systems - describe their advancement as a path towards collaborating in an artificially intelligent environment.

The question is why, so let's talk about some numbers here..


As per surveys by prominent research firms:

  • 47% of firms are talking about adopting an AI-focussed roadmap, starting with embedding at least one AI capability in their business processes.
  • Of these, 20% are using AI in a core part of their business processes.
  • 30% are looking at piloting AI in one way or another over the course of the next year or so.
  • Current AI spending holds at around one tenth of the budget for 58% of the firms adopting AI, but this is expected to grow to 71% over the next 5 years.
  • Most firms using AI apply it in marketing and sales (52%).
  • Value generation from AI has been most prominent in manufacturing and risk analysis, with 41% reporting significant value, while 37% in marketing and sales report moderate value.
  • The primary areas of inception are robotic process automation, NLP and machine learning.

With all these numbers, the question is how to start progressing in the direction of adoption.


The first approach should be to think differently and move away from a siloed, function-centric mindset towards an integrated, process-centric mindset. In order to do that:

  • Establish data points across systems to provide a more streamlined flow of data across business units.
  • Establish a more ingrained process-centric view that breaks the approach down into adoption areas, based on adoption strategies for the short and long term.
  • De-prioritise functions in favour of processes.
  • Think about purchasing AI services.
  • Processing and data management are key, so approaches that allow for enhanced processing and storage management of data should be a priority.
  • Training data is key, and initial training will need specialists' supervision.
  • Analytics forms a key data entry point in any AI process adoption - so get your analytics right!

Although the adoption process is a long journey, the factors above might give a start on that path..

References & further reading - McKinsey reports, Forbes & O'Reilly articles on AI adoption.

Saturday, April 13, 2019

Conversational interfaces - key points while architecting & updating versions..

Conversations are an important topic when it comes to designing conversational interfaces. The key challenges here are -

1. Each platform has its own framework to handle conversations.
2. Being an emerging area of research, platforms upgrade often, and in ways that can deter product teams from embracing them, slowing down the pace of adoption.

Although there might be other factors depending on the domain or technology used, the two above are the most common.

In this post, using Dialogflow as the conversational platform, I am going to cover some techniques that might help streamline this change process.

Each conversation architecture has some key components along the conversation journey -

1. Outgoing message interface - the conversation prompt; modular input across a multi-modal spectrum, including connected devices.
2. Conversation closing interface
3. Conversation contexts
4. Locations interface (this can include geographical & physical address)
5. Storage interface (or caching)
6. Service invocation interface
7. Intent handler interface
8. Account linking & transactional interface
9. Permissions interface

The above interfaces form the key architectural units. Implementations can change, but the essential point is that even if a platform goes through an upgrade, your application framework and logic remain decoupled, with no knock-on impact. A rough sketch of this decoupling follows.
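
As a sketch of the idea, assuming the actions-on-google Node.js v2 client: the application talks only to an interface it owns, and the platform-specific details live in one adapter. The `ResponseBuilder` and `DialogflowV2Builder` names below are invented for illustration; only the library calls inside the adapter are real.

```typescript
import { DialogflowConversation, Suggestions } from 'actions-on-google';

// Your application talks only to this interface...
interface ResponseBuilder {
  say(text: string): void;
  suggest(...options: string[]): void;
  end(text: string): void;
}

// ...and the platform-specific details live in one adapter, so a v1 -> v2
// upgrade (tell -> close, suggestion helpers -> Suggestions class, etc.)
// only touches this class.
class DialogflowV2Builder implements ResponseBuilder {
  constructor(private conv: DialogflowConversation) {}

  say(text: string): void {
    this.conv.ask(text);
  }

  suggest(...options: string[]): void {
    this.conv.ask(new Suggestions(options));
  }

  end(text: string): void {
    this.conv.close(text);
  }
}

// Application logic stays platform-agnostic.
function handleSlowSystemIntent(response: ResponseBuilder): void {
  response.say('The top reason for slowness is background indexing.');
  response.suggest('Pause indexing', 'Show details');
}
```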

There are multiple articles that cover migration strategies for conversational platforms, for example moving from Dialogflow v1 to Dialogflow v2.

For example: switching from tell to close, creating a suggestions or permissions object via an available class rather than an assisted function, or using promises to handle backend service calls.
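
In code, those migration points look roughly as follows. This is a sketch rather than a verbatim migration guide: the intent names and the backend URL are placeholders, and it assumes the actions-on-google Node.js v2 client and a runtime with a global fetch.

```typescript
import { dialogflow, Suggestions, Permission } from 'actions-on-google';

const app = dialogflow();

app.intent('system.diagnosis', (conv) => {
  // Promises: returning the promise lets the library wait for the backend call,
  // instead of the callback style used with v1.
  return fetch('https://example.com/diagnosis') // placeholder endpoint
    .then((res) => res.json())
    .then((data: { reason: string }) => {
      conv.ask(`The top reason for slowness is ${data.reason}.`);
      conv.ask(new Suggestions('Fix it', 'More detail')); // class instead of a helper
    });
});

app.intent('system.locate', (conv) => {
  // Permissions via the Permission class (v2) rather than an assisted function (v1).
  conv.ask(new Permission({
    context: 'To check local network conditions',
    permissions: 'DEVICE_PRECISE_LOCATION',
  }));
});

app.intent('conversation.end', (conv) => {
  conv.close('Glad I could help. Goodbye!'); // tell() in v1 becomes close() in v2
});
```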

A good article that covers this in the context of Google is given below for reference -
https://medium.com/google-developer-experts/migration-points-for-upgrading-to-actions-on-google-nodejs-version-2-4640648ab8b5

Apart from this, Google has a good deal of documentation to support the migration (like the guide here - https://dialogflow.com/docs/reference/v1-v2-migration-guide), but application development should be completely decoupled from the fact that any third-party API can change and might not be backward compatible.

I expect a common framework to come into the picture in the near future that establishes conventions for conversational interfaces, but the core areas above are the fundamentals to start with.