Monday, December 30, 2019

Customer experience... the key differentiator of today's business.

A lot of times we have this answer available to us, taught from the beginning of our technical product journey: 'customer experience should be great', 'the customer is key', and so on and so forth. Sometimes the words are so deeply imbibed in us that the essence of the focus drifts away.

Let's examine customer experience in today's world through a real-world example to answer this simple question.

What is customer experience to me as a consumer? Let's take the example of company A and company B.

I purchased some goods over the internet, on a website which allowed me to get them delivered to my address.

Is delivering quickly during the holidays a part of customer experience? Yeah, maybe, but unfortunately my package, though marked delivered on time, never reached me.

Now anyone would expect that if the package has n number of ways to let a customer know it is being tracked from the point of fulfilment to the point of delivery, that would be enough to give a good experience to the user. Right, but everyone does that today. What about the little things?

What are those little things? A package not delivered but tracked perfectly - is that a little thing? Maybe. How do we answer this question? Ask: what would a customer do if the package says delivered but is still not there? Would they call the company, email, chat, or talk to their voice assistant device? Maybe. In fact, during the holiday season I might not have the luxury of time; I might be moving, or I might be close to a shop where I could just buy the item. So how would you change the experience of a customer who ordered a product that didn't get delivered, when the person was on the move and had planned precisely around the delivery? You are now very close to completely spoiling the customer experience, and your advanced tracking and notification tools only get you there quicker.

Aah - so, not seeing my package but having a notification of delivery, I go to the site and try to give feedback on the delivery, typing in my issue. Again I think I am wasting my time here - who will ever see this?

But wait - this company had already thought of something which could turn my experience around. What was it? Any guesses? Two things, simple stuff:
a) attach a photo of the delivery along with the packet
b) provide an educated choice list of responses for feedback, instead of making me type the details - and yes, it knew the choices perfectly.

What did this do? a) got me looking at the picture and saying 'hey, that packet was delivered to the door in front of my house', so I got my package on time (delivery on time: ticked).
b) gave me a mechanism to let the company know what went wrong, so that it can improve delivery and, as a result, CSAT.

Well, this was the simple thing here. Connecting it to a voice-enabled device that told me I have a delivery picture, or gave me an even easier way to leave feedback, would have been a plus, but job well done.

Now let's take another example, of company B. Company B is the company whose product I received in the package via company A's fulfilment services.

While using the product, I see company B launched a new feature on the device it sells: you can type quicker by just moving your fingers over the keys, no need to press them. Impressive - though not so much, as this was an idea copied from other competitors and adopted late.

I like company B's products as they are elegant and secure, so I assumed this feature was done perfectly. Gradually I realised swipe typing is a different animal altogether, as the magnification feature which would have helped me do the same quickly had been removed from the company's feature set because of cost cutting and performance. Now, as a customer and layman user, my experience is already a bit on the negative side for this company:

a) overlooked a key feature which was a differentiator
b) copied, without any improvement, a feature which was already present in other brands.

completely losing both the usability and the experience factor.

This company can do all the tracking, analytics or feature additions it wants, but from a customer perspective it has already lost its charm, and it will keep losing more if it now starts to provide explanations as to why a) and b) were good decisions.

There is common sense and ingenuity, which is only a difference in thinking, to preserve a brand's individuality. Here the approach was blind automation and collection of data, whereas in the other example, company A made insight-based corrections to advance customer experience.

I think you have guessed by now, but if you are still wondering and want to know the company names (which are irrelevant to me next to the facts and the learning about the differentiators here): company A was the top worldwide retail e-commerce giant, and company B was the world's topmost technology company (in the technology products marketplace, known for quality products).

Wednesday, December 11, 2019

Technical beats in the rhythm…

Let's talk a little bit about the harmony of the IT systems of the current era and the kind of beats which are in action to make the best rhythm, positioning themselves for the future.

APIs:
Almost all established organizations which have been in business for anywhere from a couple of years to a couple of decades are focusing on providing access points for their service consumption, and are using cloud-based hosting to provide immediate connectivity to any small-to-large business partner or end user aiming to use them. Opportunity and pricing vary by consumer, but in the day of personalized apps the prize goes to the early adopters, and so do the returns for organizations providing a 'better' service (secure, scalable, extensible, along with best in class, elegant, transformational, transparent and reusable).

The total estimated return on APIs alone stands at $1 trillion in economic profit globally, with the number of available APIs estimated to triple within the next 12 months.

Adoption stands at 55% as of today.

Security:
Starting from manufacturing separate chips for security to providing security over the cloud via a dedicated installation, every effort currently treats security as the key driver for any commodity available in the digital space.

With a 35x rate of growth over 13 years, total spend on security by 2021 will surpass $1 trillion, with a net $46-50 billion of growth in the next 3 years.

With that kind of projection, the launch of products like Nitro is without doubt a step in the right direction.

Cloud:
Everyone has been talking about the cloud for a long time now. Transitions start with analyzing the company assets which can be ported over, and trends suggest the right security, compliance and adoption models help form the backbone of a well-defined cloud stack for a company.

Almost every substantial firm has a virtual private cloud stack with governance in place.

17.5 percent growth, $214.3 billion in size, paced at around 3 times the growth of overall IT services.

More companies will move over to the cloud by 2022, with $331B towards profitability from cloud adoption alone, according to the latest reports by Gartner et al.
Plans are in place to gradually move mainframe/legacy application stacks over to the cloud.

Database:
400 zettabytes of data for IoT alone in a year; this is just a speck of the amount of data involved in the future. That is data to be stored, retrieved and operated on, and access should be quick, with lightning-fast retrieval for computational needs.

The current popularity and consumption of SQL/NoSQL databases stands as below:
SQL – 60%
NoSQL – 39.5%

MySQL, PostgreSQL & MongoDB together hold 60-70% of the popularity share.

Redis, Cassandra & Oracle come next, with Redis the highest at close to 8%.

Sequential databases with vendor lock-in are a no-no; the most optimal combinations are those with SQL + NoSQL databases, contributing to 60-70% of the architectures which are leading future trends.

OS:
Windows to Linux – 80% – a transition to a stable and more optimal model with less volatility on licensing modifications.

Strange but true, a transition is happening in this space as well. Most of the readily available small chips are using the OS of choice – Linux – and more platforms and services, offline and on the cloud, are using operating systems based on Linux.

SaaS:
Most software providers are using one brand of these services – including giants like Salesforce, Splunk, etc.
Elaborate dashboards and the capability to establish and deal with scenarios form the backbone of the SaaS-enabled forecasting providers.

Software-as-a-service revenue is expected to almost double by 2022 from what it was 4 years back, hitting a forecast of $145.7 billion worldwide.

System Integrators:
System integrators specialize in integration across key areas: Banking, Transportation, Telecommunication, Healthcare, Retail, etc.

This set of providers is expected to see a 3.2% increase in spending, totaling $3.8 billion in 2019–2020.

Data lakes:
With the advent of big data and the need for large amounts of data to be stored and retrieved, there is an upcoming trend towards the data lake – which is, quite literally, a lake of data with streams coming in from various data sources, storing all the data so it is easily accessible and available for insights. This can be compared to a data mart, which is a subset of that data, like a packaged bottle of water from the lake.

Nearly all established organizations have taken a step towards establishing a data lake which holds the data at the leaf level in an untransformed or nearly transformed state.

Again, with an expected growth rate of 35% annually, the data lake market in the US alone is expected to reach $37,901.32 million in spending, so investing at this point is beneficial over the long term – 'start early'.

Analytics:
Another key area of adoption is analytics, wherein each firm is focusing on gaining more insights by diversifying its analytical capabilities and making more information available to derive key insights.

Business analytics revenue is forecast to reach around $260 billion by 2022, with an 11-13% increase year over year. This means better revenues with optimal forecasts for companies – retail at 13.5% CAGR and banking at 13.2% CAGR are projected from business data analytics alone.

So again, some good sense to invest in data analytics.

This is just a glimpse – it doesn't currently cover the other key buzzwords like machine learning, AI or blockchain, but the above can be the drivers leading to those areas, which are expected to be major revenue generators over the next few years.


Sunday, November 10, 2019

Intelligent Tools – how a little AI could help!


My past weekend started with what seemed to be a burly hit to my information and privacy, which appeared to have been compromised. The more amusing part was the realisation that the two firms involved were probably the largest in the global information space and had the best mechanisms in place for data security.

Let's delve into what happened and what we can learn from it.

The what! 

1st problem – a charge of 1k on my credit card, stating that it was put there because I bought a handheld device but didn't trade in my previous device, when I accurately remembered handing it to an employee of the firm's official outlet.

2nd problem – receiving an email that somebody had modified my online account information: precisely, my email address was changed and my phone number was modified, the only issue being that the account probably contained my payment card information. When I tried to log into the account, I wasn't able to, as the site said the account did not exist; the email address mentioned in the change notification - the one now used to log in - was not mine, and I had no way of using a backup phone to receive the reset instructions, as the phone number had been changed as well.

I tried to deal with the above 2 issues sequentially, measuring the potential impact.

The how! - 1st problem - I rushed over to the store which had placed the charge on my account and subsequently called my bank. Since the charges were already on the account they couldn't be reversed, so I started querying the store staff and asked them to investigate what had happened. The staff was co-operative and asked me for a bunch of information, which included the order details and the email which said that my trade-in had never reached the store and had to be cancelled. It took some time for them to locate the order, and after about 30 minutes of waiting, someone asked me for the details of the phone I had turned in. The funny part about all this was that the previous phone had been bought from the same store and my online account should have had the history, but anyway I was able to do some searching. Finally, I was asked for the IMEI number of the phone, which was a hard thing to find, but I got lucky by looking at my old emails and pulling out the IMEI from one of them. Then the wait started again; this time it was around 45 minutes, though I did receive an update from the staff after 25 minutes stating they were having trouble searching their database and figuring out what had gone wrong, and that I might need to wait some more. No breakfast, and it was already lunch time, so the wait was wearing on me gradually, but I was hopeful.
At last, after about an hour and a half, I got the notification from the staff that my phone was not considered a trade-in as it was an upgrade, and I had been charged the pending amount on that phone; a concession was given towards the cost and no further charges were to be applied on the old phone or the new phone, which seemed to make sense. The only things that didn't make any sense were:
·      The long wait to get there.
·      The information was already there in the order.
·      The order didn't have sufficient instructions detailing what would be given back to me as proof once my phone was turned in.


The fun part! Intelligent tools .. 

Here's where I think an artificially intelligent assistant across the firm's entire software suite would have helped: given just the order number, it would have been able to pull up the details and ascertain the exact issue based on the sequence of events on that order using a cause-and-effect model, and the staff would have been free to attend to other customers instead of running around to get these details. Augmentation with AI is a crucial step towards helping any employee get work done fast, not replacing them; it matters because it saves me time as a customer, it saves the staff time, and it improves the process by feeding recommendations back to the company on what to include in its information section for customers. Although everything comes under the banner of one firm, many times a lot of partners are involved - layers which make it difficult to get information fast. This kind of assistant can query and report quickly, bypassing the need for a handshake turnaround every time.

The how! - 2nd problem – 
A typical hacker pattern is to attack in a way that the compromise has little to no chance of being reversed by the affected user; the buy-out is always time, which gives the hacker room to exploit the information in whatever ways cause maximum damage. That's what was running through my mind as I saw that I could not access my account and that whatever I provided to recover it would not be sufficient - if the phone number was changed, the recovery information would go directly to the hacker. Two-step verification is an important control, but artificially intelligent mechanisms can be used to exploit it quicker.

I tried the site's manual process for reporting the issue; the site asked me more questions about when I made the purchase and when I last accessed the account, which again I had to dig up and provide, but that didn't resolve the problem, so I thought I would go ahead and chat with someone who could assist.

It took me some time to explain the problem to them, as I was thinking the nature of events looked like a compromise in transit. The person assisting me asked the same set of questions which were asked in the site manual. Now, since I was going through the same process again, my mind started to think: the company sends a verification email, so if there was an event which triggered my email getting changed, I should have a verification email (important learning - never delete a verification email, it serves you well later). To my dismay, I couldn't find one. Then I checked all the emails for that service and could find just 2 of them - one which contained the fact that my email was changed and the other which was my attempt to report it. Then I checked my other email accounts and saw a few emails on one of them - finally I could figure out what had gone wrong. It seemed somebody had opened an account but used my email address, then realised and changed it. The good part - the ticket was raised, and the firm was notified.

The fun part! Intelligent tools, again - here, if the assistant were intelligent, it could have predicted the pattern and avoided the mistake which caused this event in the first place, by looking at the timeline and customer profile information and suggesting what went wrong instantaneously. Further, it could report such patterns back to the company so it could provide better means of handling them in future.

The good thing from both events was that both companies had staff who did their best to assist with the tools available to them.

(Well, if anyone is still interested in the details of the firms and hasn't guessed by now: the first one was the mobile giant, the second the software giant - both competitors ;-).)

Sunday, October 6, 2019

The world is but a set of algorithms designed to keep you engaged & spending...

Lately I have been doing a lot of thinking, and I don't really know where to place this write-up - whether it should be technical or something of general appeal - so I am putting it in both channels.

Human level of intelligence is basically understood in terms of how the average person relates to his environment and reacts to his surroundings.

Many companies these days use data to harvest a pattern of behaviour from human reactions. Let's take an example.

The web is everyone's playground, but silently, behind the scenes, there is someone watching how we play - translate that into our buying patterns, our social interaction behaviour, our reactions in non-social but emotional or logical terms. Every time we click a 'like' button or react to a post, a pattern point is added to our profile, which might make us think we are writing our own stories with the endings we want. Well, part of that is true, but part of it isn't, and it might even be scary for people who consider rapid progress essential for human growth.

How? Well, each time we click or react on the web we answer to an algorithm designed on a specific set of parameters with a clear set of outcomes on the behavioural scale; the more genuinely we react, the more clearly our behaviour gets captured. Wherever there is low probability and confusion about the established behaviour, a superimposition - a different set of situations leading to a definite behaviour - is triggered, in turn deciding very accurately what we like, dislike and probably might like.

Now you might be thinking: well, how can this map be beneficial? Before that, let's examine an algorithm that mankind has put in place unknowingly.

Let's consider the dating game. Ever wondered about the sequence of events and the essential holidays of any country throughout the year? There is one thing common to all the holidays: a perception geared towards giving you favourable conditions for finding your partner and settling down. It starts with the new year bash common to all, then comes Valentine's Day, then summer - the best time to experiment - goes into Halloween and Thanksgiving to get introduced to others in the family, and on to Christmas, maybe a time to settle down. Each region of the world has holidays with a similar cultural impact, so it all transforms into an algorithm: put in the right variables at the right time and you get the intended result.

Now let's come back to the patterns we were looking at for deciding a person's behaviour. At the end of the day it's all business; the difference is, how can we get anyone to buy a ten-dollar ring at a thousand dollars? If someone can know your behavioural reaction towards a product, they can gauge the pattern you prefer and in turn set the price; the challenge is keeping us satisfied with the pattern we opt for, and that's where intelligence comes into the picture. AI captures the results of your behaviour and feeds them into a different model, so when someone is selling you shoes one day and clothes another day, they can know what kind of clothes you might want to go for. This is not the usual 'you bought something, so you get another one of the same sort' - no - this is not capturing your product response, but essentially going to the root of the perception which decides your taste.

Now, what's the disadvantage here, which most firms might be ignoring? Call it the 'Alice in Wonderland paradox': if Alice is in Wonderland all the time, she might start getting bored of the concept of Wonderland. That's critical to understand, as that is the point where the balance between what you would have liked and what you might like has been lost. In other words, the common man has just entered a confused decision state - either that, or they are enjoying the wonderland that has been created, unless and until the person has the reasoning to question the wonderland itself, forcing the creators to make a better wonderland.

The battle goes on and will go on... probably we will get much better products, but as decision makers and approvers we always have, right now, a far greater capability to decide and create a better choice...


Monday, August 26, 2019

Starting small with induced artificial intelligence... a real-life game approach to small assistant-level tasks.

To start with this topic, I want to go directly into a scenario which can serve as a simple case of intelligence in a program. We will talk about it, and later we will see how we can develop a program to solve that scenario.

Suppose you have a group of people - say friends, or a new team you have joined - and now you've got a discussion going on using mobile phones and group SMS. Now consider that all you have access to here are the phone numbers flashing on the screen as soon as someone types a message, and the text each person sent. In other words, you have no idea who the person typing is other than what their texts tell you.

Question is -

A) how long will it take you to identify all the members of your group correctly by their texts?
B) how long will it take you to identify at least half of the members of your group?
C) would you be contributing in any way towards the identification of the members? (You can't ask them directly 'who's this?' - most of the time you have to take a passive approach.)

The concept used above can be given any complex terminology, but essentially it's a combination of two factors -

1. behavioural prediction and recognition.
2. identifying direct pointers in members' talk (for example, somebody addressing another person by name, or someone in an authoritative position asking for a timeline-based result).

It takes some time for a human mind to identify all the people. For example, when I did this experiment with a group of 25-odd people, it took me 5 days to reach the half-identification mark, with participants communicating 12-16 hours a day and me on the passive end most of the time.

If done correctly, this concept uses phrases exchanged between two sets of people to understand their relative positions and roles; it also takes into account bursts of communication, quickness, urgency and timeliness. If this is combined with a plethora of other factors in a working environment, it can expedite the identification process exponentially, but that is not preferred, as it is a typical case of self-training.
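To make the 'direct pointer' factor concrete, here is a minimal, hypothetical sketch in plain JavaScript (the names, message shape and scoring are invented purely for illustration): it scans a group thread for messages that mention a known name and tallies which phone number tends to reply right after being addressed.

// Hypothetical message shape: { from: '+15550001', text: 'Thanks John, will do' }
const knownNames = ['John', 'Priya', 'Alex']; // names expected in the group

function guessMembers(messages) {
    const scores = {}; // scores[number][name] = count of supporting observations
    messages.forEach((msg, i) => {
        knownNames.forEach(name => {
            // If a message mentions a name, the next sender who is not the
            // author is a weak candidate for that name (they replied when addressed).
            if (msg.text.toLowerCase().includes(name.toLowerCase())) {
                const reply = messages[i + 1];
                if (reply && reply.from !== msg.from) {
                    scores[reply.from] = scores[reply.from] || {};
                    scores[reply.from][name] = (scores[reply.from][name] || 0) + 1;
                }
            }
        });
    });
    // Pick the best-scoring name per phone number.
    const guesses = {};
    Object.keys(scores).forEach(number => {
        const best = Object.entries(scores[number]).sort((a, b) => b[1] - a[1])[0];
        guesses[number] = { name: best[0], evidence: best[1] };
    });
    return guesses;
}

The behavioural side (bursts, timing, urgency) would simply add more features on top of the same score table.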

Why develop something like the above?

Think about it: any communication established with identification parameters is reliant on those parameters, with the scope of the output governed by those parameter roles. Here we are removing the roles and letting the system decide the roles based on the communication, essentially reversing the identification process. This process can be used to -

A) identify a pattern.
B) maintain that the pattern is uniform.
C) detect any changes to the pattern in due course, leading to an indication of a security flaw.

This is just one basic area where the concept can be applied; another can be new or unknown environments - the possibilities are vast.

In my next post, I will try to put an algorithm and develop a program to perform the above identification.

Well, at least with such a program I don't have to spend time guessing who I am talking to, or be impolite by asking that question in a group ;).



Let's talk about some recent events & how intelligent computing could have helped..

A lot of us might have heard the news of the earthquakes which rattled the western regions of the Americas.

When we think about it and wonder what the worst part of these events was -
    1) the fact that they occurred.
    2) the fact that they didn't give a prior notice before occurring.
    3) the fact that they might have been prevented.

In an earthquake-prone region, the occurrence of an earthquake shouldn't be surprising; what is surprising is that the prediction models have been constrained to the occurrence itself and not the simulated reality around it. Difficult to understand?

Let's take an example. You wake up, or maybe you are in the midst of doing something, and you get an 'alert' (you can call it a 'notification' - not the usual time-wasters, but something like this) titled 'seismic activity prediction', which when expanded reads something like this:

'There is a 75% probability of seismic activity measuring 6-7.5 on the Richter scale today between 2 and 3 pm EST. You might want to consider moving the following items - X, Y & Z - to a safer place to avoid losses or consequential damage costing up to 20% of your monthly earnings for the next 12 months.'

This is what current technology can do: simulate learning with prediction models to evaluate the risk of an earthquake, and then predict the asset losses which can occur when such activity happens.

How? Ask the right questions of the earth and the earth's core, and subsequent questions about changes in the climatic conditions of the atmosphere at different places along the seismic or fault zones, then establish a relation between them and perform deep learning using each model to arrive at the most probable activity.

Ask the right questions to learn a person's key items, the ones they would not want to lose. The trick here is not a direct question but a question which forms part of an answer to the direct question. If done correctly, the system should be able to predict the key items the consumer would not want to lose.

Then the only thing left to do is establish a relation between activity A and activity B, A being a seismic event and B the effect it can have on that person.
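As a very rough illustration (all numbers, field names and weights below are hypothetical), the A-to-B relation can be read as an expected-loss calculation: the probability of the seismic event multiplied by each item's chance of damage and its value, which yields the ranked 'move X, Y & Z' list from the alert above.

// Hypothetical inputs: a predicted event and the user's key items.
const event = { probability: 0.75, magnitude: 7.0 };        // activity A
const items = [
    { name: 'X', value: 1200, vulnerability: 0.8 },          // chance of damage if the event occurs
    { name: 'Y', value: 600,  vulnerability: 0.5 },
    { name: 'Z', value: 300,  vulnerability: 0.9 }
];

// Activity B: expected loss per item = P(event) * P(damage | event) * value.
const ranked = items
    .map(item => ({ ...item, expectedLoss: event.probability * item.vulnerability * item.value }))
    .sort((a, b) => b.expectedLoss - a.expectedLoss);

console.log(ranked.map(i => `${i.name}: ~$${i.expectedLoss.toFixed(0)} at risk`));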

The above appears simple when put forward, but it needs a lot of deep learning and artificial intelligence to arrive at a conclusion when estimating the potential of a factor hampering the user.

Let's take another example: a famous company took a decision one day to remove the charge light - the indicator which shows whether a laptop is charged or not - from their cords, and they also took the cords back two steps by making them non-magnetic. In short, however they want to interpret it, they lost the battle of savings vs. ingenuity (their product turning backwards from a quality-focused one to an ordinary one), which will not be easy to win back unless they do something creative and out of this world.

Would this decision have been taken by an intelligent system? It wouldn't have gone backwards, as such a system would have been able to relate to the consumption pattern accurately.

In this way an intelligent system could aid design decisions for crucial design areas and for non-crucial but significant quality areas.





Monday, July 1, 2019

Using artificial intelligence in every day routine tasks.. and the need to make it simple.

The concept of a vacuum cleaner has become 'old' and so has the concept of a fridge, and so on .. my list goes on...

What I mean to say here is simple: simplify technology. As of today, technology doesn't create enough to 'create'; it 'creates' to the acumen of the businessman selling the technology, not the 'creative' who really wants to unfold it to human advantage and make things simple.

In short, all the geniuses have slowly turned into businessmen, and minds with technology acumen have been limited to a few insane enthusiasts (this may again be my own stubborn, technologically enthusiastic thinking).

Alright, back to the topic. What do I mean by using AI in everyday routine tasks? Let's start with a vacuum cleaner. We all know the 'Roomba vacuum cleaner', but this concept is more of a waste of a smart vacuum, as currently it has no logic to recognise the space it is vacuuming, the time it is vacuuming or the efficiency factor; no analytics, no knowledge of what pressure to adjust for the surface it is vacuuming, no knowledge graph and no talking back. Hence it's but a dumb machine of the 21st century - quite depleted in terms of technology, forget about it talking to any other device.

The same kind of features apply to the fridge: the modern fridge should be able to adjust its shelves, de-shelf the products which have expired, separate them, and indicate to the user which items need to be thrown away and which need to be bought. Oh no - I am not talking about another worthless 'notification' costing me a second of my life. Somebody thought of notifications and, like crazy, everyone else started using them, so much so that with current routines people can spend half their lives checking notifications like zombies (notifications which carry no benefit to their lives). So please, no more notifications which waste my time (I am a mere mortal with counted time on earth). My fridge should be able to detect and start pushing waste to the waste bin, and this should be monitored so that if more of the waste is coming from external sources, it can balance the same. The sensors on the fridge should be capable of smell, of detecting if there is an expired product which might affect other products, and of intelligently suggesting the placement of products.

And what's happening with these so-called 'automated' lights? Come on - they're not automated, they're just extended switches controlled, again, by you. They would be automated if they could detect that you were coming back around a certain time and adjust the lights on and off without the person interfering. Same thing with heating and cooling; the system should be sensible and should be cheap, else no one benefits. Each piece of electronic equipment should be able to talk to the others and gather statistics to help ease the process. If half my time goes into controlling or giving instructions, then I am not benefiting a bit from it; instead I am the driver all the time, and technology hasn't helped at all.

So how do we make the ecosystem more useful? Next time, when designing an ecosystem, think of contexts and learning, and try not to build another app (it is really tough for me to understand how people manage so many apps) - just a single, simple interface to accomplish tasks easily.

The computer operating system is another area which hasn't improved. By now my computer should have the ability to sort stuff out for me - simply arranging items, doing an elegant backup by itself - but it's so old-fashioned that we have to prompt it at every step. 'Sort my image files' - that's it, and it should present a view to the user, sorted and unsorted, and ask us to choose (I hope you are not thinking about traditional sorting; I am talking about sorting with contexts in mind, just like I would do it myself, and I can train it if required). But no operating system does that. It still amazes me sometimes, as if we aren't really living in the 21st century... the good thing: lots of possibilities and innovation scope. It hasn't even begun yet... trust me, long way to go.


Sunday, June 2, 2019

ReactJS ... a closer look!

ReactJS - a look at the key features & a simple setup.

What is ReactJS? 

ReactJS is basically an open-source JavaScript library used for building user interfaces, specifically for single-page applications.

  • supports the concept of components
  • changes data without reloading the page
  • corresponds to the view in MVC architecture
  • can be used in conjunction with other libraries/frameworks, e.g. Angular.

Features: 


  • JSX: JavaScript syntax which allows HTML quoting inside JS.
  • React Native: native libraries which bring the React architecture to native components.
  • One-way data flow: properties flow down, actions flow up.
  • Virtual DOM: React works on top of the real DOM so that only the changed components re-render.

Simple/lightweight, easy to learn, data binding, performance, testing.

LifeCycle of ReactJS? 

3 phases - mounting, updating & unmounting

Mounting (first render) - initialisation -> componentWillMount -> render -> componentDidMount

Updating (props change/state change) - componentWillReceiveProps -> shouldComponentUpdate (yes/no) -> (if yes) componentWillUpdate -> render -> componentDidUpdate

Unmounting - componentWillUnmount
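As a quick illustration of these hooks, here is a minimal class component (a hypothetical Clock, not from the post) that touches the mounting, updating and unmounting phases listed above:

import React from 'react';

class Clock extends React.Component {
    constructor(props) {
        super(props);
        this.state = { time: new Date() };
    }

    componentDidMount() {
        // Mounting phase: start a timer once the component is in the DOM.
        this.timer = setInterval(() => this.setState({ time: new Date() }), 1000);
    }

    componentDidUpdate() {
        // Updating phase: runs after every re-render triggered by setState above.
        console.log('re-rendered at', this.state.time.toLocaleTimeString());
    }

    componentWillUnmount() {
        // Unmounting phase: clean up the timer when the component is removed.
        clearInterval(this.timer);
    }

    render() {
        return <p>Current time: {this.state.time.toLocaleTimeString()}</p>;
    }
}

export default Clock;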

Key components - 

package.json: defines the start script, build script and test scripts.
index.html: the single page of the application - a div with id 'root' holds the app component (if needed, bootstrap goes here).
src/index.js: entry point to the React code - imports ReactDOM and the React App component, and renders the React component into the 'root' id.
serviceWorker: for progressive web apps (caching calls).
App.js: class App extends Component (which comes from the React library); it has a render method (a lifecycle method) and renders JSX.
App.css: global CSS.

Installation/Startup 

Step 1: Download & install Node.js
Step 2: npx create-react-app my-app
Step 3: cd my-app, then start the app - npm start

Running a simple code - 

ReactDOM gets started up and renders the React component into the root div of the HTML; any changes get automatically reloaded, as it's a hot-reload setup.

On loading the HTML page, the script gets invoked, importing ReactDOM, which loads App.js; the React lifecycle is invoked, which in turn renders the page as a single-page app.

Any event will trigger a re-render of the modified component.
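For reference, the entry point described above typically looks something like this minimal src/index.js (React 16-era API, as generated by create-react-app):

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

// Renders the App component into the div with id 'root' in index.html.
ReactDOM.render(<App />, document.getElementById('root'));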

Reference code is available in GitHub - create-react-app.


Thursday, May 23, 2019

Electronic security - another perspective with the change in aspect of usage..

Data & transactional security is a key component in any application being exposed on the internet.

Consumer usage statistics show a complete reversal of the equation between electronic and physical means of completing a transaction in today's times, which has led to changes in the way a security compromise can happen with just a small, 'unnoticeable' glitch in the application equation.

Let's illustrate this with an example which might resonate with a modern transportation network company's app in New York lately. The incident happened when a group of riders boarded the cab and asked the driver to change the destination address mid-way, in turn getting the electronic device - in this case the phone - accessible to them; once they got hold of the phone, they changed the debit and payment details in the app to redirect to their own account, in turn receiving the cab driver's complete earnings into their account.

Now let's dissect this to understand what happened here.

Incident - primarily classified as robbery or theft; 20 years back this might have happened via attackers taking the physical money.

Ownership - well, primarily the cab driver, as he didn't set up a password for app access; if this had happened 20 years back, again the cab driver, as he didn't store the money securely (but the risk would have been lower, as only that day's money would be affected).

Impact - the cab driver, app usage and the transportation company; 20 years back this might have been either the transportation company or the insurer, in turn affecting the transportation company.

So what changed above? Ease of access - both for the cab driver & the person stealing the money.

How can this be prevented in near future? 

- Key software changes to the app which don't allow modifications to the bank account without additional authentication.
- Any such event might need more monitoring and logging.

(The blame-game theory above: if it was a hosted payment gateway, you could always put that gateway on the blame end, but in the end the company's image suffers.)

How can this be prevented in future?

- Deploy security intelligence and agents on the phone - mechanisms that learn how the device is used, and that activate additional input capture to raise the bar against transaction compromise.

For example: activating the camera or a scan on iPhone devices to capture the user's image, keypress patterns or fingerprint, or increasing authentication from 2 levels to 3.

The deep learning mechanism sits on the phone and consistently learns from user behaviour, then sets the contexts to a defined set of parameters on the usage pattern.

Whenever it detects the pattern parameters exceeding those boundaries, it starts preparing a defence against a compromise. A calculated risk level associated with the assessed boundary threshold would indicate the defence level which needs to be applied to the transaction.
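Here is a minimal sketch of that idea in plain JavaScript (the signals, weights and thresholds are hypothetical, purely for illustration): score how far the current usage drifts from the learned pattern, and require stronger authentication as the score grows.

// Hypothetical learned profile and current observation.
const profile = { usualHourRange: [6, 23], destinationChangesPerTrip: 0.2 };
const current = { hour: 2, newPayeeAdded: true, destinationChangesThisTrip: 2 };

function riskScore(profile, current) {
    let score = 0;
    if (current.hour < profile.usualHourRange[0] || current.hour > profile.usualHourRange[1]) score += 1;
    if (current.newPayeeAdded) score += 2;                          // payout account changes are rare, weight them heavily
    if (current.destinationChangesThisTrip > profile.destinationChangesPerTrip) score += 1;
    return score;
}

function requiredAuth(score) {
    // The 'wall' gets higher as the behaviour drifts further from the learned pattern.
    if (score >= 3) return ['PIN', 'fingerprint', 'face capture'];  // 3-level step-up
    if (score >= 1) return ['PIN', 'fingerprint'];                  // 2-level
    return ['PIN'];
}

console.log(requiredAuth(riskScore(profile, current))); // e.g. [ 'PIN', 'fingerprint', 'face capture' ]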

The theory is always to build a wall a 'bit' higher than others have, the only difference being that this wall changes height dynamically when needed, leaving the person trying to cross it guessing most of the time!


Reference to the incident link(for anyone interested) - http://gothamist.com/2019/05/23/robbers_grab_lyft_drivers_phones_an.php








Wednesday, May 22, 2019

Eliminate the 'right' middleman to achieve transformational agility & prevent leaking costs..

Any industry, be it Retail, Manufacturing, Healthcare, Education, Pharmaceutical, Mining, etc., will have middlemen in the whole process from procurement to consumption. There is always talk of eliminating the 'middleman', which just pertains to removing layers and gaining cost advantages.

Well, my discussion today is about identifying the 'middleman' correctly, as a 'middleman' can be a physical entity, a human interface or a 'thought process' which seeps in and adds costs and budget to the overall process expenditure.

Let's take a case from an IT perspective (where the commodity is software). Most of us get an opportunity to start small as a basic developer coding for a particular product. Some of us also get an opportunity to sell the product and see it earn acceptance from a huge chunk of users. But then why do most organisations struggle to deliver a product quickly, or to adapt to a change or transformation quickly? After all, it's a bunch of developers working to flesh out a change, or product teams working to revise the process. Well, it's because of the 'middleman', and here it can be either the 'thought process' or the fact that the talent which created the product is no longer present.

You might think that a group of people who work to create something exceptional would be rewarded to stay and shape it. Well, maybe not, as someone who creates a product may simply be taken over by a 'middleman' who, posing as the creator, resells it back to management to generate long-term value. The middleman's insecurity might be the reason a developer is not rewarded, or is completely swapped out - it's very common. The result: a shift of focus and a cost to the company at the end (but who cares). The question is: how do you identify whether such a scenario exists, and whether it might have broader implications on long-term costs and agility?

It's simple: ask for changes to the product, and the cost-generation quotient will start to trigger. More people needed, learning curve, chaos, developers leaving the project or organisation, blame games - all of these are costs to the company. A good pattern to watch is whether the number of people entering and leaving a group is more or less constant; changes to teams impact costs and value generation and lose sight of the objective.

A creator always tries to enhance the first-built product; a creator always has a roadmap. A middleman never has a complete roadmap or a complete understanding of the product.

This is just one case. Another is when we are working in a particular setting and our thought process resists change, so we use the route we have always travelled. The creator may have a number of ideas in mind but won't share them, having always been pushed to follow the predefined route. This is where stale thinking needs to go, by taking small risks and identifying areas where a simple change might not be risky but could be beneficial on a large scale; the key is the agility of the change and the immediate inception and adoption rate. The benefits are new thought processes, team confidence, adoption of the latest processes and technology, and the company staying ahead of or competing with its competitors.

Key factors enhancing this are automation at various levels - automation serves to build maturity in processes - and the right analytics, which serve to build up the long-term product roadmap.

The biggest drawback to the above synergy is the entrenched thought process of a group of people resisting change; this is where management needs to step in to break that chain of thought.

Hence it's important to eliminate the 'right' middleman to obtain a technological success trend and process agility.


Wednesday, May 8, 2019

Simple text profile analyser using brain.js...simple ML

This post will cover how to create a simple text analyser that classifies text into one profile or another based on a training data set.

We will use 'brain.js' for the neural network that we train on the data.

This can be done in the following steps -
  1. Create a profile for a particular type of text - provide adequate training data.
  2. Create training data with an input/output structure.
  3. Write a script to ingest the data, splitting it into the input & output format, and process the training data.
  4. Train the network, iterating over the inputs & outputs.
  5. Execute the algorithm with new input on the trained neural net.
  6. Create an HTML page to host the scripts, which will be run in the browser console.
Sample training data


const trainingData = [
    {
        input: "Very well, thank you!",
        output: { at: 1 }
    },{
        input: "Inside my baby's room", 
        output: { a1: 1 }
    },
.
.
.
]


HTML page - will host a call to the script, which can be named text-analyser.js; this will run the 'execute' function and take an input text, e.g. 'my daily activity'.
   -- import the js with the training data
   -- import the js with the text analyser
   -- import brain.js for the neural net.
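A minimal host page along those lines might look like the sketch below (the file names are just the ones assumed above, and brain.js is pulled from a CDN here purely as an example):

<!DOCTYPE html>
<html>
  <head>
    <title>Text profile analyser</title>
  </head>
  <body>
    <!-- brain.js for the neural net -->
    <script src="https://unpkg.com/brain.js"></script>
    <!-- training data (defines trainingData) -->
    <script src="training-data.js"></script>
    <!-- text-analyser.js (defines processTrainingData, train, execute) -->
    <script src="text-analyser.js"></script>
    <!-- output appears in the browser console -->
  </body>
</html>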

Text-Analyser.js - this will host the code for - 

1. processing training data


function processTrainingData(data) {
    return data.map(d => {
        return {
            input: encode(d.input),
            output: d.output
        }
    })
}
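One piece the snippets here rely on but the post doesn't show is the encode() helper. It isn't defined above; a common, minimal approach (a hypothetical sketch, not the only option) is to map each character to a number between 0 and 1 and pad to a fixed length, so every input becomes a numeric vector of the same size:

// Hypothetical encoder: each character becomes a 0-1 value, padded to a fixed width.
const MAX_LEN = 40;

function encode(text) {
    const codes = text
        .toLowerCase()
        .split('')
        .slice(0, MAX_LEN)
        .map(ch => ch.charCodeAt(0) / 255);
    while (codes.length < MAX_LEN) codes.push(0); // pad short inputs
    return codes;
}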


2. train using the training data 


let trainedNet; // will hold the trained network exported as a standalone function

function train(data) {
    let net = new brain.NeuralNetwork();
    net.train(processTrainingData(data));
    trainedNet = net.toFunction();
};


3. execute the code with the training completed on new input


function execute(input) {
    let results = trainedNet(encode(input));
    console.log(results)
    let output;
    let certainty;
    if (results.at > results.a1) {
        output = 'Type A'
        certainty = Math.floor(results.at * 100)
    } else { 
        output = 'Type B'
        certainty = Math.floor(results.a1 * 100)
    }

    return "I'm " + certainty + "% sure that text was of  " + output;
}



All together, the statements below will execute the code when the HTML page is opened in a browser, and the output will be visible in the console.

train(trainingData);
console.log(execute("This is my sweet child!"));


Reference : my text..ai.



Tuesday, May 7, 2019

A few Google I/O 2019 highlights ... from a dev perspective!


Duplex Web 

 - Allows for booking appointments on website via assistant.

Next Gen Google Assistant

- Quickness - performance enhancement, opens & switches between apps very quickly.
- Enabling support for - 'how to' item sites, smart displays automatically loaded with right visualization
- How to template added to actions console.
- Voice based entry points from assistant into the app - health & fitness, finance, ride-sharing, food ordering.
    --> use your voice to start the run with nike run club  -  simple intent to deep link mapping to app.
    --> interactive canvas - full screen display - using voice, visual & touch - HQ trivia game updated with this experience.

ML enhancements

- ML transported to the site of consumption - handhelds, phones, smart phones - footprint of ML reduced to 0.5 gb via recurrent NN in ML kit.
- Vision -> landmark detection, image labelling, barcode scanning, face detection, Natural language - language detection, smart reply,  Custom - model serving.
- On device translation API for 59 language support.
- Object detection combined with product search API to search retail product effectively.
- AV, VR - apps like IKEA use the above APIs.
- Auto ML - can train accurate models on own data sets, Cloud AutoML tables - ingest & predict ML models.
- Video labelling & video intelligence to search & logically arrange the video content.
- Cloud TPUs speed up training significantly - TPU pods.
- Open-source TensorFlow - 2.0 launched for researchers & business - intuitive ML models; JavaScript developers can use Node.js to deploy models to TensorFlow out of the box; TensorFlow Lite installed - is very fast.
- Federated handoff

Firebase enhancements 

- build your app with fully managed backends, provide monitoring & provide better insights with FB & ML kit - auto vision image edge.
- upload images, click to train model & publish image.
- 1. Auto ML dataset creation, 2. upload of images, 3 train - latency, accuracy - how to train - select training time, once training is finished - evaluation provided with precision percentage & details.
- 4. step - publish the model, 5. push to app, 6. app will dynamically download the model and use it.
- Performance monitoring - startup time, responsiveness - expanded to web - available in beta instantly.

Web platform - Chrome enhancements

 -- reducing startup time - reduced by 50%, v8 - js engine, uses 20% less memory.
 - image lazy loading - add 'loading' attribute to the image - via checking for factors like connection speed, few kilobytes loads an image with a reduced size.
- lighthouse - budget enforced - like 200 kb size, target metrics - page load time - connects with servers to maintain the response within the budget.
- Google duo for web - progressive web app, light and can be used as an immersive experience.
- Google web search uses latest features behind the scenes.
- Web security - all traffic moved to https, private and secure cookies, easy private controls - tracking of sites from across web, anti-fingerprinting .
- web.dev - site created for building, help on web, optimisation on popular platforms like react.

Flutter enhancements

- Technical preview of flutter on the web.
- Write once and use on any device - android, iOS, mac, windows, web.. 
- Flutter sandbox unveils faster processing speeds.

Chromebooks

- Linux for Chromebooks available for devs - Linux-ready Chromebooks launched.

Monday, May 6, 2019

The social dilemma... need for some redesign...

Today almost everyone agrees and comprehends the fact that the top social sites were designed to push the user to spend more time within the site, in order to achieve the goal of more hits and usage.

Data is a key player when it comes to building something for a consumer, and if you want to make sure the consumer always contributes a fair percentage of the sale, then getting the right data is very important. The primary objective behind the above goal was to get as much data as possible from the user and, behind the scenes, create an image of the user - like an avatar which has all the characteristics of the user - and then experiment with the avatar for probabilistically favourable outcomes.

The means was simple: give the user something they would want so as to stay on the site - an image, a notification about a status update from a friend, relative or family member, or a topic sensitive to the user at hand. Slowly it started getting out of control, which now leads to groups influencing the decisions of individuals and minds.

The point about data leaking or being leaked is almost irrelevant, as it would be next to impossible for any data on these sites 'not to leak' while still preserving the incentive for the user to come back, without rapidly decreasing productivity.

The question is what needs to be done to make sure social sites maintain sanity in the content created and promoted within the realms of the social engine.

This is where machine learning and intelligence can help if used the right way but the fundamental architecture needs to change.

A simple paradigm is: make it easy. Humans would use an interface to express their thoughts and perspectives, and it is up to that interface to present them with the best possible way to do it.

Okay, now you are confused about what I am talking about... let me explain...


A human uses an 'interface' - here, an 'intelligent assistant' - and tells the assistant what they want to publish.

For example: I want to publish 10 photos, the occasion is a birthday party, the post is for my close friends, and the title should be something exclamatory and energetic.

The 'assistant' looks at the content and gives back suggestions on the proposed layout and presentation; the user either chooses one or makes adjustments to make sure it matches the creative insight behind the post.
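To make the exchange concrete, the request handed to the assistant could be a small structured object rather than a pile of clicks (the field names and suggestion logic below are hypothetical, just to illustrate the shape):

// Hypothetical post request the user expresses in one sentence and the assistant structures.
const postRequest = {
    media: ['photo1.jpg', 'photo2.jpg'],          // ...up to the 10 photos mentioned above
    occasion: 'birthday party',
    audience: 'close friends',
    titleStyle: 'exclamatory, energetic'
};

// The assistant returns ranked layout/presentation suggestions for the user to pick or adjust.
function suggestLayouts(request) {
    return [
        { layout: 'grid', title: 'What a night!', coverPhoto: request.media[0] },
        { layout: 'carousel', title: 'Best birthday ever!', coverPhoto: request.media[1] }
    ];
}

console.log(suggestLayouts(postRequest));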


Now what does this accomplish? 


Well, a few things -

1) Makes the job of posting easy.
2) Reduces the work of trying to find the best combinations.
3) Goes behind the scenes and checks the content for security before considering it eligible for posting.
4) Feeds creative redesign back to the engine - this is the most interesting part, as human and machine minds are working together to create something new.
5) Also encodes the communication along a historic timeline using blockchain or a crypto factor.

So, this is where the job would be much more simplified..

As of today, major social sites are working on having behind-the-scenes machine learning algorithms scan content for, predominantly, the key factors below, to make sure that the overall health of communication doesn't degrade and its toxicity doesn't grow.

They monitor the below key points -

  • Shared attention - how much?
  • Shared reality - what kind?
  • Receptivity - how much are people liking it?
  • Variety of perspective - uniqueness factor, how much?
Then there are other factors, like seeing trends or patterns, classifying communications as machine- vs human-generated, and watching the trend of generation and propagation based on events, timelines and the geographical regions where the communication originated and spread thereafter.
These are like watchdogs which are already being pushed into the system to monitor, capture and notify when a given communication or channel poses an imminent problem.
Still, I think a lot needs to be done to make sure social sites are safe, promote healthy conversation, and are not tools used by an exploiter to get information for money.
Also, a well-engineered social platform would in turn incentivise the customer for the time spent on it, via either a return program for some kind of monetary value or a score, so that the user makes a conscious effort to promote the right content.







Wednesday, May 1, 2019

With so much buzz about natural language processing, let's talk about the core elements involved...

As of today, natural language processing has taken various shapes and forms as a frequently used buzzword!

Before going into all the advantages or the 'hype' associated with it, let's talk about something 'core' - the framework or the main pieces involved here and how these work together.

The items that I am using for discussion here are mostly from leading players in the conversational realm (like Google, Amazon, Apple, df, Microsoft & so on); some pieces might have different names, but the idea is the same.

What is natural language processing?

Basically, in simple words: an interface to which you feed input in any given language, which gets interpreted and processed to give back an intent or a set of intents.

The language in its natural form is called an 'utterance', and after processing - where most of the magic happens - it gets transformed into an intent or purpose.

There can be some processing applied to an 'utterance' before it is submitted to the 'magic box', like spell checking if initiated from a chatbot, or checking for commonly misused words which can lead to errors - early error detection.

Once the input is fed into the 'magic box', the result is compared against a given 'confidence factor', which can shift based on the maturity of the associated realm of context. If the result is within the confidence factor - well, the 'magic' worked and an intent is resolved. If the confidence was too low, then a 'fallback' intent is triggered.
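A minimal sketch of that confidence/fallback step (the intent names, scores and threshold below are hypothetical, just to show the flow):

// Hypothetical output of the 'magic box': candidate intents with confidence scores.
const scored = [
    { intent: 'track_order', confidence: 0.82 },
    { intent: 'cancel_order', confidence: 0.11 }
];

const CONFIDENCE_THRESHOLD = 0.7; // shifts as the domain matures

function resolveIntent(candidates) {
    const best = candidates.slice().sort((a, b) => b.confidence - a.confidence)[0];
    if (best && best.confidence >= CONFIDENCE_THRESHOLD) {
        return best.intent;          // the 'magic' worked
    }
    return 'fallback';               // too low: trigger the fallback intent
}

console.log(resolveIntent(scored)); // 'track_order'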

The key here is to
       - A) understand how the 'magic box' works to interpret utterances into intents.
       - B) understand how fallbacks can be shaped into known intents over the course of time.

Part A) is what is called 'machine learning', which can use any known learning engine to process the data, applying an algorithm or set of algorithms to get the output.

Some of these input factors work backwards, like setting up 'entities' which can have synonyms or known inputs - which can give an exact language-parse mapping - but the core question here is: does the system possess intelligence?

If I feed the system C & D today and tell it the formula to compute E, and there is later a variation in the formula, will the system adjust and recognise the variation to process E correctly? That's the key to the learning factor over a period of time.

How to do this? Unsupervised, supervised or reinforcement learning - there are a number of ways.

Part B) is important in terms of identifying whether what I am asking for is
    - i) too complex? ii) in a different format? iii) or something which doesn't make sense.

Most of the time it's i) or ii), but the process of getting there involves supervised learning and setting up the right labelling, so that the system can recognise, over a period of time, how this works and makes sense.

What tools to use? There are many - why not start with basic analytics tools and work backwards.

This is just a very basic core framework - more specialised forms may include 'intent forecasting' - 'behaviour forecasting' and 'threat forecasting' using the core framework.

(For more details please refer to - google, microsoft or amazon conversational flow documentation)

In next post.. we'll try to cover another important topic ... 'data labelling'.


Sunday, April 28, 2019

Can you tell me why you are slow?

Have you ever considered this question in the context of a human being talking to another human being? Of course you might have at some point in your life's journey, or maybe not yet but you may consider it in future - but today's discussion is not about that.

So what are we talking about? We are looking at this same question directed from a human to a machine, an application or an operating system.

In my earlier posts I might have pointed out that operating system hardware/software has not undergone any radical changes to significantly alter the way it interacts with humans, given the technologies available today. We will talk about 2 topics -

1. Conversational error detection.
2. Operating system architecture (top level plausible components).

Imagine you are starting your day with a cup of coffee, fully awake & in top form to clear your plate at an accelerated pace, and suddenly you get slowed down by the system not responding for some reason. Now your whole focus shifts to remediation, and you start closing screens and windows to get it working faster. Imagine if you didn't have to do that and could just ask the system - 'Could you tell me why you are so slow?' - and the system would in turn analyse and respond with the top 3 reasons for its slowness along with the corresponding remediation actions, to which you could then respond - 'Okay, let's try #1 or #2 or #3.' Wouldn't that be lovely?

So let's see what is needed to reach the above stage:

  • As of today, most of the investigative activity is done by humans capturing and applying contexts to data to connect flows which make sense logically.
  • Contexts come later in the game; first comes data. The data flowing from one application to another and running through the operating system should leave the right footprint in order to be investigated.
  • A change in perspective needs to happen: currently the data footprint is created in a way humans can interpret, so additional logging or accurate labelling needs to happen, and this should happen in the core system as well as in the applications supported by it.
  • Once this is complete, data can be correlated across a given context - the system should be able to fetch the context in question, apply the labelling to it & correlate to get the data. For example, "why is the system slow?" is one aspect, another being "why is this application slow?" - both have different contexts for correlation.
  • Now, once the correlation is complete, an algorithm has to be applied to decide what is causal for the given event and what resultant action outcomes are forecast.
  • This can be done by applying learning algorithms to the operating system to achieve the best resultant forecast.
  • This would in turn provide the result, which is the set of reasons & the plan of action (a rough sketch of these last steps follows below).
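
Here is that rough sketch of the correlation & ranking steps - labelled data points, filtered by a context, run through some naive causal rules standing in for a learned model, and ranked into the top 3 reasons with a suggested action. The metric names & thresholds are invented purely for illustration -

// Rough sketch: correlate labelled system data for a context ("system slow")
// and rank the top probable causes, each with a remediation suggestion.

interface DataPoint {
  context: string;   // e.g. "system", "app:browser"
  label: string;     // accurately labelled metric, e.g. "cpu_usage_pct"
  value: number;
}

interface Finding {
  reason: string;
  score: number;     // crude causal score used for ranking
  action: string;
}

const telemetry: DataPoint[] = [
  { context: "system", label: "cpu_usage_pct", value: 95 },
  { context: "system", label: "free_memory_mb", value: 180 },
  { context: "app:browser", label: "open_tabs", value: 87 },
];

// Very naive "causal" rules standing in for a learned model.
function diagnose(points: DataPoint[], context: string): Finding[] {
  const findings: Finding[] = [];
  // A system-wide question aggregates application data; an app question only its own.
  const relevant = points.filter((d) => d.context === context || context === "system");
  for (const p of relevant) {
    if (p.label === "cpu_usage_pct" && p.value > 90) {
      findings.push({ reason: "CPU saturated", score: p.value / 100, action: "Close heavy processes" });
    }
    if (p.label === "free_memory_mb" && p.value < 256) {
      findings.push({ reason: "Memory pressure", score: 0.8, action: "Close unused applications" });
    }
    if (p.label === "open_tabs" && p.value > 50) {
      findings.push({ reason: "Too many browser tabs", score: 0.6, action: "Suspend background tabs" });
    }
  }
  return findings.sort((a, b) => b.score - a.score).slice(0, 3); // top 3 reasons
}

console.log(diagnose(telemetry, "system"));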


So we have the set of steps on the software side, but another challenge is the hardware & framework - current operating system hardware frameworks might not be adequately suited for AI.

Chips are based on the traditional concept of a central processing unit, which was the best way to design the nerve centre of computer systems. Now, with the advancements in AI, some thought should be given to creating a separate chip for AI - this would help decouple any issues arising out of a compromised AI unit acting negatively. Similarly, security should live on a separate chip introduced into the hardware, keeping it isolated from being attacked or hacked.

So in short, we are talking of a CPU along with an AI chip, a security chip & a graphics interface making up the complete underlying hardware, enabling the software to support advanced NLP, machine learning & AI capabilities in an OS. That may be an initial thought, but there is still a long way to go..



Tuesday, April 16, 2019

Some numbers & ways an organisation can pace towards ingraining intelligence into its fabric..

To begin with, a lot of talk is currently in progress citing the importance of thinking in the direction of AI adoption. Nearly all the major technology disruptions in progress - be it intelligence gathered via big data, the internet of things, blockchain, cloud computing or flow-based pattern recognition systems - frame their advancement as a path towards collaborating in an artificially intelligent environment.

The question is why, so let's talk about some numbers here.. 


As per surveys attempted by prominent survey giants, 47% of firms are talking about adopting an AI-focussed roadmap, starting with embedding at least one AI capability in their business processes.

Out of these 20% are using AI in their core part of business processes.

30% are looking at piloting AI in one way or another over the course of next year or so.

Current spending holds at around one tenth of the budget for 58% of the firms adopting AI, but this is expected to grow to 71% in the next 5 years.

Most of the firms use AI in marketing and sales (52%).

The value generation quotient of AI has been prominent in manufacturing & risk analysis - 41% reporting significant value, and 37% in marketing and sales reporting moderate value.


The primary areas of inception are robotic process automation, NLP & machine learning.

With all these numbers, the question is how to start progressing in the direction of adoption?


The first approach should be to think differently and move away from a siloed function centric mindset towards an integrated process centric mindset.

In order to do that, establish data points across systems to provide more streamlined flow of data across business units.

Establish a more ingrained process centric view which breaks down the approach into adoption areas based on adoption strategies for short and long term.

De-prioritize functions in favour of processes.

Think about purchasing AI services.

Processing & data management is key so approaches which allow for enhanced processing and storage management of data should be a priority.

Training data is key, and initial training would need specialists' supervision.

Analytics forms a key data entry point in any AI process adoption - so get your analytics right!

Although the adoption process is a long journey, the above factors might be able to give a start to that path..

References & further reading - McKinsey reports, Forbes & O'Reilly articles on AI adoption.

Saturday, April 13, 2019

Conversational interfaces - key points while architecting & updating versions..

Conversations are an important topic when it comes to designing conversational interfaces; the key challenges here are -

1. Each platform has their own framework to handle conversations.
2. Being an emerging area of research, platforms upgrade often, in ways which might seem a deterrent for product teams to embrace, hence slowing down the pace of adoption.

Although there might be other factors depending on the area of induction or technology used, the above 2 are the most common.

In this post, I am going to cover - with the help of Dialogflow as a conversational platform - some techniques which might be useful to streamline this change process.

Each conversation architecture has some key components during conversation journey -

1. outgoing message interface - conversation prompt - modular input based on multi-modular spectrum - with connected devices.
2. conversation closing interface
3. conversation contexts
4. locations interface (this can include geographical & physical address)
5. storage interface (or caching)
6. service invocation interface
7.  intent handler interface.
8. account linking & transactional interface
9. permissions interface.

The above interfaces form the key architectural units - the implementations can change, but the essential point here is that even if a platform goes through an upgrade, your application framework & logic stay decoupled, with no impact.
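
A small sketch of that decoupling (names invented, not any platform's actual SDK) - the application logic talks only to its own interfaces, and each platform or platform version gets a thin adapter -

// Sketch: the app depends on its own conversation interfaces; platform SDKs
// (Dialogflow, Alexa, etc.) sit behind thin adapters, so an SDK upgrade only
// touches the adapter, not the application logic.

interface OutgoingMessage {
  prompt(text: string): void;          // outgoing message interface
  close(text: string): void;           // conversation closing interface
}

interface ConversationContext {
  get(key: string): unknown;           // conversation contexts
  set(key: string, value: unknown): void;
}

// Application logic is written only against the interfaces above.
function handleMissingDelivery(out: OutgoingMessage, ctx: ConversationContext): void {
  ctx.set("issue", "missing_delivery");
  out.prompt("Sorry about that - should I open a claim or resend the item?");
}

// A hypothetical adapter for one platform; swapping platforms (or versions)
// means re-implementing this class only.
class ConsoleAdapter implements OutgoingMessage, ConversationContext {
  private store = new Map<string, unknown>();
  prompt(text: string): void { console.log(`[ask] ${text}`); }
  close(text: string): void { console.log(`[close] ${text}`); }
  get(key: string): unknown { return this.store.get(key); }
  set(key: string, value: unknown): void { this.store.set(key, value); }
}

const adapter = new ConsoleAdapter();
handleMissingDelivery(adapter, adapter);

Swapping Dialogflow v1 for v2, or Dialogflow for something else entirely, should then only touch the adapter.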

There are multiple articles which cover migration strategies when moving between versions of a conversational platform, like Dialogflow v1 to Dialogflow v2.

For example - switching from tell to close, creating a suggestions or permissions object via an available class rather than a helper function, or using promises to handle backend service calls.
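
As a rough illustration of the kind of change that migration involves - this is the v2-style handler shape from the actions-on-google Node.js library, with the v1 equivalents noted in comments; treat the exact signatures as approximate and confirm against the official docs -

// Rough sketch of an actions-on-google Node.js v2 style handler.
// In v1 the handler typically called app.ask(...) / app.tell(...) on an app
// instance; in v2 a conv object is passed in, tell() becomes close(), and
// suggestions are created via the Suggestions class.
import { dialogflow, Suggestions } from 'actions-on-google';

const app = dialogflow();

app.intent('Default Welcome Intent', (conv) => {
  conv.ask('Hi! Do you want an update on your delivery?'); // v1: app.ask(...)
  conv.ask(new Suggestions('Yes', 'No'));                  // v1: helper/assisted function
});

app.intent('Goodbye', async (conv) => {
  // v2 handlers can return promises for backend calls instead of nested callbacks.
  const status = await Promise.resolve('delivered');       // stand-in for a real service call
  conv.close(`Your package status is: ${status}. Bye!`);   // v1: app.tell(...)
});

export { app };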

A good article which covers this in the context of google is given below for reference -
https://medium.com/google-developer-experts/migration-points-for-upgrading-to-actions-on-google-nodejs-version-2-4640648ab8b5

Apart from this, Google has a bunch of documentation to support this (like the one here - https://dialogflow.com/docs/reference/v1-v2-migration-guide), but application development should be completely decoupled from the fact that any third party API can change and might not be backward compatible.

I am expecting a common framework to come into the picture in the near future which establishes a convention for conversational interfaces, but the above core areas are fundamental to start with.

Sunday, March 31, 2019

The concept of machines as assistants..should we fear or should we embrace?

A lot of conversation today is driven by the idea that AI would be the genie providing new miracles in every field, and then there is the thought that AI will act in controlling and disruptive ways.

At this point, though, it might be evident that it will be none of the above - at least when operating alone.

What do I mean by this?

Before we answer that let's understand why AI is different - as an approach.

Problem solving has always been a key factor in understanding logic; each problem has some inputs, a result and a method to reach that result.

Traditional computation provides a means to reach a result by following a given logic - call it A.

In human learning, the idea is to understand the concept and find the similarities between problem A & problem B, so that you can break down the construction of the logic to find a solution which works for both A & B, and maybe for another set of problems along similar lines.

So is the case with AI: the inputs & outputs are present, the computer decides the best approach to find a solution, and then applies that solution when new inputs are presented.

Well, here comes the interesting part - in order to find a solution, the system needs to ascertain a method, which in turn needs data, in the form of training, to get to that solution. Real world scenarios provide the data & humans provide the confirmation that a set of results matches the problem in question as output.

So in short - the intelligence is driven by the capacity of human decision-making. This is an important point: as long as this is controlled with the right perspective, the results would always be consistent and known. They can be optimised and creative but wouldn't be in any way disruptive, unknown or surprising, unless manipulated to be so.

The question is who can do that - well, it has to start with humans supervising the models at some point, but once trained, the models can themselves start acting in ways which might not be in line with human operating realms. Hence there is a need to understand how to build models which can act as guards, identifying the pattern of a source of issue; the whole supposition is - if you can control it, you can prevent it. Reversing damage in learning models is a long process, hence the need for control: removal of affected neurons, retraining, validation & retrofitting back into the intelligence chain. Again, monitoring goes hand in hand with how much you want to monitor over a period of time.

So - in short - both human and 'AI' have to work together here, at least while the structure is being completely set up, which will take a long time in itself.

Currently we see a lot of benefits & examples of this in the fashion industry, the retail space, healthcare & space research.

The trends are seeping in very fast - players big and small have at least got a model or framework to create learning sets - Amazon's got SageMaker, Google's got DeepMind & api.ai, Facebook's got FAIR & wit.ai, Apple's got its ML engine & init.ai, IBM's got Watson, Intel's got api.ai, Microsoft's got Azure ML & Oracle's got AI Apps, Palerra & Crosswise.

Also, the learning has been segmented, and a lot of learning models are evident in the supervised, unsupervised & reinforcement learning spaces.

For example, for supervised learning there is classification & regression; in the unsupervised space there is clustering.
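
A toy contrast of the two, on made-up one-dimensional data - the supervised model fits to known labels and can name its prediction, while the unsupervised 'clustering' only groups the points without naming them -

// Toy contrast between supervised and unsupervised learning on 1-D data.

// Supervised: labelled examples -> learn a per-class mean, classify by nearest mean.
const labelled = [
  { x: 1.0, label: "small" }, { x: 1.3, label: "small" },
  { x: 8.9, label: "large" }, { x: 9.4, label: "large" },
];

function trainMeans(data: { x: number; label: string }[]): Map<string, number> {
  const sums = new Map<string, { total: number; count: number }>();
  for (const d of data) {
    const s = sums.get(d.label) ?? { total: 0, count: 0 };
    sums.set(d.label, { total: s.total + d.x, count: s.count + 1 });
  }
  const means = new Map<string, number>();
  for (const [label, s] of sums) means.set(label, s.total / s.count);
  return means;
}

function classify(x: number, means: Map<string, number>): string {
  let best = ""; let bestDist = Infinity;
  for (const [label, mean] of means) {
    const dist = Math.abs(x - mean);
    if (dist < bestDist) { best = label; bestDist = dist; }
  }
  return best;
}

// Unsupervised: no labels - just split points around the overall mean (a crude "clustering").
function cluster(points: number[]): number[][] {
  const mean = points.reduce((a, b) => a + b, 0) / points.length;
  return [points.filter((p) => p < mean), points.filter((p) => p >= mean)];
}

const means = trainMeans(labelled);
console.log(classify(2.1, means));          // "small" - label predicted from training data
console.log(cluster([1.0, 1.3, 8.9, 9.4])); // two groups, but no names for them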

The question is that each of these learning methodologies needs adequate methods to tell favourable learning from unfavourable learning.. that's a road which will take some time to travel.


Sunday, March 24, 2019

Understanding the core function of any product...an opportunity for machines & humans to work together!

It's been a long time since companies started developing products & services frequently consumed by a wide gamut of the population.

Some of these firms are leaders in their field of innovation, they set the pace for others to follow by breaking technological myths and setting up new dimensions & perspectives for the general consumer to use the product.

It becomes crucial for these firms to make sure that the critical or core function of the product still works without any issues - it needs to be completely fail safe.

However, it has been seen in the past few days that a critical core feature hasn't lived up to expectations. There can be various reasons for this - some of them might be as follows -

1. core feature identification was flawed or skewed.
2. the fail scenarios were not properly evaluated.
3. third party issues which lead the critical feature to stop responding or respond erratically (although this is a virtual scenario & I will explain why further below).

What am I talking about here?
Let's take an example to understand. Say a popular mobile phone manufacturer makes an awesome mobile phone - it's tech trendy, quality proof, secure. Now the manufacturer replaces a function which was originally mechanically controlled with a digital substitute - let's say touch or haptic feedback.
Sometimes firms go forward with taking bigger risks, like eliminating the mechanically controlled control point completely. This is a perfect scenario for innovation, but the major question here is -

a) is that mechanically controlled feature a core functionality? How do we answer this? The best way is to invert the question: if that feature were removed completely, could the device still function without it? If no, then the answer is yes - it is core - and in that case, without question, it 'has to work always'.
 - in the above example - this might be the 'home' button on the mobile device.

So we answered the first question (although I am sure there will be well educated minds challenging the basic response to this common man's question and, more hilariously, justifying it too) - let's move on.

So what happens next - this core feature should always work. In both cases of upgrade - 1) changing the technology from mechanical to digital, 2) removing the button completely - there should be enough scenarios to evaluate the failure possibilities of both. Also, there should be third party integration evaluation so it is confirmed that the button works all the time.

So here testing plays a major role, and the firm's focus should shift to active testing, not just by using automated methods but by engaging a vast number of scenarios. The question is how do we do that?

This brings an interesting concept into focus - experience feeds in a lot of test cases - as long as both 1) & 2) have been in existence for sufficient time periods and have had unique usage patterns.

Then there is testing for future products that are in development but launching within the 1 year timespan of the feature switch. The concept which helps here is letting humans and computers work together to perform this part of the product cycle validation. The possibilities here can be newly found - like using an AI engine to evaluate scenarios, mix & match them and test them thoroughly, or having a simple third party component created via AI software to simulate a test which switches the feature to a fallback mode and checks its fallback coherence in that scenario.
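
A small sketch of that mix & match idea - generate every combination of a few conditions and check that a fallback remains reachable in each one. The conditions & the check are invented for illustration, not a real test harness -

// Sketch: generate combinations of conditions and verify the fallback path for each.

const buttonModes = ["mechanical", "digital", "removed"] as const;
const voiceStates = ["available", "unresponsive"] as const;
const environments = ["handheld", "car_integration"] as const;

type Scenario = {
  button: (typeof buttonModes)[number];
  voice: (typeof voiceStates)[number];
  env: (typeof environments)[number];
};

// Cartesian product of the condition sets - the mix & match part.
function generateScenarios(): Scenario[] {
  const scenarios: Scenario[] = [];
  for (const button of buttonModes)
    for (const voice of voiceStates)
      for (const env of environments)
        scenarios.push({ button, voice, env });
  return scenarios;
}

// Invented rule standing in for the device under test: some fallback must remain
// reachable in every scenario for the core function (going "home"/navigating).
function fallbackAvailable(s: Scenario): boolean {
  if (s.button === "mechanical" || s.button === "digital") return true; // the button itself
  return s.voice === "available"; // button removed -> voice is the fallback
}

for (const s of generateScenarios()) {
  const ok = fallbackAvailable(s);
  console.log(`${s.button}/${s.voice}/${s.env} -> ${ok ? "PASS" : "FAIL: no fallback"}`);
}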

Third party issues are the ones wherein nobody takes the blame. Unfortunately, there is no such thing as a third party issue: if the product is owned by a firm, it's not owned partially, and every third party issue which is core to functionality should have a fallback built in - so the third scenario is basically a virtual one.

Well, why so much hustle? Because it seems that one fine day, unfortunate customer 1 - who was using the mobile device (via car play) with so much trust in the company making it that he/she spent their savings to get it - couldn't activate the home button while driving to help with navigation, with voice also failing to capture any command, and had no option but to stop and restart the device. That might have cost them an accident, or it might have gone worse.

Still, many would disagree that this is a core feature, but the essence of the story here is that when it comes to core feature identification, more thought needs to be given, given current product standards.

In case anyone is interested and might not have guessed - the product in the above example was an iPhone with the latest software version installed (but that's not as important as the detail above is..).