Monday, December 31, 2018

2019 ... a look into the years ahead

Not sure which category to put this post in, but I think it's important to discuss what the future holds and to lay out a point of view in terms of technology.

Currently one visionary holds most of the stake in the transportation of the future - whether on the ground, city to city or long distance, or even from one planet to another. The underlying idea is to use renewable energy and lessen the effect of pollution on the planet.
Transportation is going to see major breakthroughs in the coming years, and so are the things needed to make them happen.

It might be the moment to hold on to the best gas-powered car of today, since electric vehicles will soon take over the market, and that shift should happen rather quickly.

Primarily, the visionary firms of today are leaping towards making travel beyond Earth possible, and embedded in that effort is another one: understanding how to build the foundations when you need to travel to a different place.

Most important is understanding the role of autonomous systems; there is a big focus both on connected systems and on systems which function independently. With technologies like 5G coming to market, there will be a big push towards virtual reality and augmented reality, with an equal push on systems which can function in, or be trained in, augmented reality.

If we need to build a new station, even as a starting point a lot of foundation-specific development needs to take place. That is where the internet of things comes into the picture; the concept will not stay within the realms of the connected world but will extend into the autonomous world as well.

Trends suggest that a major firm may come into the picture holding a stake in autonomous equipment, using modern-day tools like artificial intelligence and machine learning to substitute for human decision making.

Governance platforms will begin to appear which govern the usage of artificially intelligent equipment; an 'internet of artificially intelligent machines' might be a buzzword of the coming years.

With the world effectively becoming smaller, the rise of crypto and blockchain might help standardise processes across different segments of the world.

These are just speculations, but there is a lot more to come in terms of the technologies that will develop to help the above areas shape the world into a better place.

Thursday, December 27, 2018

Let's talk about customer service... & some simple math...

Well, I might not have written this down originally, but one experience prompted me to do the topic some justice by discussing it.

Here is how it begins... as a code evangelist, every coder/developer wants to use the latest technologies available in the best possible way - and what more could you ask for if they are available for 'free'.

Or 'are they'? Every time we come across a piece of technological marvel we ask what it costs, and when we learn it's free we pretty much 'forget' about it and start using it. Well, call it a virtual free zone... let's see how. Selling a product involves various lure tactics - almost any product today has some - but the ones which are fair and visible are the most beneficial, as they lay out the risks and advantages honestly.

As of today we use a lot of products off the cloud with the notion of them being 'free' and available. So where do we fall short? They hit back at us at a later stage - maybe in terms of capability limitations, data exposure, cost (yes, you heard that right - it was free, wasn't it?) or performance (again, you heard it right - it was on the cloud, wasn't it?). The first place we presumably fall short is in asking the right questions the moment we start using the free product.

Suppose a well-known service provider offers a platform as software-as-a-service, and it's free (well, it's not - it's just a matter of time before you find that out). Day 1, you start using it. Day 30, you are quite satisfied with it. Day 90, you have already rolled the product out to a bigger base and started telling people about it. Suddenly your product is peaking, everyone loves it, and 'one fine day' it stops functioning, or doesn't function how it should (now what was the probability of that? Maybe 2% when you started using it - it has now gone up to 60%). The only way to fix this is to buy a package which gets your product running again - but ah, you were never told about that. Think on it: did you ask the right question?

Let's talk about another scenario: everything is still running fine, but you get a bill for the product - a figure that is 400% of what you capped your product budget at. You wonder how that happened. When you get through to customer support, they tell you that you need to buy a support plan just to be told why your bill shot up. So you think to yourself - seriously? You took a free product at the face value of the company selling it - well, guess what, nothing comes at face value. You might have just learnt that, but the experience gets more bitter still: you are only trying to understand why something capped at 20% of your budget was charged at 400% (these days every big firm needs your credit card before you can start using their services). Now you have the headache of dealing with customer support, who will at best assist you with information you already know - except that they will take away more time, making the process lengthier, which will probably negate your future plans for that product. And they are getting paid for it - so unless you buy additional support to get more information (which you shouldn't be getting so fast anyway, until you agree to a term plan), you have now lost more than you could have imagined when you started with a 'free' product.

And think for a moment about the company which put those people in customer support - the manager you escalated the call to, the VP of product justifying the cause. They are all getting paid to do what? To justify 'nothing': no new information, the same old stuff. And who is paying for them? Some poor company trying to get their product working. Now that's not funny, is it?

And it doesn't end here. If you start to quote your experience on a channel, qualified people will start negating your story, and the probability of the company working to improve things is 0.02%. That's the sorry part - they earned their degrees to do something constructive but are now just trying to re-sell a costly product that was originally put on sale as free. So, long story short: nothing comes free; ask the right questions. Any company (whether it's in the top 3 or the bottom 3 worldwide) has a duty to explain their package and product details - if they fail to do that in sufficient detail, proceeding is always an unknown risk, as face value can be lost in a jiffy. The next thing you know, the company is sitting on your data, which you have already lost and need to pay to get back - and you cannot even be sure that the firm won't be hacked tomorrow. It's very simple math, but it still proves very difficult most of the time.

Well, I will not quote the name, but the experience above involves one of the world's best technology product firms and a startup trying to use their services. That part doesn't surprise me at all - but the buying habits of consumers really do, and will not stop surprising me!

One of my resolutions for the new year is to be a more informed consumer, because I love technology and will always want to use the right tool, best suited for the job - but I need to be fairly informed as well.

Thursday, December 20, 2018

'Flutter Live' .... a much needed single-platform application development framework.. high hopes on this one!

So it seems Google launched the stable release of Flutter a few days back.. let's deep dive into the details...

Before we begin, we must understand that we are living in the mature age of mobile app development. Gone are the days when mobile app development was a small child growing up and surprising everyone; it suddenly grew up too fast, and most have realised that a more mature approach to building mobile apps is needed before the downward trend starts.. so let's go into the details.

Built by Google developers using native C and C++, along with Dart and the Skia graphics engine, Flutter gives the developer embedded, mobile, web and desktop targets, along with forthcoming support for watchOS.

The whole experience is seamless across iOS and Android.

Below are the major components of flutter -

1. Dart platform - the Flutter experience is built using the Dart language.
2. Flutter engine - written in C++, it provides low-level rendering support and is composed of cross-platform code.
3. Foundation library - again in Dart, it gives the APIs to use, and there is a huge library of widgets which can be plugged into the code base.
4. Design-specific widgets - an immutable description of a part of an interface; these can be hot reloaded and previewed to suit the design they are used in.

4 characteristics which define flutter are -

1. Beautiful - controlled by widgets delivering a pixel-perfect experience.

2. Fast - Skia graphics, compiled to native machine code.

3. Productive - stateful hot reload without restart, giving a glimpse of the experience in one shot.

4. Open - can be extended, and is open source under a permissive license.

Benefits - a single application platform which lets you focus more on the UI/UX aspects of the app design, rather than dwelling on the challenges of what will or won't work on each platform and then wading through the complexities of per-platform code differences.

Appears to be a further step towards ending the monopoly of OS-based development.

Some advantages of flutter

- Ahead-of-time compilation
- Animations recomputed beforehand
- The Skia graphics engine is hardware accelerated and runs directly on the graphics card
- Backward OS compatibility for both iOS and Android (at least 5 years)
- Develop in a single codebase and publish to the Android and iOS stores in one go
- Easily embed high-speed widgets like video into the graphic design
- A widget inside a widget behaves like an actual widget, seamlessly

It's quick and easy to install - flavours exist for:

macOS - integrates directly with Xcode, simple to install.
Linux - installs and runs with the Android emulator and Android Studio.
Android (needless to say, works embedded in Android Studio and its emulator).

Refer to the resources below from Google for more information..

Resources -- https://flutter.io

I will try to put together a post with an actual app built using Flutter the next time I write on this topic.

Thursday, December 13, 2018

Not property but algorithms might be the asset of the future... so which algorithm do you own?

As awkward as it sounds, many will still find it hard to believe that the future might not hold much of a return on the commodities currently topping the list of most-wanted assets.

Artificial intelligence has gained so much momentum and popularity that probably the most significant observable result has been fear of what it can do and how it can overtake humanity.

This abstract way of envisioning a technological or societal change is basic human psychology when little is known about the result of an adaptation. Whenever we go from a quantified result set to an unquantified one, chaos and chaos-impending thoughts start ruling the brain.

Well, my point is that we really need to open our eyes and start quantifying the use of artificial-intelligence-enabled assets. That means taking proactive measures to control the capability: governing the result set to stay within the quantified, so-called 'happy realm' of the result horizon, rooting out negatives and negative result sets, and establishing methods to study them so they do not propagate further.

In short: using AI to understand AI result sets and make sure the capability grows towards the needed outcome.

How is that achievable? The 1st step towards any artificial intelligence approach is choosing the right algorithm.

This is the 2nd stage in the development of an artificial intelligence ecosystem.

The 0th stage can be training - there is nothing intelligent or artificial about it. With the training in place, a machine comes to know relationships, and the implication or intent that results from those relationships.

The 1st stage comes when those relationships are applied to newer inputs which haven't been seen before. This is where the machine tries to bring an unknown result set within the realm of a known outcome and then establish a relation which maps to that outcome. It can do so in many ways; one might be asking the right questions. Once the realm is established, the machine is able to learn the outcome and the process of achieving it, and to apply it to new inputs and result sets.

At this stage there should be a monitor keeping the mapping and learning accurate. You can easily learn wrongly if you get an affirmation on a process or result set which doesn't hold in real life; a malicious program can trick you into believing the wrong result set is the right one. So monitoring while training is highly essential - and it is probably not being done accurately today.

The 2nd stage starts when you have mastered the capability to learn the result set of an application with 99% accuracy relative to the real world. This is where the machine upgrades from being just a learning and application engine to choosing definite algorithms that streamline the process and its patterns. It is particularly interesting because the right algorithm - one which works 95% of the time - can lay out a sequence of optimised decisions whose return is much higher than any competing algorithm's. So the key is first to get the 1st stage accurate, and then quickly advance to the 2nd stage before prediction models start changing. Bear in mind there will be negative algorithms working just to establish uncertainty in the evaluation process, so the other key is identifying and watching out for those algorithms.

The company or person which possesses the best-ranked algorithm might earn a better return and profit much more than any other. In a future where everything is done using efficient algos, the question comes back to: who owns the best algo?

Seems confusing? Let's take a very simple example - we all get emails. I get hundreds of emails a day; maybe I don't want to look at all of them, so I will use my algo to surface the emails which really matter to me.

Let’s say I have 3 kinds of email assistants operating on 3 different algo’s - to read me out the emails which ‘matter to me’ - I would probably go with an algo which gives me the most optimised output which matters to me - 5 emails is good, 4 is better or maybe 10 might be best - well, this is where true AI will decide what really works for me.

This is just a very small example and a short glimpse - the capacity and capability here are humongous… I will cover more examples in my other posts, so stay tuned.

Sunday, December 9, 2018

Why voice technology needs to start thinking about blockchain..

Voice technology is becoming the new realm in the world of emerging patterns for interacting with computational systems to obtain results quickly.

Major advantages of voice include free interaction without a dedicated input device - which means that unless you want to talk and do something else at the same time via talk, you have nothing to lose.

But the key question today rests on 2 major observations -

1. Voice doesn't have a common framework - unfortunately it's true. We live in a world of competitor-governed architecture, and that works against establishing a common framework suited to voice.

This in turn introduces delay in making changes, and also prevents changes from happening seamlessly across all voice-supported systems.

2. We currently live in a world of insecurity, caused by the ease with which someone can present themselves as the owner when they are not. Security has become the prime concern, and with voice systems the challenge becomes bigger.

Current voice artefacts can be replicated and processed with variations; consider a program using that to trigger a hack and cause complete disruption of systems. Voice biometrics have a long way to go, but more important here is a ledger of the transactions triggered via voice, which lends trust to each transaction invoked.

This is where blockchain comes into the picture: making sure each iteration or change in the received voice pattern is managed via blocks recorded over a period of time.
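As a minimal sketch of that idea, assuming each accepted voice command becomes a block in a hash-linked ledger (the field names here are invented for illustration):

import hashlib, json, time

def make_block(prev_hash, payload):
    # the hash covers the timestamp, the previous hash and the payload
    block = {"time": time.time(), "prev": prev_hash, "payload": payload}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    # recompute every hash and check each link to its predecessor
    for i, b in enumerate(chain):
        body = {k: b[k] for k in ("time", "prev", "payload")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != b["hash"] or (i and b["prev"] != chain[i - 1]["hash"]):
            return False
    return True

chain = [make_block("0" * 64, {"speaker": "alice", "command": "unlock door"})]
chain.append(make_block(chain[-1]["hash"],
                        {"speaker": "alice", "command": "pay electricity bill"}))
print(verify(chain))  # True - until someone tampers with a recorded command

Any replayed or altered voice transaction would have to rewrite every later block, which is exactly the tamper-evidence the ledger buys us.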

As this technology is new, it is easy to implement this fundamentally and enforce it in the architecture, but it also means the major firms have to understand the implications so that adoption is quicker.. it's high time voice starts thinking blockchain..




Sunday, November 11, 2018

Think it's high time for a new decentralised social networking platform...

Since most of the world spends a lot of time and energy on it.. let's give it a few minutes and talk about it.

Put briefly - the social engines of today are but a basket of algorithms feeding the subject an adequate dopamine factor to encourage continued use.

In simple terms this means algorithms drive the user interaction, giving the user a probabilistic set of outcomes which allure them into taking an action that keeps the site ticking. An excellent approach to earning time - which in turn earns money - by giving the user a quick dopamine hit.

So in short we trade our time for that short burst of dopamine... and that's it, to be very precise.

The positive of this: a person with a low energy level might feel good after more interaction with the site.

The negative: the feel-good feeling is essentially virtual - a non-existent feel factor which gives a short-term confidence boost but long-term subconscious jitters and symptoms of deep depression.

On top of this, mental focus is gradually destroyed and there is a huge loss of time.

And that's just the time you spend directly on a social networking site, not the time spent on it indirectly - for e.g. preparing an artefact to post on the wall.

Now, after going through the above, the question is: how do we make this useful?

Which goes back to the basic question of reducing manipulated content.

No, we are not talking about content control in existing social networking frameworks.

We are basically talking about creating time-controlled content with decentralised artefact control - in simple terms, applying the concepts of blockchain to the social networking world.

How does this work?

An artefact that is posted, or subject to a behavioural action, accrues a content-authenticity and confidence score along the timeline of its existence, and that total defines the artefact's position, in terms of importance, on your timeline.

So basically it does 2 things -

1. It reduces erratic, short-term-focused behaviour around the least useful stuff, which improves the time-spent-versus-return factor as it applies to your daily life - so at the end of the day we get a significant return for our use.
2. It prevents content manipulated for the current timeline from provoking a subsequent action in favour of a particular participant, because the provocative material does not in itself constitute the artefact which draws the attention factor - its composition along the timeline of its existence does.

So in short, it prevents the algorithms from manipulating mass behaviour across a group of people using the social networking platform.
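A toy sketch of such a scoring function, assuming each artefact carries a verified history of (timestamp, authenticity, engagement) events - the names, weights and half-life below are invented for illustration:

import math, time

def artefact_score(events, now=None, half_life_days=7.0):
    # events: list of (timestamp, authenticity, engagement) tuples, where
    # authenticity is 0..1 as attested along the artefact's recorded timeline
    now = now or time.time()
    score = 0.0
    for ts, authenticity, engagement in events:
        age_days = (now - ts) / 86400.0
        decay = 0.5 ** (age_days / half_life_days)   # a recent hype burst fades
        score += authenticity * math.log1p(engagement) * decay
    return score

day = 86400.0
honest_post = [(time.time() - 30 * day, 0.9, 50), (time.time() - day, 0.9, 60)]
manipulated = [(time.time() - day, 0.2, 5000)]       # huge burst, low authenticity
print(artefact_score(honest_post) > artefact_score(manipulated))  # True

Because authenticity multiplies engagement, a manipulated burst of attention cannot outrank content whose whole timeline checks out.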

This is an interesting area with some ongoing research, and the engines are still heavily dependent on blockchain concepts, plus artificial intelligence that identifies the neural models needed to associate the correct content with an appropriate score.

So stay tuned for more on this..

Tuesday, September 4, 2018

Why blockchain... why now?..


The primary concept of a blockchain pertains to a chain of records (a ledger) which captures transactions made in a cryptocurrency, with the primary functions to:

 A) publicly & chronologically record the transactions, &
 B) decentralize the ledger

In such a system, every node or participant in the transaction gets a copy of the blockchain. At the end of the day, what doesn't change is that the transaction still occurs between 2 parties - one on the 'initiating' end and another on the 'receiving' end.

Well, the first question which comes to mind is... why do we need blockchain? Why introduce more complexity into a system which is already quite complex? Think of it: a transaction between two participants across 2 different regions of the world can already account for so many sub-ledgers, reconciliations and connected pieces being updated - why introduce another element here?

So - the reason is simple - 'time is money' - and the question really boils down to whether:

a) introducing a blockchain would reduce my time to carry out the transaction?
    The answer here is simple - yes - because to establish authenticity you no longer have to go through 'n' different channels; you look at one blockchain record and get that answer.

Now, point a) above leads to point b): if the time is reduced, is the security of the transaction compromised?

b) Making the ledger public means that no single entity holds control over the authenticity of the ledger - a group of entities does - and that makes it far more difficult to corrupt or change the ledger by any means during the transaction.
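A toy illustration of why that is, in Python: every participant holds a full copy of the ledger, so corrupting one copy achieves nothing - an attacker would have to alter a majority of the copies. (The ledger entries and node names are made up.)

from collections import Counter

ledger = ["alice->bob:5", "bob->carol:2", "carol->alice:1"]
copies = {node: list(ledger) for node in ("n1", "n2", "n3", "n4", "n5")}

copies["n3"][0] = "alice->mallory:5"   # one corrupted participant

def consensus(copies):
    # the version of history held by the most participants wins
    votes = Counter(tuple(c) for c in copies.values())
    best, _ = votes.most_common(1)[0]
    return list(best)

print(consensus(copies))   # the honest majority's ledger prevails

Real blockchains add cryptographic linking and proof-of-work on top of this, but the decentralisation argument is the same: no single entity's copy is authoritative.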

This is like the opposite of the old saying that if I keep my system out of the cloud, I am more secure just because nobody is watching me. Think on it, because anyone can argue otherwise: if you keep your system in the cloud, the network ensures it is handled by the best and the latest security - so you may be the one on the losing end if you are not on the cloud when a compromise comes.

So basically, at the end of the day, we are achieving a reduced transaction time-frame over a more secure channel - seems like a win-win. But there is a cost here: in order to take advantage of the blockchain, the accounting needs to be done in a cryptocurrency, which can be any cryptocurrency - e.g. bitcoin, ethereum, dogecoin, etc.

So we answered the first question - 'Why blockchain?'. Now let's look at the second aspect - 'Why now?'

The whole idea of blockchain comes down to providing the above 2 advantages while making the system more and more 'decentralised' - which means you don't have to be a major participating body to exercise control or take advantage of the system; you can be a small entity and get the same benefits a bigger, established entity would receive. This idea fits very well with modern-day technologies. Let's take an example: so much solar power is generated, yet distribution still doesn't match consumption at the point and time it's needed - meaning where it's needed, when it's needed. How do we solve that problem? Use 'internet of things' devices that talk to each other, and let them do the math over a secure channel of distribution - which is blockchain. That gives quick decisioning and makes the distribution much faster.

The one fundamental difference when transacting on such a channel or network is the ability to travel back in history to the point when the record was first created. This gives authenticity and knowledge about the accuracy and worthiness of the record, so that when the dealing happens, we are sure the price paid is a fair price.

Well, now that we have some idea of 'why blockchain' and 'why now', let's carry on to understand the 'how' and 'when' aspects.

As of today, markets may be stable or very unstable; they can be doing perfectly fine in one part of the world while being perfectly erratic in another. Certain powers may govern the flow of financial elements from one region to another one day, and completely different ones the next. In such an indeterministic world, with a trillion complex plausibilities, which ones do you trust? Well - before I say it, you might have the answer - 'blockchain'.

But primarily the idea of blockchain is to reap some of the advanced concepts in technology - be it AI, or the internet of things working along with AI - where a decision can rest on a blockchain record, and it would take a tremendous amount of effort and time for even a highly complex AI cyber-security engine to replicate or alter the chain... notifications would follow very quickly. Because the speed of human thinking is slowly being out-powered by the machine, it's important to consider the possibility that your usual data security might not be effective enough... so the question basically boils down to 2 factors - when do we start thinking seriously about it?

... or when is it too late to start thinking about it?





Tuesday, August 28, 2018

Is voice technology... still trying to establish itself?


There is both a philosophical and a promotional aspect to the voice technologies present today, but the more disheartening fact is the absence of a unified framework.

Unfortunately, we are still bound to the product driving the technology, rather than the technology driving the product.

Let's not mistake this for the fact that every quality product has checks in place to allow or disable inefficient use of the product - we are not talking about that.

I still firmly believe that each quality product should have its own framework to safeguard the use or extensibility of that brand - but where is the common standard?

Well, to be precise, there has always been a common standard - be it at the start of the computer age via DOS, Windows or Unix/Mac OS, or at the start of the mobile age via Android and iOS. These frameworks have made sure that desktop, web and mobile-app points of access have certain well-defined frameworks in place.

Now let's look at voice. As of today, each brand - be it Apple, Google or Amazon - provides its own framework, even though voice in itself has certain commonly used concepts, like the 'intent'.

So a programmer has to learn, or work around, each framework and its limitations.
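To see what a common standard could even look like, here is a hypothetical sketch in Python of a vendor-neutral 'intent'. Today each platform (SiriKit, Dialogflow, Alexa skills) defines its own incompatible version of roughly this structure; every name and field below is an assumption, not an existing API:

from dataclasses import dataclass, field

@dataclass
class Intent:
    name: str                                  # e.g. "BookRide"
    utterance: str                             # the raw transcribed speech
    slots: dict = field(default_factory=dict)  # extracted parameters
    confidence: float = 0.0                    # recogniser's confidence, 0..1

ride = Intent(name="BookRide",
              utterance="get me a cab to the airport at 6",
              slots={"destination": "airport", "time": "6"},
              confidence=0.92)

If every voice platform consumed and emitted something like this, a skill written once could run everywhere. Until then, a programmer rewrites roughly this structure per platform.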

This really diminishes the effectiveness of the technology and its application to areas where it could make a real difference.

I see voice typically being used as the prime driving source in environments with limited mobility - space, for example - but unfortunately the systems we have developed using voice handle only very basic tasks.

It's still a long way to go... but the absolute minimum is a common framework...




Wednesday, August 22, 2018

Looking into the future with context aware technologies



Technologies are changing the way we live our daily lives.. ever wondered how context-aware technology can help?

Let's take an example. Cars are still a preferred means of commute: we drive to crowded places, park the car, go do our chores - and, say, when we return, find that somebody has dented the car. While we were away, there was no way to know who did it..

..but wait a second ... that's where IoT comes into the picture. Your car's co-ordinates are sent back to satellites the moment the car is stationary. Say someone was backing out of a car parallel to yours and hit it in the process - now how will you know who it was..

.. let the tech do that for you... how? .. well, the moment the car is parked, its co-ordinates are sent along with the co-ordinates of the cars next to it.

Once a collision happens, the cars next to yours will automatically report the incident; if not, you at least have a record of the probable cars which might have hit yours, so tracking becomes feasible and easy.
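A toy sketch of that flow in Python - the sensors, message formats and time window are all invented assumptions, just to show the shape of the idea:

import time

class ParkedCar:
    def __init__(self, car_id, position):
        self.car_id, self.position = car_id, position
        self.neighbour_log = []                 # cars seen nearby while parked

    def record_neighbour(self, other_id, position):
        self.neighbour_log.append((time.time(), other_id, position))

    def on_impact(self, window_secs=300):
        # shortlist the cars recorded around the moment of impact
        cutoff = time.time() - window_secs
        return [entry for entry in self.neighbour_log if entry[0] >= cutoff]

car = ParkedCar("my-car", (48.8600, 2.3500))
car.record_neighbour("car-A", (48.8601, 2.3502))   # parked parallel to us
print(car.on_impact())   # probable culprits present around the impact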

This is just one scenario, and there are many more like it .. but as said, we are far, far away from building something which works seamlessly. To get there we have to move past the basic, glittery, not-so-useful stuff like switching instruments on and off remotely (that's not technology - at least not for today's age). Something like a light switching on when it detects your presence - that's more modern-day stuff ... getting AI to examine and implement the behaviour for you, with confidence.


Sunday, July 22, 2018

Are we living in a technologically neolithic age?

When I sit aboard a flight, or use an electrically powered mobile phone, it sometimes comes to mind how far we have advanced in terms of technology.

Flight - invented in 1903, more than 100 years in existence - and we still use primitive methods of take-off and landing; airspace is still relatively underused.
Electricity - the light bulb dates to 1879, more than 100 years ago - and there is still no practical wireless electricity, although Tesla demonstrated the concept long back.

Well, my point is: progress in technology is currently limited by what big firms or forerunners choose to provide as a service for a cost. Innovation, although going on in a number of places, doesn't get the limelight it deserves, because as consumers we are too dazzled by a camera on a costly phone taking a better image, or by a social engine posting our update to the world, to notice the real value a technology advancement can provide.

Everyone talks about security these days, but fun fact: a leading top-notch security company still asks a valid user for their password 10 times when they try to access its engine, instead of analysing whether the circumstances actually suggest a compromise where this would be a valid security concern (btw, I am a huge fan of this firm's products, but as a consumer this experience really breaks my respect for the firm's security intelligence). Why can't the system's inherent security ecosystem identify whether the access location changed? Are you providing me security or tracking my usage - that's the question to answer here!

User data tracking has become the norm for any technological undertaking. The issue is that, with such a large amount of tracking and data, 80% of the user's time goes into offered products which don't make real sense to the consumer.

Sitting in a house, or sitting away from it, and being able to control the switches and devices is not technological advancement - it's just a child's program which should have been given to users way back. We are still very primitive in terms of ecosystem understanding and decisioning, and that is what makes life uncomfortable. Sometimes I wonder why I need to switch on the lights every time. Can my technology not take care of it - switch them on when I am near, preserve electricity by turning on solar, or provide a notification when the weather is bad? Basically, notify me of an event only after understanding its ecosystem confidently, rather than just sending me notifications which make my day tougher to get through.

Sometimes half-baked technology can prove dangerous - and I am sure there will be legal disclaimers to protect the technology - but systems built to prevent a user from having a negative experience can become the exact cause of that experience. For e.g., blocking a user's access to phone functionality while driving, without providing that access correctly via other channels like speech; a system unable to understand you speaking while driving can be equally dangerous. No, sorry, but stopping you from getting what you need is not a solution here. Technology should be able to understand the user correctly - but alas, it's not advanced enough.

I think some of the major runners and big firms in technology have to make big efforts and take big steps towards a technologically favourable world - which looks more appealing to provide than just creating user markets and business revenues.. it's not until then that we will stop living in a business-advanced but technologically neolithic age.

Tuesday, May 8, 2018

Key take-aways from the Google I/O 2018 keynote..

As we start the new technological year, Google revealed some key advancements yet to be explored during the Google I/O 2018 keynote.

Below are some which appear to be interesting...


1. AI was the key 

  - some fields, like healthcare, were key to the AI roadmap, although there is much more scope for AI in learning behaviour for day-to-day activities around human interactions.

One nice feature was recognising a picture containing images and words and creating a PDF from it, even if the picture was not taken in a horizontal plane.

Also, AI is being used to help the differently abled with a mechanism to communicate more effectively.

2. Voice - understanding the voice and providing mechanisms to reproduce it for a particular user.

- one good feature was the assistant's ability to make a phone call on a person's behalf and then complete a task - for example, booking a reservation at a resort - without the user interacting during the call.

- companion apps get a better communication mechanism to interact with the assistant.

3. Google News

- was redesigned to provide a news experience focussed on facts, with a timeline for each story and the user's choice of news feeds.

- any paid feeds have built-in integration to allow access without multiple payments across each news channel.

4. Android P - 

- providing key features like understanding the user's behaviour and use of the phone.
- providing a cleaner home menu - 5 predicted apps.
- providing the time spent on each app.
- providing a slider to go back to the earlier app.
- integration to provide key capabilities with a selection - for e.g. select a video phrase and then watch it on YouTube.
- also cool was the feature to put the phone face down and mute all notifications.

5. Machine Learning & Google Maps.

- ML has undergone major changes, including adaptive learning to understand different dialects, comprehending the available imagery, and invoking actions based on feedback across the vast neural network Google uses.

Maps has been updated to give directions with added visual interaction, showing an image of the turn and giving the user exact detail to visually ascertain the direction - it's pretty cool and was much wanted.

Store locations have been updated to show timings based on the actual day, ascertained in some cases via robots calling the store - which is cool again.

6. Google Lens 

Google Lens is integrated into a number of devices and now provides a mechanism to capture a photo, convert the image to text immediately, and let you interact with it - copy it, paste it or share it almost instantly.

Lens combines the mechanisms of seeing and writing in one single shot, which unlocks a bunch of capabilities in this space.


7. Self driving cars

The self-driving-car partnership with Waymo provided a deeper dive into how driving feedback is perceived in order to make decisions for a car on the road - from understanding the collision course of a speeding car running a light, to distinguishing people and objects on the road to accommodate safe driving.

There is more to come with each talk on all the sessions in the next two days ... so stay tuned!


Monday, April 16, 2018

Answer these questions before you opt for a costly & heavy vendor solution..for your IT function..



  • Do you really need every feature provided in the vendor solution?

  • Are you running a mission-critical system, and is the vendor providing quick and accurate resolution?

  • Do your applications require complex to very complex logic?

  • How much business risk does the deployed functionality cover - what's the risk-factor quotient?

  • What risk-to-cost factor is attained by spending that amount on the solution being considered?

  • Is there an open source solution available for the same functionality, and does it scale for your needs?

  • Down the line (3-5 yrs), what is the return-on-investment ratio of the costly vendor solution compared to the open source one?

  • What is the flexibility-to-change ratio of the vendor solution versus the open source solution?



For most organisations - all except a few 5-10% - the answers are simple and straightforward, and so is the solution: aim for a flexible option at little to no cost. The question is: are you spending more time, money and energy on a vendor lock-in which is slowing down your product's progress in a rapidly evolving world?

Well, firms like Apple decided to make sure they have more flexibility and can change quicker by choosing an in-house product which scales according to the company's needs (like opting for their own processor).

It might be time for others to follow.

Sunday, April 15, 2018

The B word - 'Blockchain'.. and how will this probably apply to medicine & healthcare..

A blockchain, in simple words, represents a single source of truth.

It can represent a record of human exchange at a particular point in history.

So what is blockchain here?
It's a generalised framework for implementing decentralised compute resources.

In simple terms :

    A blockchain is a digitized, decentralized, public ledger of all cryptocurrency transactions. Constantly growing as ‘completed’ blocks (the most recent transactions) are recorded and added to it in chronological order, it allows market participants to keep track of digital currency transactions without central recordkeeping. Each node (a computer connected to the network) gets a copy of the blockchain, which is downloaded automatically.

Let's understand via MedRec - a case study done at MIT:
  • The block content represents data ownership and viewership permissions shared by members of a private, peer-to-peer network.  
  • Each record includes a cryptographic hash of its contents to guard against tampering, thereby indicating the integrity of the data.
  • Providers can add a new record, and patients can authorise sharing of records between providers.
  • The Ethereum blockchain employs a DNS-like implementation which maps an already existing and widely accepted form of ID (e.g. a name or social security #) to the person's Ethereum address. A syncing algorithm handles data exchange "off-chain" between a patient database and a provider database, after referencing the blockchain to confirm permissions via a database authentication server.
A chain can be composed of the below:

A) Registrar contract

    - a name, e.g. Sam Smith, mapped to an Ethereum address

B) Summary contract

   - Sam's Ethereum address, PPR addresses

C) Patient-Provider Relationship (PPR) contract

  - owner, EMR queries, permissions, mining bounties.

A blockchain can comprise a mix of A, B & C over a period of time.
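As an illustration only, the data each of those three contract types holds could be modelled like this in Python - a sketch of the structure described above, not the actual Solidity contracts (the addresses are placeholders):

from dataclasses import dataclass, field

@dataclass
class RegistrarContract:            # A) identity mapped to an Ethereum address
    name: str
    ethereum_address: str

@dataclass
class SummaryContract:              # B) a patient's list of relationships
    patient_address: str
    ppr_addresses: list = field(default_factory=list)

@dataclass
class PatientProviderRelationship:  # C) who may run which queries
    owner: str
    emr_queries: list = field(default_factory=list)
    permissions: dict = field(default_factory=dict)

sam = RegistrarContract("Sam Smith", "0xSAM")
summary = SummaryContract(sam.ethereum_address, ppr_addresses=["0xPPR1"])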

A sample implementation may be composed of the following:

MedRec service --> DB gatekeeper --> EHR manager --> Miner -- to mine data on the blockchain

The MedRec service receives a request for data, contacts the DB gatekeeper after checking with the EHR/Ethereum manager, obtains the contract of the requesting provider or consumer, and then uses the Miner to mine the query over the blockchain.

Quoted from the case study:
'As envisioned by the Precision Medicine Initiative (PMI), the MedRec patient record would reflect the many facets of health data, by accepting not just physician data, but also data from the patient’s Fitbit, Apple HealthKit, 23andMe profile, and more. Patients can build a holistic record of their medical data and authorize others for viewership, such as physicians providing a second opinion or family members and care guardians.'

The MedRec smart contract structure can represent one model of a healthcare care directory and resource location, secured by public-key cryptography and endowed with the crucial properties of provenance and data integrity. A blockchain log provides clarity for communication authorisation across the health-IT ecosystem and an audit trail for subsequent inquiries.

Some of the world's best healthcare firms have already started supporting the above model to understand its impact, and to include Ethereum smart contracts that orchestrate a content-access system across separate storage and provider sites.

Blockchain will find its use, and continued growth in usage, as data security and autonomous data governance come more into practice. It promises to be a prime form of data exchange when virtualised access bodies - enhanced artificially intelligent agents or avatars - start transacting on behalf of the provider or client in the future. The blockchain transformation is gradual, but it might pick up speed once autonomous governance strategies require it to be the authenticating and authorising entity for any transaction request in timeline history.

Stay tuned for more updates on this ...




Sunday, April 1, 2018

Angular 6 ... what's coming in April ... a look !

Angular 6 will see a stable release this month of April; let's take a look at what it will entail..

  • Lazy load template URLs  - adding resource in
  • Node 8 runtime engine support which goes in with TypeScript 2.7

  • More testability - the Testability API will get the time-outs function and add task tracking; this is useful to design around performance and decide which tasks keep running and which can be aborted - intelligent device RAM usage.

  • Want more aggressive optimisations? Go for side-effect-free flags.

  • Have injectables change scope in one go -- tree-shakable provider API updates.

  • Create custom elements based on angular components.

  • A few things will no longer be available: the animation import from core goes away, tslib 1.9.0 needs to be updated in package.json, and the template tag is deprecated and replaced by ng-template.

Good news ... angular 6 is completely backward compatible with Angular 5.


Some interesting new features - 

1. IVY renderer - a backward-compatible Angular renderer focussed on speed improvements, size reduction and flexibility.
Read more here ... https://herringtondarkholme.github.io/2018/02/19/angular-ivy/

I like this part ... 'As a platform independent framework, can we run application without platform specific code? The answer is NO, of course. Ivy just inlines DOM Rendere to its core. '

2. Bazel compiler - why build everything when you just changed a speck of it... welcome to the Bazel world.. built code fragments have ingrained reachability information, so the build understands which component changed, or changed in a way that requires a rebuild.

3. Closure compiler - and what about code that didn't change and was never used? The Java world has run-time garbage collection; the ng world is still on its way, but has introduced something which eliminates dead code, generating smaller bundles.

4. Component Dev Kit - you can use pre-built components in the angular world instead of building from scratch.. 

wanna know more ... find out here.. 

https://blog.angular.io/a-component-dev-kit-for-angular-9f06e3b4b3b4

5. Service worker - shipped with a stable version of the worker.. a browser-level service cache.

Some other interesting changes are .. 
  • Multiple validators for the array method of FormBuilder.
  • Handling strings with and without line boundaries.
  • The router's navigation events now carry the navigation source and restoredState, to indicate how a navigation was triggered.

Thursday, March 22, 2018

Deep learning and the next stage of evolution as we would like to see it..

Let's start with --> deep learning, looking at Ray's perspective.

According to Ray K, artificial intelligence basically rests on the 2 fundamental axioms below:

1) Many layer neural nets
2) Law of accelerated returns.

So in turn you can have neural nets a) do the job for you, and b) do the job better, by taking the output from one neural net and feeding it as input into another.

Currently, as it stands, the intelligence level of a machine is estimated to lie between the simulation of one insect brain and one rat brain (we are still not there yet - hard to believe, but it is true!).

But having said that, it won't be long until a machine can simulate not only a rat's brain but one human brain - in another 10 years - and nearly all human brains in another 40 years.

This is the growth curve, based mainly on the 2nd axiom - the law of accelerated returns.

Now let's talk about the journey here which is a bit complicated.

Cognitive science - the study of how subjects think - has always been a subject of interest to many, not just today but since the late 1800s and early 1900s.

There are two things that play a key role in ascertaining cognitive behaviour and getting a correct result -

1) Context interpretation - which derives from the learning plus the training.
2) Derivational behaviour - which makes decisions on the unknown based on confidence established in the known.

The 2nd one is difficult to achieve until the machine is fully capable of interpreting the context with close to 98% probable correctness.

Let's take a simple example to understand both the above points -

Consider an image in which a group of people are playing cards in a circle; 2 people are smiling at each other, some are thinking, and others are confused.

If I look at the picture as a human, I might be able to gauge that the person next to one player is wearing glasses whose reflection gives the card details away to the person in front, which makes the two of them smile because they know what move to make next; the others are thinking about what move will be made, as they don't get to see the cards, and some are looking at these people and are confused.

So there is a lot happening here - and the human mind is capable of -

a) analysing the emotional quantum of each person,
b) deciding what kind of emotion is conveyed,
c) connecting the emotions to find deeper insights in the picture, like the glasses worn by the person,
d) interpreting what that means for the person sitting in front,
e) contrasting the collective smiles against the single person smiling, and then
f) deriving a state out of the picture.

So this is just a small example of how simple images can be interpreted correctly - an image is nothing but a glimpse of a state at one point in time.

As of today, a) & b) might be possible to some extent, but c), d), e) & f) are a whole other gamut which has not yet been explored.

An image of this nature will, at best, lead to the label 'A group of people playing cards' - or, if not at best, 'A bunch of people sitting at a round table'.

Now imagine this being a video instead of a picture, which makes it much harder to examine, because we have put time into the picture and need a vaster neural net to interpret it correctly.

What I just talked about can be summed up as a mix of context interpretation and trained behaviour analysis. The 2nd part - derivational behaviour - is more along the lines of decisioning, for which step 1 is context interpretation. Say, for e.g., there is a child which learns to walk, goes around a path, and notices some objects; before seeing an object again it knows there is an object at that place and decides to change its course.

Well, what I just talked about leads to developing architectures in AI which enable cognitive deep learning.

One such architecture can be based on the Bayesian family of probabilistic models of cognition - one of the all-time topics of interest in Josh's deep learning approaches.

But in simple terms these can be broken down into the below -

Visual Stream --> Learning --> Training --> Cognitive Analysis --> Behaviour --> Result --> feedback to learning.
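The 'Cognitive Analysis' step above, under a Bayesian view, is just belief revision as evidence arrives. Here is a toy sketch on the card-game example - all the numbers are made up for illustration:

def update(prior, likelihood_if_true, likelihood_if_false):
    # standard Bayes rule for a yes/no hypothesis
    joint_true = prior * likelihood_if_true
    joint_false = (1 - prior) * likelihood_if_false
    return joint_true / (joint_true + joint_false)

belief = 0.1                        # prior: collusion between players is unlikely
belief = update(belief, 0.8, 0.3)   # evidence: the two exchange smiles
belief = update(belief, 0.7, 0.2)   # evidence: reflective glasses next to a player
print(round(belief, 3))             # posterior: now quite plausible (~0.51)

Chaining such updates over a stream of observations, and feeding the outcomes back into training, is exactly the loop the pipeline above describes.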

Well, we talked about deep learning and cognition and discussed architectures around it.. so why are we doing this?

Let's imagine a simple futuristic scenario where I am interested in simple things. For example: a bot which helps me with my mail - it goes through my mails and surfaces the ones with a strong intent of action; or a small robot which adapts to my daily home chores and does them when I am not there; or a program which understands a functional need and derives the best possible code based on that need and its probability in the near future.

Well, these are just some very, very simple examples of what AI can do for us.. but we still have a long way to go to even get to this level.

In our next topic, we will talk about the cops - that's AI governance - and the need for a governance model, along with singularity and ethical AI.


Tuesday, March 20, 2018

AI evolution and its outcomes - is it still artificial?

AI evolution - in order to understand this, first let us consider: what is AI?


Artificial intelligence is the intelligence demonstrated via a machine in contrast to the natural intelligence demonstrated via humans.

I am sure everyone is aware of the Turing test - the experiment in which a human and a computer are judged on their responses to test for a truly intelligent machine.

Now let's move ahead to understand where AI originated from and where it stands today - 

Neural networks, and the calculations and logic underlying AI, have been around for a long time, but AI was initially limited in its decisioning by the available computing power and resources.

Hence true decisioning with close-to-human capability was never achieved. With the evolution of connected computing and cloud resources, along with quantum computing, AI got a bigger engine for its decisioning logic.

Imagine a Saturn V rocket thruster compared to a standard economy-class car with a few horsepower - that is the level of boost achieved by the kind of compute power now available for a network to arrive at a decision.

Along with this, the earlier storage-capacity constraint went away with virtually unlimited storage availability.

Because of these two factors, AI grew from an infant which understands nothing and speaks only a few words into a full-blown adult with every decisioning capability.

Now that we have covered the journey, let's look at the situation today - what is AI capable of now and in the near future?

As of today, computers have been able to out-deduce and defeat humans at some of the most brain-challenging games - AlphaGo at Go, for instance - but it doesn't end there. Big firms are currently working on software which acts as a central brain and takes advantage of AI capabilities - coming close to humans, and sometimes proving more accurate than humans, at identification and decisioning.

They follow the simple cycle - 

Train --> Deduce --> Decision --> Feedback --> Retrain --> ... 

..and this cycle continues. Thereby they evolve into entities able to decision better and better, building on their experience with each decision. This means three things -

1. They think faster.
2. They improve their decision capabilities.
3. They don't have emotional qualms which can draw the decision back at times.

What this means is accuracy and precision coupled with focussed results - which, used correctly, gives a big gain or profit to humankind. But then it comes at the expense of humans following what the computer predicts, which is a bit tricky, because once the system understands the 'good' and the 'bad', it starts to understand the dependency curve --

-- the system needs to be good at decisioning in order to serve humans better

Where does the thin line lie between taking a decision for its own betterment in order to serve humans better, and a decision not to serve some humans in the best way? This is like a human thinking about itself - and that is where it starts to get a bit muddy.

What should be done to make sure the above doesn't happen?

Currently humans have oversight of the result of a decision made by an AI or neural network, but no vision of how that decision was reached. To get it, humans should become, by whatever means possible, a part of that decisioning and feedback process.

This can be achieved in many ways - one is to implant a chip which provides that feedback to humans; another is to install human thought decisioning on the AI as an end result.. this fine line is necessary so that the balance is still maintained...

With the speed at which neural processing is changing, it is going to exceed human decisioning capability very quickly, so the above measures should serve as a good stopgap against anything which might go in harm's way.


Sunday, March 18, 2018

How to take advantage of readily available APIs with AI capabilities

Folks - in this article we will explore some very useful APIs provided by major companies investing in AI.

The earlier post was about establishing our own custom-made AI and deep-learning code base; this tutorial is more about well-established concepts packaged as APIs which are available to be used directly in code.

Let's go through them one by one - this post covers the Google APIs -

1) Vision - analyse images and get information about them.
2) Speech - convert speech to text to drive automated, context-aware responses.
3) Video - provide search capabilities within a video.
4) Natural Language - understand written text and its sentiment in the context in which it was produced.
5) Translation - translate text in relation to the context of the language you are converting to.


Initially, if you just want to check them out with some test trials, these are free; if you use them heavily, Google does charge you - but good news, you get some credit the first time you start using them.


How does it work?

Firebase hosting --> Cloud Storage --> Cloud Functions --> API in question (e.g. Vision API) --> Firebase Database

1) The Vision (image) API is really cool - it scans an image and provides details: whether the image is of a landmark or a person, and if a person, what their mood is, etc.

For e.g., the JSON response from the image API might look like -



{ "responses": [ { "faceAnnotations": [ { .... { "type": "RIGHT_OF_LEFT_EYEBROW", "position": { "x": 965.15735, "y": 349.91434, "z": -7.9691405 } }...
            {
              "type": "UPPER_LIP",
              "position": {
                "x": 960.88947,
                "y": 382.35114,
                "z": -15.794773
              }
         
            ....
            }
          ],
          "rollAngle": 16.3792967,
          "panAngle": -29.3338267,
          "tiltAngle": 4.45867656,
          "detectionConfidence": 0.980691,
          "landmarkingConfidence": 0.57905465,
          "joyLikelihood": "VERY_LIKELY",
          "sorrowLikelihood": "VERY_UNLIKELY",
          "angerLikelihood": "VERY_UNLIKELY",
          "surpriseLikelihood": "VERY_UNLIKELY",
          "underExposedLikelihood": "VERY_UNLIKELY",
          "blurredLikelihood": "VERY_UNLIKELY",
          "headwearLikelihood": "VERY_UNLIKELY"
        }
      ]
    }
  ]
}

So with this API you can get details about the features or mood of a person; for a landmark or building, you can get details of the place - where it is and the likelihood of the identification being correct.
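For completeness, here is a minimal Python sketch of calling this API through Google's client library - assuming the google-cloud-vision package is installed and credentials are configured; the exact import path has shifted between library versions (older releases used vision.types.Image):

# pip install google-cloud-vision ; credentials via GOOGLE_APPLICATION_CREDENTIALS
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("face.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)
for face in response.face_annotations:
    # the same fields as in the JSON sample above
    print(face.joy_likelihood, face.detection_confidence)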

2. Video API -

This API can search within a video for a particular frame containing a given element. For e.g., if there is a cricket match and you want to know when a catch was taken, you could just enter the details of the catch, and it will tell you, across all videos, at which points the catch appears - so you can jump to those locations instead of going through the whole video.

Sample response for label detection - 


 "segmentLabelAnnotations1": [
              {
                "entity": {
                  "entityId": "/m/01yrx",
                  "languageCode": "en-US"
                },
                "segments1": [
                  {
                    "segment": {
                      "startTimeOffset": "0s",
                      "endTimeOffset": "14.833664s"
                    },

3. Speech API / Translation API / Natural Language API

The Speech API is used in combination with the Translation API and the Natural Language API.

So, for e.g., if someone sends a text message saying 'does this time suit you - 9 AM?', the Natural Language API will understand the context and offer framed response options, for e.g. 'yes - it suits me' or 'no - it doesn't, I can suggest another time'. The Speech API gives you the option to record a response and convert it to text in that context, and the Translation API will let you translate it into another language - pretty cool, no need for an interpreter or a translator-and-response engine!


Sample response from natural language API -
  "sentences": [
    {
      "text": {
        "content": "Four score and seven years ago our fathers brought forth
        on this continent a new nation, conceived in liberty and dedicated to
        the proposition that all men are created equal.",
        "beginOffset": 0
      },
      "sentiment": {
        "magnitude": 0.8,
        "score": 0.8
PS - referred from google tutorials.

See the score and magnitude - these set the tone of the text.


How to get them installed and working?

You need to have the below in order to get the API's to work - 

Set up Node.js via nvm

  nvm install stable

Make the stable release the default

  nvm alias default stable

install yarn 

   curl -o- -L https://yarnpkg.com/install.sh | bash

  yarn --version

  yarn add express

deploy app to gcloud 

  gcloud app deploy


PS: Most of the examples are taken from the Google API docs, shortened for quick illustration.

Tuesday, March 6, 2018

A small explanation of Machine learning & trend analysis.. using TensorFlow

We will understand machine learning by looking at the most common library for ML - TensorFlow.

Machine learning is about making the machine understand how to interpret trends and provide close-to-accurate results, as a human mind would.

TensorFlow is a tool which helps do just that, and it works with Python, Java, C++, JavaScript and some other languages.

Tensor means : 

A mathematical object analogous to but more general than a vector, represented by an array of components that are functions of the coordinates of a space.


Before we go into the specifics, let's understand how to install and run TensorFlow on a macOS box.

You need to have Python installed before we start installing TensorFlow.

Step 1: Install pip and virtualenv

sudo easy_install pip
pip install --upgrade virtualenv


Step 2: Create a target directory for e.g.: 'tens-flow' and establish a virtualenv

virtualenv --system-site-packages tens-flow # for Python 2.7

Step 3: Activate the virtualenv
$ cd tens-flow
$ source ./bin/activate  
Prompt will change to the following -->
(tens-flow)$

Step 4: Start easy_install using pip

(tens-flow)$easy_install -U pip

Step 5: Install tensor flow

(tens-flow)$ pip install --upgrade tensorflow # for Python 2.7


Validate Installation - by the below steps 
$ cd tens-flow
$ source ./bin/activate  

Prompt should change to -
(tens-flow)$

After working and completion you can deactivate the tensorFlow 
environment by using below command.
(tens-flow)$ deactivate 

Okay so now your tensorFlow environment is up and running.
Let's check via simple tensorFlow heartbeat test. 
Type in the following commands and Python program

$ python

# Python (TensorFlow 1.x)
import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')  # a constant node in the graph
sess = tf.Session()                        # a session executes the graph
print(sess.run(hello))


Output should be : 
Hello, TensorFlow!

If above is true - we are all set!

Now let's continue with machine learning --> the normal data acquisition
and analysis process:

Raw data collected from the real world --> Data is processed --> Clean data

Clean data can be fed to exploratory data analysis, machine learning algorithms or statistical models, or sent onward as visualisations.

Machine learning algorithms can then be used to --> build a data product.


Okay so TensorFlow works on the below principles - 

First the graph is constructed

Training is done using the input variables

An Estimator is a high-level model abstraction - a basic linear regressor is one example - so let's break it down.

These are the steps for any model.
Estimator: train() --> evaluate() --> predict() --> export_savedmodel() => checkpoint.

You need to first train, then evaluate, followed by predict and then 
save a checkpoint state. 

So this workflow produces a checkpoint, which helps a distributed system synchronise when its workers restart.
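Here is a minimal sketch of that Estimator workflow, in the TF 1.x style this post uses - a toy linear regression on four points, just to show train() and evaluate() in sequence:

import numpy as np
import tensorflow as tf

feature_cols = [tf.feature_column.numeric_column("x")]
estimator = tf.estimator.LinearRegressor(feature_columns=feature_cols)

x_train = np.array([1.0, 2.0, 3.0, 4.0])
y_train = np.array([0.0, -1.0, -2.0, -3.0])

train_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_train}, y_train, batch_size=4, num_epochs=None, shuffle=True)
eval_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_train}, y_train, batch_size=4, num_epochs=1, shuffle=False)

estimator.train(input_fn=train_fn, steps=1000)   # train()
print(estimator.evaluate(input_fn=eval_fn))      # evaluate() -> loss metrics
# predict() and export_savedmodel() follow the same pattern; checkpoints
# are written automatically to the estimator's model_dir.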

So let's take a simple example - 

Train phase: 

For e.g., an algorithm which identifies apple varieties by looking at an
image or its characteristics.

Let's limit this to 3 types for now - 
  • Fuji, Golden Delicious & Granny Smith
First we develop a simple script which takes images of apples as inputs; we need to train it - so how do we do that? -->

Check the color of the apple - if the apple is red, there's a 95% probability it's a 'Fuji'.

If 'green' --> it is more likely a 'Granny Smith'.

So with each characteristic we have a set of values which determines how close to accurate the identification of the image is - for e.g. -->

Once we start evaluation, we compare the output probabilities against each set of data.

Predict phase: 

Once evaluation is complete, we can predict the result. Once the prediction is close to 100%, the checkpoint is established and saved.

Apple A can be either:

Granny Smith - 99%
Fuji - 0.1%
Golden Delicious - 0.9%

So TensorFlow helps do just that! Further, it can be used for A/B testing or predicting trends.

You can refer to the sample examples on the Google site to understand basic TensorFlow and ML training algorithms.

https://opensource.google.com/projects/tensorflow

PS: I have referred to Google tutorials and some videos by experts to paint the above picture.

We will discuss the APIs in my next blog, along with a more interesting TensorFlow example. We will also talk about other models for prediction - stay tuned.