Today almost everyone agrees, and can see, that top social sites are designed to push the user to spend more time on the site, with the goal of driving up hits and usage.
Data is a key player when it comes to building anything for a consumer, and if you want the consumer to reliably contribute a fair percentage of sales, getting the right data is very important. The primary objective behind the design goal above was to get as much data as possible from the user and, behind the scenes, create an image of the user - an avatar carrying all of the user's characteristics - and then experiment on that avatar for probabilistically favourable outcomes.
The means was simple: give the user something they would want so they stay on the site - an image, a notification about a status update from a friend, relative, or family member, or a topic the user is sensitive to. Slowly it started getting out of control, to the point where groups now influence the decisions and minds of individuals.
The point about data leaking, or being leaked, is almost moot: it would be next to impossible for these sites to prevent leaks entirely while still preserving the incentives that keep users coming back, without rapidly degrading the product.
The question is what needs to be done to make sure social sites keep sanity over the content created and promoted within the realms of the social engine.
This is where machine learning and intelligence can help, if used the right way - but the fundamental architecture needs to change.
A simple paradigm is: make it easy. Humans would use an interface to express their thoughts and perspectives, and it is up to that interface to present them with the best possible way to do it.
Okay, that might sound confusing... let me explain...
The human uses an 'interface' - here, an 'intelligent assistant' - and tells the assistant what they want to publish:
For example: I want to publish 10 photos, the occasion is a birthday party, the post is for my close friends, and the title should be something exclamatory and energetic.
The 'assistant' looks at the content and comes back with suggestions for the proposed layout and presentation; the user either picks one or makes adjustments so the result matches the creative intent behind the post.
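To make that concrete, here is a minimal sketch of the request/suggestion loop in Python. All of the names (PostRequest, Suggestion, Assistant) are hypothetical placeholders of my own, not any real platform's API.

```python
from dataclasses import dataclass

@dataclass
class PostRequest:
    photos: list[str]     # paths of the photos to publish
    occasion: str         # e.g. "birthday party"
    audience: str         # e.g. "close friends"
    title_style: str      # e.g. "exclamatory & energetic"

@dataclass
class Suggestion:
    layout: str           # proposed arrangement of the photos
    title: str            # proposed title matching the requested style

class Assistant:
    def suggest(self, req: PostRequest) -> list[Suggestion]:
        # A real assistant would rank layouts with a learned model;
        # this canned candidate just illustrates the loop.
        return [Suggestion(layout=f"grid of {len(req.photos)} photos",
                           title=f"What a {req.occasion}!")]

# The user describes the post...
request = PostRequest(photos=[f"p{i}.jpg" for i in range(10)],
                      occasion="birthday party",
                      audience="close friends",
                      title_style="exclamatory & energetic")
# ...the assistant proposes, and the user accepts or adjusts.
options = Assistant().suggest(request)
chosen = options[0]
print(chosen.title)   # "What a birthday party!"
```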
Now what does this accomplish?
Well, a few things -
1) It makes the job of posting easy.
2) It reduces the work of hunting for the best combinations.
3) It checks the content for security behind the scenes before considering it eligible for posting.
4) It feeds creative-redesign signals back to the engine - this is the most interesting part, as human and machine minds are working together to create something new.
5) It also encodes the communication's historic timeline using a blockchain or other cryptographic scheme (see the sketch below).
So this is how the job becomes much simpler.
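Point 5 deserves a sketch. One simple way to encode the communication's timeline is a hash chain - the minimal blockchain-like structure - where each entry commits to the hash of the previous one, so any tampering with history is detectable. This is an illustrative assumption about the mechanism, not a description of any existing platform.

```python
import hashlib
import json
import time

def add_entry(chain, content):
    """Append a timeline entry that commits to the previous one."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"content": content, "ts": time.time(), "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    entry["hash"] = digest
    chain.append(entry)
    return chain

def verify(chain):
    """Recompute every hash; any tampering breaks the chain."""
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        prev_ok = entry["prev"] == (chain[i - 1]["hash"] if i else "0" * 64)
        if digest != entry["hash"] or not prev_ok:
            return False
    return True

timeline = []
add_entry(timeline, "published: 10 birthday photos")
add_entry(timeline, "edited: title changed")
assert verify(timeline)   # altering any past entry makes this fail
```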
As of today, major social networking sites are working on behind-the-scenes machine learning algorithms that scan content, predominantly for the key factors below, to make sure the overall health of a conversation doesn't degrade and its toxicity doesn't grow.
They monitor these key points -
- Shared attention - how much?
- Shared reality - what kind?
- Receptivity - how well are people receiving it?
- Variety of perspective - how unique is it?
Then there are other checks, like spotting trends or patterns, classifying communications as machine- versus human-generated, and watching how content is generated and propagates across events, timelines, and the geographical regions where a communication originated and then spread.
These are like watchdogs already being pushed into the system to monitor, capture, and notify when a given communication or channel poses an imminent problem - a simple sketch of the idea follows below.
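As a rough illustration of how such a watchdog might combine those four signals: the metric names, weights, and threshold here are my own assumptions, not any platform's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class ChannelStats:
    shared_attention: float     # how much of the audience is engaging, 0..1
    shared_reality: float       # agreement on the basic facts, 0..1
    receptivity: float          # positive-reaction rate, 0..1
    perspective_variety: float  # uniqueness of viewpoints, 0..1

def health_score(s: ChannelStats) -> float:
    # Equal weights, purely illustrative; a real system would learn these.
    return (s.shared_attention + s.shared_reality
            + s.receptivity + s.perspective_variety) / 4

def watchdog_alert(s: ChannelStats, threshold: float = 0.3) -> bool:
    """Flag a channel when its health score falls below the threshold."""
    return health_score(s) < threshold

# Example: low shared reality and receptivity trip the alert.
print(watchdog_alert(ChannelStats(0.8, 0.05, 0.1, 0.2)))  # True
```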
Still, I think a lot more needs to be done to make sure social sites are safe, promote healthy conversation, and are not tools an exploiter can use to harvest information for money.
Also, a well-engineered social platform would in turn incentivise the customer for the time spent on it, via a return program offering some kind of monetary value or score, so that the user makes a conscious effort to promote the right content.
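One possible shape for such a return program, sketched purely as an assumption (the rate is made up, and the health score reuses the illustrative metric above):

```python
def reward(minutes_spent: float, promoted_health: float,
           rate_per_minute: float = 0.01) -> float:
    """Credit earned for time spent, scaled by how healthy the content
    the user promoted turned out to be (0..1). All values illustrative."""
    return minutes_spent * rate_per_minute * promoted_health

# 45 minutes spent promoting content with a 0.9 health score:
print(reward(45, 0.9))  # 0.405 credit units
```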