How to design a social network that people will trust

Unless you are totally cut off from the news, you will have heard the ruckus about the lack of trust in social networks and about fake news propagating and influencing people’s thoughts and minds. Social networks: can’t live with them, can’t live without them. Communication and sharing are social traits embedded in humans, and though the medium may change from paper to the internet, the desire to bond and share stays the same. The issue with social networks is that sharing is amplified, and the concept of “liking” something has taken a whole different turn. The algorithms behind social networks constantly customize your feed so that you stay on the network longer and share and consume more content. The longer you stay, the better they can tailor the product to you, and deeper customization enables more targeted advertising, which brings in more revenue for the platform. Facebook’s Q4 2018 net income totaled $6.88 billion, and almost 85% of its money comes from advertising.

As of Jan 25, 2020, the top five social media sites are Facebook, YouTube, WhatsApp, Facebook Messenger, and WeChat. I would bet that within a year you will see TikTok somewhere on that list too.

Social networks are rooted in retaining as much information about you as possible and providing highly targeted advertising. The networks open up that information to advertisers, and also to third-party apps that help keep you on the platform. A famous example is the partnership between Facebook and Zynga, in which Facebook gave Zynga access to the social network’s 2+ billion users until mid-2016 (Facebook and Zynga partnership end). Third-party access became scandalous when the Cambridge Analytica and Facebook data partnership turned into a major political scandal in early 2018 (Facebook played a role in the Cambridge Analytica scandal). Cambridge Analytica used an app to harvest data from Facebook users. Facebook’s design allowed this app to collect not only the personal information of people who agreed to take the survey, but also the personal information of everyone in those users’ Facebook social networks. Facebook breached data protection laws by failing to keep users’ personal information secure, allowing Cambridge Analytica to harvest the data of up to 87 million people worldwide without their consent.

Data leakage and other controversies are not limited to Facebook; there have been numerous issues with other platforms too. YouTube has had its fair share of controversies, and one of the most prominent is the dark side of its recommendation engine. As explained before, YouTube has a vested interest in retaining you on its platform for as long as possible. The recommendation engine reviews your liked videos, most-watched videos, and subscriptions to build a list of videos you would like to watch. This AI-based recommendation engine sometimes takes a dark turn while trying to retain user interest (YouTube’s algorithm has a dark side). The blame does not lie with the software alone; content uploaders also manipulate keywords and tags to generate more views. For example, a user interested in nature or wildlife might view a couple of videos on animals from different parts of the world. Suddenly a climate change denial video pops up in their feed, perhaps because it carries tags such as “wildlife” or “climate”. If the user clicks the video even once, the recommendations get corrupted in two ways:
1) The algorithm now assumes that any user with a similar interest in wildlife might also want to watch the climate-change video.
2) The user’s feed now includes more climate change videos based on that single click on the recommended video. The algorithm assumes a higher success rate and keeps serving videos that have taken a dark turn.

Do we now blame the user or the algorithm (Toxic algorithm of YouTube)? These are hard questions, but the underlying issue is a lack of trust in social networks (Low trust in social media). Only 41 percent of people say they trust social media; in the U.S., it’s just 30 percent. More than 60 percent want governments to regulate social media better. According to Gallup, the majority of U.S. adults say that in recent years they have personally lost trust in the media (Indicators of news media trust). “Consistent with the trend toward declining trust, 69% of U.S. adults in the current survey say their trust in the news media has decreased in the past decade,” the report states. “Just 4% say their trust has increased, while 26% indicate their trust has not changed.”

While many articles look at this news in terms of its impact on brands and advertising, we will look at the impact from a user perspective and at the alternatives the user can choose for sharing and communicating. The alternative to social networks is the private network platform. Private networks allow the user to restrict the audience and visibility of their content. The biggest example in this category is WhatsApp (WhatsApp is the most popular messenger app). While WhatsApp is primarily a private messaging platform, the content shared on it has branched out from private messages to personal and family photos and even videos. In my case, WhatsApp is the primary platform for sharing content with friends and family. For business and work, I primarily use Slack, which is essentially a cloud-based communication platform for small businesses and large enterprises.

Private networks are not impervious to data breaches or the spread of false information and fake news either. Recent fake news incidents on WhatsApp have affected Brazil and India (WhatsApp fake news issue in Brazil, and WhatsApp fake news issue in India). WhatsApp and other messaging apps are also susceptible to spyware attacks that can read out all of a user’s private messages (Spyware in WhatsApp). While network effects and fake news issues do exist within private networks, they still give the user the best medium for controlling information sharing.

How do we design a private network that people will trust while still generating revenue? We follow in the footsteps of established networks like WhatsApp, then add revenue generation and stricter policies to prevent data leaks and fake news.

High Level Design

We will start with WhatsApp as the default platform. The advantage of the WhatsApp model is that users can create groups and share messages and media content from their phones. We assume our new app has almost all the same features as WhatsApp. The features we will extend are:

  1. Natural Language Processing Algorithms: These algorithms scan for links shared within the platform and check them against a database of links and media classified as fake news. If a shared link gets a positive hit in the database, the app automatically displays a text box: “The information shared is flagged as unreliable”. How do we build the backend database of media links? This brings us to point 2.
  2. News Integrity Initiative: Work with the News Integrity Initiative, whose mission is to advance news literacy, increase trust in journalism around the world, and better inform the public conversation. Partner with organizations such as UK fact-checkers to identify fake news sources. We should also run a self-driven initiative that uses AI to identify and tag unreliable links (Initiative to identify fake news). At a high level, the algorithm continuously scours media links and uses the NLP engine to compare their sources against the major sources identified by the News Integrity Initiative and other partners. It also uses predictive analytics and sensational-word detection to discover and flag fake news headlines. After tagging the links, the algorithm uses an end-user feedback loop to update its learnings. For example, if a shared link is flagged as unreliable, the user is prompted to give feedback on it. While individual feedback cannot itself be trusted, it can be analyzed in aggregate to check for false positives, and the algorithm adjusted accordingly.
  3. End-to-end encryption: All messages will be securely encrypted. Chats use client-to-client encryption, and messages are also stored encrypted on our cloud servers. Secure encryption of data earns the app a high level of trust from users.
  4. Search Capability: Strong search across messages and shared media, with clear results shown across the different groups and individual contacts they came from.
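As a sketch of how the link flagging in feature 1 could work, the function below checks the host of each shared link against a set of sources supplied by fact-checking partners. The set contents, names, and function are all hypothetical illustrations, not a real API:

```python
from typing import Optional
from urllib.parse import urlparse

# Hypothetical database of sources flagged by fact-checking partners (feature 2).
UNRELIABLE_SOURCES = {"fakenews.example", "clickbait.example"}

WARNING = "The information shared is flagged as unreliable"

def flag_message(text: str) -> Optional[str]:
    """Return the warning banner when a shared link matches the database."""
    for token in text.split():
        if token.startswith(("http://", "https://")):
            host = urlparse(token).netloc.lower()
            if host.startswith("www."):
                host = host[4:]  # normalize so www.foo.example matches foo.example
            if host in UNRELIABLE_SOURCES:
                return WARNING
    return None
```

For example, `flag_message("read this https://fakenews.example/story")` returns the warning text, while a message with no flagged links returns `None` and is rendered unchanged.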
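The cross-group search in feature 4 could be backed by a simple inverted index over all chats. The class below is a minimal, illustrative sketch (names invented), not a production search engine:

```python
from collections import defaultdict

class MessageSearch:
    """Inverted index mapping lowercase words to (group, message_id) pairs."""

    def __init__(self):
        self.index = defaultdict(set)

    def add(self, group, msg_id, text):
        # Index every word of the message under its chat and message id.
        for word in text.lower().split():
            self.index[word].add((group, msg_id))

    def search(self, query):
        # A message matches only if it contains every query word.
        words = query.lower().split()
        if not words:
            return []
        hits = set.intersection(*(self.index[w] for w in words))
        return sorted(hits)  # grouped per chat when rendered in the UI
```

The sorted `(group, message_id)` results make it straightforward for the UI to show matches grouped by the chat or contact they came from, as the feature describes.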

An app designed with these security and privacy initiatives would increase its trustworthiness and satisfy users. What issues and disadvantages do we anticipate with this model?

  1. Isolation: Private networks are invite-only by default. This can lead to group-specific thinking and isolation. For example, social networks such as Facebook offer avenues to be exposed to different news outlets and public groups, whereas in a private network there is no way to know which groups even exist. How do we solve this problem in our design?
    1. Suggested social lists: Admins of individual groups get an option to make their group list public. One incentive for making a list public is that it can accept a higher number of users than a private list. These public lists are similar to Facebook groups and are shown automatically to the user in a separate Groups menu. Suggestions are based on factors such as nearby groups and the groups the user has already subscribed to.
  2. Business Model: Businesses exist to make a profit; that is the fundamental truth of capitalism. We need methods to generate revenue without losing the user’s trust.
    1. Targeted Ads: One method of generating revenue is to leverage NLP to add targeted, text-based ads to the app. These ads are highly targeted and extremely effective on a cost-per-action (CPA) basis. Integrated ads keep the user’s attention on the message stream rather than distracting from it.
    2. Customer support via channels: We allow businesses to register and create channels to provide customer service to users. These channels are public and accessible to any user, and we charge the business a monthly fee for continued access to the channel.
    3. Data sharing with partner apps: Another targeted source of revenue is partnering with other apps to provide value-added services to the user, such as in-app shopping and payments without leaving the app. This data sharing needs to be highly regulated and reviewed periodically to prevent data leakage incidents.
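One illustrative heuristic for the suggested-groups ranking in point 1.1 is to score each public group by its member overlap with the user’s existing groups; a real design would also weigh location signals for “nearby” groups. All names here are hypothetical:

```python
def suggest_groups(user_groups, public_groups, limit=3):
    """Rank public groups by member overlap with the user's groups.

    user_groups / public_groups: dict of group name -> set of member ids.
    """
    my_contacts = set().union(*user_groups.values()) if user_groups else set()
    scored = [
        (len(members & my_contacts), name)
        for name, members in public_groups.items()
        if name not in user_groups  # don't suggest groups already joined
    ]
    scored.sort(reverse=True)
    return [name for score, name in scored[:limit] if score > 0]
```

Groups with no shared members are filtered out, so the Groups menu only surfaces lists with some social connection to the user.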
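The keyword matching behind the Targeted Ads idea in 2.1 can be sketched as below, with a hypothetical ad inventory. Note that to preserve the end-to-end encryption promise, matching like this would have to run on the user’s device rather than on our servers:

```python
# Hypothetical ad inventory: ad text -> trigger keywords.
ADS = {
    "Weekend deals on hiking gear": {"hike", "trail", "camping"},
    "Local pizza, 20% off tonight": {"pizza", "dinner", "hungry"},
}

def pick_ad(message):
    """Return the ad whose keywords best overlap the message, or None."""
    tokens = {w.strip(".,!?") for w in message.lower().split()}
    best, best_score = None, 0
    for ad_text, keywords in ADS.items():
        score = len(tokens & keywords)
        if score > best_score:
            best, best_score = ad_text, score
    return best
```

A message like “anyone hungry for pizza dinner?” would match the pizza ad, while a message with no keyword overlap shows no ad at all, keeping the stream uncluttered.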

An app designed and executed with the features listed would satisfy all parties. Users are happy that they get a private social network with added protections against the spread of unreliable information and news. We are happy with the revenue generation mechanisms that keep our investors happy. Privacy experts are happy with our end-to-end encryption model. While this might not be a perfect model, we would at minimum have strengthened the root of trust, and thus designed a social network that people will trust.
