Vanish Mode For Messenger and Instagram Chats

Only a week after announcing the “disappearing messages” feature for WhatsApp, Facebook has announced “Vanish Mode” for Messenger and Instagram, allowing users to send messages that disappear automatically.

Privacy and Secrecy

Facebook says that the Vanish Mode feature is particularly useful if there is “something you want to say in the moment without worrying about it sticking around” in the chat history.  As with (Facebook’s) WhatsApp disappearing messages, and with privacy in mind, vanishing messages are pitched as a way of lightening the mood and allowing users to live in the moment without fear of being reminded of what they said forevermore.  This may be a feature that is particularly attractive and useful to younger people, reflecting Facebook’s desired target users.

How It Works

To operate Vanish Mode in Messenger, which is an optional feature, users need to swipe up on their mobile device in an existing chat thread.  This activates Vanish Mode. Swiping up again returns the user to normal chat mode.

Safety and Choice

Facebook says that, as well as the opt-in aspect and the simple swipe to activate/de-activate giving the user choice, the feature includes safety elements: it only works with people the user is connected to, a notification is issued if someone takes a screenshot while Vanish Mode is active, and users can block someone and report a conversation if they feel unsafe.


Vanishing Messages is first being rolled out on Messenger in the US and in “a handful of other countries” and is “coming soon” to other places. Facebook says that Vanish Mode for Instagram is also coming soon (at an unspecified date).

What Does This Mean For Your Business?

Facebook is on the offensive with its chat platforms and, in keeping with its predicted strategy of furthering the integration and interoperability of WhatsApp, Instagram and Messenger, the opt-in Vanishing Messages feature is now being rolled out.  This feature is also another step in Facebook keeping its pledge to improve privacy, a matter over which it has suffered a great deal of bad publicity in the years following the Cambridge Analytica scandal.  Facebook is using this time of physical distancing to leverage features that may encourage users to make more use of its chat and social media platforms, thereby helping Facebook to compete with rival apps and to re-position its chat apps and social media platforms as lighter and more private.

Are You Being Tracked By WhatsApp Apps?

A recent Business Insider report has highlighted how third-party apps may be exposing some data and details of the activity of WhatsApp users.

WhatsApp – Known For Encryption

Facebook-owned WhatsApp is known for its end-to-end encryption.  This means that only the sender and receiver can read the message between them. 

In addition to being convenient, free, and widely used in business, the secure encryption that users also value has even been a target for concerned governments (including the UK’s) who have campaigned for a ‘back door’ to be built-in in order to allow at least some security monitoring.
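
End-to-end encryption rests on the idea that the two chatting parties agree on a key that the relaying server never learns.  The toy sketch below illustrates that idea with a textbook Diffie-Hellman exchange (WhatsApp actually uses the Signal protocol with elliptic-curve keys; the tiny numbers here are purely illustrative and not secure):

```python
# Toy Diffie-Hellman key agreement: both parties derive the same secret key
# from exchanged public values, so an eavesdropping server cannot.
# Illustrative only - real messengers use the Signal protocol, not these
# textbook-sized parameters.
import hashlib

P, G = 23, 5  # toy prime modulus and generator (far too small for real use)

def public_value(secret: int) -> int:
    """Derive a shareable public value from a private secret."""
    return pow(G, secret, P)

def shared_key(their_public: int, my_secret: int) -> bytes:
    """Combine the other side's public value with our secret into a key."""
    s = pow(their_public, my_secret, P)
    return hashlib.sha256(str(s).encode()).digest()

# Each side keeps its secret private and sends only the public value.
alice_secret, bob_secret = 6, 15
alice_pub, bob_pub = public_value(alice_secret), public_value(bob_secret)
assert shared_key(bob_pub, alice_secret) == shared_key(alice_pub, bob_secret)
```

Anyone observing only `alice_pub` and `bob_pub` cannot (for realistically large parameters) recover the shared key, which is why a government “back door” would require deliberate changes to the protocol itself.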

Able To Exploit Online Signalling

If the Business Insider revelations are correct, however, third-party apps may already be making WhatsApp less secure than users think.  The business news website has reported that third-party apps may be able to use WhatsApp’s online signalling feature to monitor the digital habits of anyone using WhatsApp, without their knowledge or consent.  This could include tracking who users are talking to, when they are using their devices and even when they are sleeping.

Shoulder Surfing

Back in April, there were also media reports that hackers may be able to use ‘shoulder surfing’ (spying in close proximity to another person’s phone), combined with knowledge of a user’s phone number, to obtain an account restoration code from WhatsApp, which could allow a user’s WhatsApp account to be compromised.

Numbers In Google Search Results

Also, back in June, an Indian researcher highlighted how WhatsApp’s ‘Click to Chat’ feature may not hide a user’s phone number in the link and that looking up “” in Google, at the time, allegedly revealed 300,000 users’ phone numbers through public Google search results.

Other Reports

Other reports questioning how secure the data of WhatsApp users really is have also focused on how, although messages may be secure between users in real-time, backups stored on a device or in the cloud may not be under WhatsApp’s end-to-end encryption protection.  Also, although WhatsApp only stores undelivered messages for 30 days, some security commentators have highlighted how the WhatsApp message log could be accessed through the chat backups that are sometimes saved to a user’s phone.


One potential signal that WhatsApp may not be as secure as users think could be the fact that, back in February, the European Commission asked staff to use the Signal app instead of WhatsApp.

What Does This Mean For Your Business?

As may reasonably be expected from a widely used, free app owned by a tech giant, yes, WhatsApp collects usage and some other data from users, but it does still have end-to-end encryption. There are undoubtedly ways in which aspects of its security could be compromised, but for most business users it has become a very useful and practical tool that is central to how they regularly communicate with each other.  At the current time, WhatsApp’s owner, Facebook, appears to be concentrating much of its effort on competing with Zoom and on promoting the free group video calls and chats in its recently launched desktop Messenger app. Even though WhatsApp is coming in for some criticism over possible security problems, the plan is still to grow the app, add more features, and keep it current and competitive, which is why it recently ceased support for old operating systems.

Influencers Paid To Promote NHS Test and Trace

In a bid to raise awareness of responsible behaviour concerning COVID-19 among the younger age groups, the UK government is reported to be paying freelance social media influencers and reality TV stars to promote test and trace.

Test and Trace

Test and trace in the UK is branded as NHS but is actually outsourced to private companies and uses a network of commercial testing labs, drive-through centres, and call centres.  The idea of the service, in the absence of an effective app (the UK’s app, trialled on the Isle of Wight, failed because it did not work on Apple devices, after £11 million had been spent), is to enable the identification and contacting of people who may have unknowingly been in close contact with a COVID-positive person, e.g. in a restaurant.


Even though government schemes (e.g. eat out to help out and other messages) have promoted a return to restaurants and other hospitality businesses, the current narrative focuses on young people as mostly potential asymptomatic spreaders who may not be as concerned about the impact of their behaviour on the wider population.  As such, getting the message to them that they must get tested if they have symptoms and self-isolate if contacted is deemed to be especially important.  Other challenges include the fact that the test and trace service is also reported to be failing to deliver, there appears to be a reluctance among many people to share their contact details, and there is a growing weariness of, and dislike for, pandemic restrictions being imposed, changed and re-imposed.

Freelance Influencers and Reality TV Stars

Younger age groups that have grown up with social media and reality TV are known to be susceptible to messages from social media influencers and reality TV stars.  This is believed to be because:

– Social media influencers are perceived as being more like their audience, sharing more of their experiences and therefore more ‘authentic’ (perhaps unlike more guarded celebrity behaviour), often encouraging engagement with social issues (adding to their credibility) and being able to forge a stronger more engaging direct relationship with followers i.e. they are trusted, and they are young.

– Reality TV stars are perceived to be ‘ordinary’ (just like their audience), they are open, spontaneous and outspoken (like their young audience) and they appear familiar and almost friend-like to their young admirers i.e. there is a perceived relationship with the star.

– Social media influencers have massive reach.  Individual influencers can have millions of followers.


Social media influencers and reality TV stars have a proven record of boosting sales of products in, for example, the fashion and beauty industries through their endorsements.

Who and How Much?

The UK government is reported to have enlisted the services of Love Island stars Shaughna Phillips, Josh Denzel and Chris Hughes.  It is likely that a social media influencer with a large following could be paid thousands of pounds for a single post.

What Does This Mean For Your Business?

Businesses in the beauty and fashion industries know how important reviews and endorsements from influencers and reality stars can be in boosting brand-power and sales.  Many different businesses also know how difficult it can be to reach younger audiences in a cost-effective way.  It makes sense, therefore, that using influencers to promote test and trace among the young in a positive way, and in a way that stresses its ease, convenience and social responsibility, is likely to be a good tactic. Businesses in the hospitality sector, for example, have been particularly affected by pandemic restrictions and are likely to support any intelligent moves to make going out safer.

Aside from its promotion, however, questions are still being asked about how far people are having to travel to even get a test, how well the test and trace service is operating, where bottlenecks are, and how accurate capacity and testing figures are.

Featured Article – Ensure What You See Is Real

With the internet and particularly social media awash with pictures, videos, news, and other content which may not be all that it seems, here is a guide to spotting fake news and content.


There are a number of sources of, and motivations for, publishing misleading stories, misinformation, disinformation, ‘fake news’, and photos and videos that are not what they appear to be.  For example:

– Scams.  Cybercriminals produce, publish, and send a wide variety of fake content to trick people into parting with personal details, money, or both.

– Jokes and pranks. Manipulated videos, photos and stories featuring a well-known figure (often political) are frequently circulated as a joke or to ridicule that person.

– Politics. This is a huge area of concern for governments, as disinformation created by actors for opposition parties or foreign powers can be (and has been) published on social media to influence voters and election outcomes.  This was found to have been the case when the details of 87 million Facebook users (mostly in the US) were shared with Cambridge Analytica and used to target people with political messages in relation to the 2016 US presidential election and the UK referendum.  This is why Facebook has recently announced that it is banning deepfakes and “all types of manipulated media” ahead of the 59th US presidential election scheduled for Tuesday, November 3, 2020.

– Conspiracy theories. There is no shortage of disinformation and misinformation being spread and shared by conspiracy theorists.  This has been particularly apparent with the coronavirus.  For example, in February, the director-general of the World Health Organization said that COVID-19 was not the only public health emergency the world was facing, but that the world was also suffering from an “infodemic” of fake medical news where “Fake news spreads faster and more easily than this virus” and that it is “just as dangerous.”

– Inside information. Sometimes, stories and pictures appear from those who seem to have a story from an apparently legitimate source inside a company, warzone/disaster area, institution, or other newsworthy places where apparently shocking and new photos and details are produced to influence opinion. These sometimes need to be treated with caution. 

– Networks of fake accounts.  Networks of social media accounts can be set up using fake profile pictures and stolen/manipulated identity details to help disseminate, create a buzz about, and add credibility to misinformation and fake content.

Artificial Intelligence

One new twist to the creation of fake pictures and videos has come with the use of AI to make much more convincing photos and videos.  With ‘deepfake’ videos, for example, people can be made to appear to say things that they have never said.  Multinational IT security company Trend Micro, for instance, has highlighted the threat of cybercriminals making and posting (or threatening to post) malicious deepfake videos online in order to damage reputations and/or extract ransoms from their target victims. Also, in March this year, a group of hackers was able to use AI software to mimic (create a deepfake of) an energy company CEO’s voice and successfully steal £201,000.

Social media analytics company Graphika, which offers a “Disinformation and Cyber Security” service, recently reported identifying images of faces for social media profiles that appear to have been faked using machine learning for the purpose of China-based anti-U.S. government campaigns.  Graphika reported that the China-based network, dubbed “Spamouflage Dragon”, had used English-language content videos and AI-generated profile pictures that appear to have been made by using Generative Adversarial Networks (GAN).  This is a class of machine-learning frameworks that allows computers to generate synthetic photographs of people.
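
The adversarial idea behind GANs can be sketched in one dimension: a generator tries to produce samples that look like real data, while a discriminator is simultaneously trained to tell real from fake, and each pushes the other to improve.  The following toy example (an assumed illustration, nothing to do with Graphika’s analysis or any production face generator) shows the two-network training loop:

```python
# Toy 1-D GAN: the generator G(z) = theta + z learns to shift noise toward
# the real data distribution N(3, 1), while the discriminator
# D(x) = sigmoid(w*x + c) learns to separate real from generated samples.
import math, random

random.seed(0)
sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))

theta = 0.0      # generator parameter (should drift toward the real mean, 3)
w, c = 0.1, 0.0  # discriminator parameters
lr = 0.05

for _ in range(2000):
    x_real = random.gauss(3.0, 1.0)          # sample from the real data
    x_fake = theta + random.gauss(0.0, 1.0)  # sample from the generator

    # Discriminator: gradient ascent on log D(x_real) + log(1 - D(x_fake))
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator: gradient ascent on log D(x_fake) (non-saturating loss),
    # i.e. it adjusts theta to make its samples look more "real" to D.
    d_fake = sigmoid(w * x_fake + c)
    theta += lr * (1 - d_fake) * w
```

Real image-generating GANs apply the same adversarial loop with deep convolutional networks over millions of pixels, which is what makes the synthetic profile photos so convincing.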


Social media companies now use fact-checking companies to help them identify, label, and remove fake news.  For example, Facebook has had a working relationship with ‘Full Fact’ since 2016 and works with fact-checkers in more than 20 countries. In 2019, Facebook announced that for the UK, ‘Full Fact’ would be reviewing stories, images, and videos, in an attempt to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.


Sharing is the crucial element of disseminating misinformation and fake pictures and videos on social media. Stories, photos, and videos need to generate strong feelings, tap into beliefs and prejudices, and be engaging or shocking to stand out and be shared by users. 


If you would like to check content such as photos, videos, and news stories yourself to see how reliable they are, here are some of the main methods you could try.

Reverse Image Search on Google

If you would like to check the validity of a photo, or where it may have first appeared, one method is to use Google’s reverse image search, which allows you to use a picture to find related images from the web: go to images.google.com, click the camera icon in the search bar, and then either paste the address (URL) of an online image or upload an image file from your device.

Reverse searches for images and videos also work on other platforms, such as the Russian search engine Yandex.  Here you can take screengrabs of footage and reverse search them to find out where they have been posted before.
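
For images that are already hosted online, the lookup can also be scripted.  A minimal sketch, assuming Google’s long-standing but undocumented `searchbyimage` endpoint (the exact URL could change, so treat it as an assumption):

```python
# Hypothetical helper: build a Google reverse-image-search URL for an image
# that is already hosted at a public address. The "searchbyimage" endpoint
# is undocumented, so this URL format is an assumption.
from urllib.parse import urlencode

def reverse_image_search_url(image_url: str) -> str:
    """Return a URL that opens a reverse image search for image_url."""
    return ("https://www.google.com/searchbyimage?"
            + urlencode({"image_url": image_url}))

url = reverse_image_search_url("https://example.com/suspect-photo.jpg")
# Open `url` in a browser to see where else the picture has appeared.
```

Opening the generated URL in a browser shows pages that contain the same or visually similar images, which often reveals the original source and date of a photo.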

Geolocation of Photos or Videos

A closer study of a photo can often reveal text (signs, language, brands) and features such as well-known buildings to help decide where in the world (and when) a photo may actually have been taken.

Fact-Checking Websites

There are many fact-checking websites to help you decide whether dubious-sounding information is true or not.  Examples include Full Fact (the UK’s independent fact-checking organisation), AP Fact Check (from the Associated Press), Politifact, Snopes and many more.

Checking For Media Bias

Understanding that newspapers and news sites represent the views of their owners, and are therefore biased towards them, is the key to understanding how much of a story may be true.  Humans dislike cognitive dissonance and tend to read stories that are in broad agreement with their beliefs and attitudes.  Having said that, media bias may be very apparent in some stories.

Using websites such as Allsides you can decide for yourself about different aspects of media bias.

Trusting Instincts

If stories, photos, or videos appear unbelievable, it may be that they are, in fact, false and may require your own investigation before sharing.  If stories create very strong feelings, they may have been written, filmed, photographed and/or manipulated in order to generate an emotional response which may increase the chance of media being shared quickly without thought.

When it comes to reports or videos of, for example, a certain political figure saying something, judging this by what you know them to have said in the past, their beliefs, and their known relationship with the truth (we live in a so-called “post-truth” era), may help to give a feeling about whether or not something is true.

Spelling and Grammar

Many errors in spelling and grammar and awkward use of language in videos and text can be signs that a piece of content is fake.  For example, the videos created by the China-based “Spamouflage Dragon” group were recently identified as fake due to being clumsily made with language errors and automated voice-overs.

Looking Ahead

With the addition of AI to the mix, the impending U.S. election, the fractious relationship between the superpowers, the fallout from the handling of the coronavirus by certain countries, Brexit, and the other unfolding big stories of the day, misinformation and fake photos/videos are only going to become more prevalent.  The challenge for social media companies is how to keep up with and tackle misinformation effectively at scale, and the challenge for the rest of us is to become more adept at spotting misinformation and using the online tools available, along with our own judgement, to sort truth from fiction.

Trump Terminates TikTok

The Trump administration’s next high-profile target in its Chinese trade-war is the hugely popular video-sharing mobile app TikTok, which has been slapped with a 45-day ban in the U.S. from 20 September 2020.

Executive Orders

Chinese apps TikTok (from parent ByteDance) and WeChat (from parent Tencent) have received Executive Orders forbidding “any transaction by any person, or with respect to any property, subject to the jurisdiction of the United States”. The “person” in this order applies to any individual or entity e.g. government, corporation, organisation, or group. This appears to illustrate how the Trump administration would like to see American companies banning the use of TikTok on their devices.

The ban follows the Trump administration’s ban on the use of Huawei’s equipment in communications infrastructure on the grounds that Huawei was viewed to be too close to the Chinese state and, therefore, the use of its equipment could be deemed to pose a national security risk.


The White House website states that “TikTok automatically captures vast swaths of information from its users” and that “this data collection threatens to allow the Chinese Communist Party access to Americans’ personal and proprietary information”.  The White House website also says that “TikTok also reportedly censors content that the Chinese Communist Party deems politically sensitive, such as content concerning protests in Hong Kong and China’s treatment of Uyghurs and other Muslim minorities”.

For these reasons, and that “steps must be taken to deal with the national emergency with respect to the information and communications technology and services supply chain”, the Trump administration has issued the TikTok ban.


Similarly, the White House order against the Chinese messaging, social media, and electronic payment app ‘WeChat’ alleges that it too captures “vast swaths of information from its users”, which could then be accessed by the Chinese Communist Party.  In the case of WeChat, the White House website highlights a report of the discovery of a Chinese database containing billions of WeChat messages sent from users in China, the United States, Taiwan, South Korea, and Australia.  The website also suggests that WeChat is a mechanism for the Chinese Communist Party to keep tabs on Chinese nationals visiting the United States and “enjoying the benefits of a free society for the first time in their lives”.

TikTok and the UK

The ban in the U.S. prompted reports that the UK government was close to allowing TikTok to launch its headquarters in London, something that has not gone down well with the Trump administration.

What Does This Mean For Your Business?

The U.S. represents a big market for app makers and, on a commercial level, the ban could be damaging to ByteDance and to Tencent, the owner of WeChat. Unfortunately, although the U.S. cites “real” security concerns and a “national emergency” in ICT services and the supply chain as the reasons for the ban, many see it as politically motivated and as another step in the Trump administration’s trade-war with the Chinese, which has been further stoked by accusations over the origins of the COVID-19 pandemic. Only recently, the UK’s decision not to use Huawei equipment in its 5G infrastructure was viewed by many as the UK bowing to U.S. pressure.  The future for businesses that have traditionally operated between the U.S. and China looks difficult, and business opportunities in Chinese markets look less likely as the trade-war and the war of words escalate.

The Start of Google and Facebook Paying For News Content?

At a time when smaller news outlets have been forced to close and big tech companies dominate distribution, the Australian Government may make the likes of Google and Facebook pay for their news content.

What’s The Problem?

The pandemic has made the media landscape in Australia a very bleak one with the closing of newspapers, journalists being furloughed or made redundant, and smaller media companies struggling against big, global, digital competitors.  Australia’s big media companies, which have been lobbying for the new code, have also long felt that big tech companies such as Google and Facebook have had a free ride at their expense.

Code of Conduct

A code of conduct has, therefore, been drafted by Australia’s competition regulator, the Australian Competition and Consumer Commission, that has been designed to make more of a level playing field for publishers. The draft code, which is yet to be debated in  Australia’s parliament, proposes that news companies can negotiate payments as a bloc with tech giants such as Facebook and Google for content which appears in their news feeds and search results.

The code states that, to be eligible, media businesses must predominantly create and publish news in Australia for an Australian audience while adhering to professional editorial standards.  Australia’s Treasurer, Josh Frydenberg, will have authority to decide which tech companies need to comply with the code (starting with Google and Facebook), and the Australian Communications and Media Authority (Acma) will decide upon the eligibility of media companies.

Penalties For Non-Compliance

The draft code has also been given some teeth by proposing that tech companies that do not comply could face penalties of up to A$10m (£5m) per breach or 10 per cent of the company’s local turnover.

No Likes

Early indications are that Google and Facebook, the first companies that the code could apply to, have objected to the code in its current form and may consider leaving Australia’s news market if the code makes it through consultation and parliamentary debate and into law.

What Does This Mean For Your Business?

The Australian regulator and media companies have been quick to point out that this is not a case of trying to protect their news media from normal competition or disruption, but of making a code that, if adhered to properly, could make things fairer for all. The code, which will doubtless be noted in other countries, is designed to encourage Google and Facebook to keep providing news services to the Australian community, but to accept that they need to do so on the terms of the Australian government and that they have to negotiate fair prices for content.  Although some fear that Google could turn off its news service (as happened in Spain), it would still be subject to the code because it also serves news through search results and YouTube. While the code has been met with praise by Australia’s big newspapers and websites, and a feeling that it may be good news for news consumers, Google and Facebook feel that the code, in its current form, is heavy-handed, and there is some way to go yet in negotiations before it becomes law.

Celebrity Twitter Accounts Hacked For Bitcoin

Twitter accounts of celebrities including Barack Obama and Bill Gates were hacked and used to operate a scam, asking people to donate bitcoin.

What Happened?

Hackers used tools normally available only to Twitter staff to attempt to hack into the accounts of 130 high-profile people, including the former U.S. president.  It has been reported that the hackers were able to change the passwords of 45 accounts, thereby allowing them to take over those accounts and make use of the Twitter Data download tool. This meant that the hackers could potentially have had access to the private messages, photos, videos, contacts and more of those whose accounts they hacked.

Following the hack, Twitter temporarily tried to stop verified accounts from tweeting and, approximately three hours after Twitter was made aware of the attack, the social media giant reported that most accounts had been restored to full functionality.

Bitcoin Scam

To date, however, the hackers appear to have used the hacked celebrity accounts only to send out vague appeals asking for bitcoin (cryptocurrency) donations.

It has been reported that the bitcoin wallet advertised by the hackers received $100,000 worth of bitcoin through 500+ transactions and that some of this total was then transferred to other bitcoin wallets.

Social Engineering or Inside Help?

The hack is thought to have occurred as a result of the hackers using ‘social engineering’ to manipulate and dupe a small number of Twitter staff members, and then using those staff members’ credentials to get into the system.

Naturally enough, questions have been asked by some people about whether the hackers could have had some inside help. For example, U.S. Republican Senator Josh Hawley recently asked Twitter Chief Executive Jack Dorsey whether a Twitter employee may have been paid to help hack the high-profile accounts.

Twitter Sorry

Twitter has since apologised for the hack and has expressed its embarrassment and disappointment about the incident.

What Does This Mean For Your Business?

In the U.S., this hack has rung some serious alarm bells: there is a presidential election in a matter of months, the President is himself Twitter’s most prominent user, and social media companies are under great pressure to ensure that their platforms can’t be used by, for example, actors for other states to influence the outcome of the election, bearing in mind how Facebook was used last time.

Also, this incident is an example to businesses of how hackers can use social engineering and target particular employees to obtain credentials that can enable them to get into a company system.  This should, therefore, be a reminder to companies to alert their employees to the threat of social engineering attacks and put in place measures, procedures and policies to stop employees from being able to give out any sensitive information without proper checks and verification.

Twitter to Replace “Master”, “Slave” and “Blacklist”

Twitter has announced that it will be replacing programming language terms such as “master”, “slave” and “blacklist” with more inclusive ones.

Programming Language

Those familiar with programming and email terminology will know terms such as “blacklist” and “whitelist”, with black indicating bad, and “master” and “slave”, where one device or process controls other devices or processes. In the wake of George Floyd’s death at the hands of white police officers, the Black Lives Matter protests and under the #UntilWeAllBelong hashtag, Twitter has decided to opt for more inclusive language.


Twitter (Twitter Engineering) says that “This isn’t just about eng terms or code. Words matter in our meetings, our conversations, and the documents we write”.  Twitter is concerned about how the continued and accepted use of terms that are related to racism, slavery and the idea that some are masters over others are perpetuating problems in society and are a replaceable obstacle that stands in the way of a more inclusive society.  Sexism, gender issues and other areas where prejudice has created problems are also being looked at by the social media giant.  Twitter says “Inclusive language plays a critical role in fostering an environment where everyone belongs. At Twitter, the language we have been using in our code does not reflect our values as a company or represent the people we serve. We want to change that. #WordsMatter”

Which Terms Will Be Replaced?

Twitter has acknowledged that language cannot be replaced everywhere with the flip of a switch, but it is hoping to put the processes and systems in place that will allow the language changes to take place at scale.

Terms that are first on the list for replacement include:

– “Whitelist” – to be referred to as “Allowlist”.
– “Blacklist” – to be referred to as “Denylist”.
– “Master/Slave” – to be referred to as “Leader/follower, primary/replica, primary/standby”.
– Gendered pronouns (e.g. he/him/his) – to be referred to as “they, them, their”.
– “Man hours” – to be referred to as “Person hours”.


Twitter says it will be focusing on the following areas to help make the changes over time:

– Migrating source code and changing configuration.  This will involve studying Twitter’s own existing code, identifying violating terms with new warning tools, and changing to the new inclusive terminology. Twitter says that automated tools and ‘linters’ are being developed to minimize manual effort.

– Updating documentation across internal resources, Google Docs, runbooks, FAQs, readme’s, technical design docs, and more. Twitter says that it is also implementing a browser extension to help its teams identify words in documents and web pages and suggest alternative inclusive words.
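
The warning-tool approach Twitter describes can be sketched very simply: scan text for flagged terms and suggest the replacements from the list above.  A minimal illustration (an assumed example, not Twitter’s actual tooling, with simplified suggestions):

```python
# Minimal sketch of an inclusive-language checker in the spirit of the
# warning tools and linters Twitter describes. The term map mirrors the
# replacement list above, with simplified one-to-one suggestions.
import re

REPLACEMENTS = {
    "whitelist": "allowlist",
    "blacklist": "denylist",
    "master": "leader/primary",
    "slave": "follower/replica",
    "man hours": "person hours",
}

def flag_terms(text: str):
    """Return (term, suggestion) pairs for flagged terms found in text."""
    hits = []
    for term, suggestion in REPLACEMENTS.items():
        # Match whole words only, case-insensitively.
        if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
            hits.append((term, suggestion))
    return hits

hits = flag_terms("Add the host to the whitelist before the master restarts.")
```

A checker like this can run over source files in a pre-commit hook or CI step, which is how such terminology migrations are typically enforced at scale.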


Twitter is certainly not the only company working towards more inclusive language.  JP Morgan is opting for more inclusive language replacements in its technology policies and programming code, as are GitHub, Google’s Chromium web browser project and the Android operating system.

What Does This Mean For Your Business?

When it comes to walking the walk, talking the talk is an important part of helping to make that happen, and many companies are now not just acknowledging the need for inclusiveness but are prepared to do something about it.  The recent boycotting of Facebook advertising by many global brands also highlights the pressure that social media companies are under to make more progress in stopping hate speech and other harmful content from being distributed via their platforms.  Tackling language is, therefore, one way in which Twitter can take another small public step in getting its own house in order.  The platform is, however, still the platform of choice for President Trump, who is unlikely to be seen by many as a voice of inclusiveness, although Twitter recently made the news for fact-checking him.

Featured Article – Why Big Brands Are Boycotting Social Media

Large, mainly U.S.-based companies (many of which are household names) have said that they will be boycotting Facebook and Instagram in terms of paid advertising in July over concerns about the moderation of content and hate speech on the platforms.

“Stop Hate For Profit”

The stopping of paid promotions on the platforms in July is part of the “Stop Hate For Profit” campaign, organised by the Anti-Defamation League, the National Association for the Advancement of Colored People (NAACP), and other organisations.

The “Stop Hate For Profit” campaign, which aims to “hit pause on hate”, says that it would like businesses to “stand in solidarity with our most deeply held American values of freedom, equality and justice and not advertise on Facebook’s services in July.”

The campaign has reported receiving an overwhelming response from businesses and organisations, and its website lists more than 90 participating businesses and organisations.

Ad Agencies Also Supporting

Unfortunately for Facebook, the campaign also has the support of ad agencies, some of which are reported to have put guidelines in place for companies that want to participate.

The Main Issues

The main issues that the campaign and its participating businesses are objecting to are:

– That Facebook allegedly takes too hands-off an approach to moderating divisive content and content amounting to hate speech, and is not doing enough to stop hate speech and disinformation from being spread.

– That, in the context of the recent killing of George Floyd and the protests that followed, Facebook was allegedly used by some people to spread false information about Antifa, without this being picked up by Facebook or fact-checkers.

– That, even though Twitter publicly called out some of President Trump’s tweets, fact-checking them and labelling them as misleading or glorifying violence, Facebook has not been seen to do the same on its own platform.

– That Facebook is reported to have displayed ads from President Trump’s re-election campaign featuring a red triangular symbol, which some thought was reminiscent of symbols used by the Nazis during World War II, e.g. to label categories of prisoners in death camps. Facebook later removed the ads.

– Facebook appears to have sidestepped these sensitive issues at a recent media presentation.

Some of the main objections are best expressed by the companies supporting the “Stop Hate For Profit” campaign themselves.

For example, Ben & Jerry’s announced on its website that “Ben & Jerry’s stands with our friends at the NAACP and Color Of Change, the ADL, and all those calling for Facebook to take stronger action to stop its platforms from being used to divide our nation, suppress voters, foment and fan the flames of racism and violence, and undermine our democracy.”

Also, as Magnolia Pictures announced in a Tweet, “In solidarity with the Stop Hate For Profit movement, Magnolia Pictures has chosen to stop advertising on Facebook and Instagram, starting immediately, through at least the end of July. We are seeking meaningful change at Facebook and the end to their amplification of hate speech.”

Other Big Companies

Examples of other big companies that are also supporting the campaign by boycotting advertising on Facebook in July include the following.


In a recent blog post, Starbucks said that it will be pausing advertising on all social media platforms pending the outcome of its discussions internally, with the company’s media partners and with civil rights organisations with a view to stopping the spread of hate speech.

The coffee giant also urged a wider discussion on the subject, saying “We believe more must be done to create welcoming and inclusive online communities, and we believe both business leaders and policymakers need to come together to affect real change.”

It has been reported that Starbucks will, however, continue to use YouTube and to post to social media, but not to opt for paid promotions.


Coca-Cola’s Chairman and CEO, James Quincey, took to the Media Centre on the drinks company’s website to announce an even more stringent boycott.  Mr Quincey said, “There is no place for racism in the world and there is no place for racism on social media. The Coca-Cola Company will pause paid advertising on all social media platforms globally for at least 30 days. We will take this time to reassess our advertising policies to determine whether revisions are needed. We also expect greater accountability and transparency from our social media partners”.

Progress But Insufficient

To date, the campaign has reported that although progress has been made with Facebook on these issues, the updates and policy changes made appear to be insufficient.

For example, areas where the “Stop Hate For Profit” campaign feels that Facebook is still falling short (according to the campaign’s website) include:

– Facebook not addressing hate more broadly in groups and posts, rather than just in adverts.

– The possible spreading of voter misinformation just before the next U.S. election.

– Posts that are shown due to their apparent newsworthiness but still appear to promote hate.

– The need for more information about the metrics of a third-party audit of Facebook regarding hate speech.

– A lack of detail about Facebook’s proposal to work with the Global Alliance for Responsible Media (GARM) and the Media Ratings Council to identify appropriate brand safety audit requirements.

Facebook Says

In response to the campaign, Facebook has said that it intends to start labelling potentially harmful posts that it leaves up because of their news value, that it will ban ads that describe groups as a threat based on, for example, race or immigration status, and that it will remove any content that appears to incite violence or suppress voting, even if the post is from a politician.

Facebook has also highlighted the fact that a recent European Commission report found that Facebook removed 86% of hate speech last year, up from 82.6% the previous year.

Going Forward

For many businesses, Facebook is an important platform to advertise on, and this is particularly true coming out of a pandemic where sales need a dramatic boost. At the same time, as noted by Stephan Loerke, the chief executive of the World Federation of Advertisers, this has become a societal rather than just a brand issue, and it is good to see major brands adding their own pressure to force change. 

Even though the action taken means the brands aren’t totally deserting Facebook, and most will likely feel, for commercial reasons, that they must resume advertising with Facebook after July, the point will have been made. Those brands will also feel that they have been true to worthwhile values, and their brands may ultimately be strengthened as a result.

For Facebook, particularly in the light of the very poor publicity it received following the Cambridge Analytica scandal, and over how the platform was shown to have been used to spread messages reportedly from Russia in relation to the last U.S. election, this campaign is likely to be another very public blow to its reputation and brand, as well as to its advertising coffers.  Although Twitter has been used by President Trump to express some very controversial views, it has received some very good publicity for publicly fact-checking the President’s tweets, despite threats made to the platform by President Trump as a result.

It remains to be seen how far Facebook will go to satisfy the campaign and its many other critics as it tries to balance commercial, operational and societal realities while managing a platform that receives hundreds of millions of posts a day.

After Pressure, Zoom Offers End-To-End Encryption for Everyone

Pressure from privacy group The Electronic Frontier Foundation (EFF) and Mozilla has led to Zoom offering end-to-end encryption to all its users, not just paying customers.

A U-Turn

The decision to offer end-to-end encryption to all customers is a U-turn for Zoom. Previously, and after capitalising on the huge surge in numbers of new Zoom customers due to the pandemic, it was reported that Zoom had not wanted to offer end-to-end encryption to free users because it wanted to co-operate with U.S. law enforcement agencies, e.g. the FBI and local law enforcement. Zoom’s position appears to have been based on an assumption that free end-to-end encryption could lead to Zoom being used for illicit purposes by some users.  This is reminiscent of reports that WhatsApp was used in the UK by those responsible for the London Bridge terror attack, perhaps because its end-to-end encryption would hide their communications from the authorities.

The EFF and Mozilla

The U-turn by Zoom, in this case, appears to have been influenced by pressure from the EFF and Mozilla.

The EFF, for example, is claiming a victory after 20,000 people signed the EFF and Mozilla’s open letter to Zoom.  The EFF argues that “best-practice privacy and security features should not be restricted to users who can afford to pay a premium.”

The EFF also makes the point that with the pandemic forcing many more organisations onto Zoom than perhaps the platform was designed for, those who cannot afford enterprise subscriptions are often the ones who need strong security and privacy protections the most.

The EFF also points out that end-to-end encryption on the platform could help those working towards social justice, for example, those organising in “the Black-led movement against police violence”.

Must Part With Some Personal Details

Users cannot, however, expect to sign up for free end-to-end-encrypted Zoom without parting with some personal details. It appears that, in order for Zoom to feel comfortable with what it sees as its balancing act of offering the services that people want while fighting abuse and co-operating with law enforcement agencies, Free/Basic users seeking access to Zoom’s E2EE must go through a “one-time process” of providing additional pieces of information, such as verifying a phone number via a text message. Zoom also points out that it has a “Report a User” function.

What Does This Mean For Your Business?

For those smaller businesses and organisations using Zoom, perhaps since the beginning of the pandemic, it is good news that they now have access to strong security and privacy protection for free.

For other competing platforms, such as Slack and Microsoft Teams, the pressure is now on to follow suit in order to compete.

The hope is, however, that the phone numbers given to Zoom for authentication when signing up for end-to-end encryption will be used purely for that purpose and not disclosed to other parties.