Research Indicates Zoom Is Being Targeted By Cybercriminals

With many people working from home due to coronavirus, research by Check Point indicates that cyber-criminals may be targeting the video conferencing app ‘Zoom’.

Domains

Cybersecurity company ‘Check Point’ reports witnessing a major increase in new domain registrations over the last few weeks where the domain name includes the word ‘Zoom’.  According to a recent report on Check Point’s blog, more than 1,700 such domains have been registered since the beginning of the year, with 25 per cent of them registered over the past week. Check Point’s research indicates that 4 per cent of these recently registered domains contain suspicious characteristics.

Concern In The U.S.

The huge rise in Zoom’s user numbers, particularly in the U.S., has also led New York’s Attorney General, Letitia James, to ask Zoom whether it has reviewed its security measures recently, and to suggest to Zoom that it may have been relatively slow at addressing issues in the past.

Not Just Zoom

Check Point has warned that Zoom is not the only app being targeted at the moment, as new phishing websites have been launched posing as every leading communications application.  For example, the official classroom.google.com website has been impersonated by googloclassroom.com and googieclassroom.com.
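Lookalike domains of this kind can often be flagged automatically by measuring how close a newly registered domain is to a known-good one. The sketch below is illustrative only (the allow-list and function name are our own, not Check Point’s method), using `difflib` from the Python standard library:

```python
from difflib import SequenceMatcher

# Hypothetical allow-list of genuine domains (illustrative only).
KNOWN_GOOD = ["classroom.google.com", "zoom.us", "teams.microsoft.com"]

def lookalike_score(domain: str) -> float:
    """Highest similarity (0.0-1.0) between `domain` and any known-good domain."""
    return max(SequenceMatcher(None, domain, real).ratio() for real in KNOWN_GOOD)

# The impersonating domains from the article score close to the real site
# without matching it exactly - a classic typosquatting signature.
for suspect in ["googloclassroom.com", "googieclassroom.com"]:
    print(suspect, round(lookalike_score(suspect), 2))
```

A genuine domain scores 1.0 against itself, while an unrelated domain scores much lower, so a score that is high but short of 1.0 is a useful signal for closer inspection.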

Malicious Files Too

Check Point also reports detecting malicious files with names related to the popular apps and platforms being used by remote workers during the coronavirus lockdown.  For example, malicious file names observed include “zoom-us-zoom_##########.exe” and “microsoft-teams_V#mu#D_##########.exe” (# is used here to represent a digit). Once these files are run, the InstallCore PUA is loaded onto the victim’s computer.  InstallCore is a potentially unwanted application (PUA) that cyber-criminals can use to install other malicious programs on a victim’s computer.
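Because the reported names follow a fixed shape (with # standing for a digit), they can be expressed as regular expressions. This is a minimal illustration of the naming pattern only, with a function name of our own; real endpoint protection relies on signatures and behaviour, not file names:

```python
import re

# '#' in the article's reported file names stands for a digit, hence \d here.
PATTERNS = [
    re.compile(r"^zoom-us-zoom_\d+\.exe$", re.IGNORECASE),
    re.compile(r"^microsoft-teams_V\dmu\dD_\d+\.exe$", re.IGNORECASE),
]

def matches_reported_name(filename: str) -> bool:
    """True if `filename` matches one of the malicious name patterns reported."""
    return any(p.match(filename) for p in PATTERNS)

print(matches_reported_name("zoom-us-zoom_1234567890.exe"))  # True
print(matches_reported_name("zoom-installer.exe"))           # False
```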

Suggestions

Some of the ways that users can protect their computers/devices, networks and businesses from these types of threats, as suggested by Check Point, include being extra cautious with emails and files from unfamiliar senders, not opening attachments or clicking on links in emails (common phishing tactics), and paying close attention to the spelling of domains and email addresses and to spelling errors in emails and on websites.  Check Point also suggests Googling the company you’re looking for to find its official website, rather than just clicking on a link in an email, which could redirect to a fake (phishing) site.

What Does This Mean For Your Business?

This research highlights how cyber-criminals are always quick to capitalise on situations where people have been adversely affected by unusual events and where they know people are in unfamiliar territory.  In this case, people are also divided geographically and are trying to cope with many situations at the same time, may be a little distracted, and may be less vigilant than normal.

The message to businesses, based on evidence from security companies tracking the behaviour of cyber-criminals, is that extra vigilance is now needed and that all employees need to be very careful, particularly in how they deal with emails from unknown sources, or from apparently known sources offering convincing reasons and incentives to click on links or download files. 

Featured Article – Microsoft Teams User Numbers Up By 12 Million In A Week

Microsoft’s collaborative working platform ‘Teams’ is reported to have seen a massive 12 million user boost in one week as a result of remote working during the coronavirus outbreak, and of Microsoft making the platform generally available through Office 365 from March 14.

What Is Teams?

Teams, announced in November 2016 and launched by Microsoft in 2017, is a platform designed to support collaborative working, combining features such as workplace chat, meetings, notes, and attachments. Described by Microsoft as a “complete chat and online meetings solution”, it normally integrates with the company’s Office 365 subscription office productivity suite. In July 2018, Microsoft introduced a free, basic-features version of Teams which did not require an Office 365 account, in order to increase user numbers and tempt users away from competitor ‘Slack’.

Microsoft Teams is also the replacement for Skype for Business Online, support for which will end on 31 July 2021, and all new Microsoft 365 customers have been getting Microsoft Teams by default since 1 September 2019.

March 14

Microsoft Corp. announced on March 14 that Microsoft Teams would be generally available in Office 365 for business customers in 181 markets and 19 languages.

Increased To 44 Million Users

The move to make Teams generally available to businesses with Office 365, coupled with a mass move to remote working as a result of COVID-19, has resulted in 12 million new users joining the platform in a week, bringing user numbers up from 32 million on 11 March to 44 million a week later.  The number is likely to have increased significantly again since 18 March.

What Does Teams Offer?

Microsoft Teams offers threaded chat capabilities which Microsoft describes as “a modern conversations experience”, and built-in Office 365 apps like Word, Excel, PowerPoint, OneNote, SharePoint and Power BI.  Also, Teams offers users ad-hoc (and scheduled) voice and video meetings and has security and compliance capabilities built-in as it supports global standards, including SOC 1, SOC 2, EU Model Clauses, ISO27001 and HIPAA. Users are also able to benefit from the fact that workspaces can be customised for each team using tabs, connectors and bots from third-party partners and Microsoft tools e.g. Microsoft Planner and Visual Studio Team Services. Microsoft says that more than 150 integrations are available or coming soon to Teams.

New Features

Microsoft reports that it has added more than 100 new features to Teams since November 2019.  These include an enhanced meeting experience (with scheduling), mobile audio calling, video calling on Android (coming soon to iOS), and email integration.  Teams has also benefited from improvements to accessibility with support for screen readers, high contrast and keyboard-only navigation.

Walkie-Talkie Phone

In January, Microsoft announced that it was adding a “push-to-talk experience” to Teams that turns employee- or company-owned smartphones and tablets into walkie-talkies.  The Walkie Talkie feature, available in private preview in the first half of this year through the Teams mobile app, offers clear, instant and secure voice communication over the cloud. 

Competition

There are, of course, other services in competition with Microsoft Teams. Slack, for example, is a cloud-based set of proprietary team collaboration tools and services.  Slack enables users (communities, groups, or teams) to join through a URL or invitation sent by a team admin or owner.  Although Slack was intended to be an organisational communication tool, it has morphed into a community platform i.e. it is a business technology that has crossed over into personal use. 

That said, Slack reported in October last year that it had 12 million daily active users, which was a 2 million increase since January 2019. 

Slack has stickiness and strong user engagement, which help to attract businesses that want to start using workstream collaboration software, but it faces challenges such as convincing big businesses that it is not just a chat app and that it is a worthy, paid-for alternative to better-known competitors like Microsoft’s Teams.

Like Teams, Slack has just introduced new features and has experienced a surge of growth in just over a month. 

Another competitor to Microsoft’s Teams is Zoom, which is a platform for video and audio conferencing, chat, and webinars that is often used alongside Google’s G Suite and Slack.  It has been reported that Zoom is now top of the free downloaded apps in Apple’s app store, and Learnbonds.com reports that downloads for Zoom increased by 1,270 per cent between February 22 and March 22.

Real-Life Example – Teams

A real-life example from Microsoft of how Teams is being put to good use comes from bicycle and cycling gear company Trek Bicycle.  Microsoft reports how Teams has become the project hub for the company, where all staff know where to find the latest documents, notes and tasks relating to team conversations, thereby making Teams a central part of the company’s “get-things-done-fast culture.”

Looking Forward

Many businesses are already using and gaining advantages from the speed and scope of communication, project context, and convenience of a cloud-based, accessible hub offered by collaborative working platforms like Teams.  The decision to make Teams generally available with Office 365 for business can only make the platform more popular, and the need for companies to quickly set up effective remote working has stimulated the market for these services and given users a crash-course in, and a strong reminder of, their strengths and benefits. 

The hope of Microsoft and other collaborative working platform providers is that companies will go on using the platforms long after they technically need to in order to deal with the COVID-19 lockdown, and that they will continue to use them going forward to keep improving the flexibility and productivity of their businesses, compete with other companies that are getting the best from them, and guard against excessive damage from any future lockdown situations.

Facebook Video Quality Reduced To Cope With Demand

Facebook and Instagram have reduced the quality of videos shared on their platforms in Europe as demand for streaming has increased due to self-isolation.

Lower Bitrate, Looks Similar

Facebook’s announcement of a lowering of the bit-rates for videos on Facebook and Instagram in Europe highlights the need to reduce network congestion, free up more bandwidth, and make sure that users stay connected at a time when demand is reaching very high levels because of the COVID-19 pandemic.  The move could have a significant positive impact when you consider that Facebook has around 300 million daily users in Europe alone, and that streaming video can account for as much as 60 per cent of traffic on fixed and mobile networks.

Although a reduction in bit-rates for videos will, technically, reduce the quality, the likelihood is that the change will be virtually imperceptible to most users.

Many Other Platforms

Facebook is certainly not the only platform taking this step, as Amazon, Apple TV+, Disney+ and Netflix have made similar announcements.  For example, Netflix is reducing its video bit rates while still claiming to allow customers to get HD and Ultra HD content (at lower image quality), and Amazon Prime Video has started to reduce its streaming bit rates, as has Apple’s streaming service.

Google’s YouTube is also switching all traffic in the EU to standard definition by default.

BT Say UK Networks Have The Capacity

BT’s Chief Technology and Information Officer, Howard Watson, has announced that the UK’s advanced digital economy means its networks have been overbuilt to accommodate HD streaming content, and that the UK’s fixed broadband network core has been built with the extra ‘headroom’ to support the evening peaks of network traffic that high-bandwidth applications create. Mr Watson has also pointed out that since people started to work from home more this month, weekday daytime traffic has increased by 35-60 per cent compared with similar days on the fixed network, peaking at 7.5 Tb/s, which is still only half the average evening peak and far short of the 17.5 Tb/s that the network is known to be able to handle.
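Those figures imply comfortable headroom: the daytime peak is well under half of what the network is known to handle. A quick check of the arithmetic:

```python
daytime_peak_tbps = 7.5      # weekday daytime peak reported by BT
known_capacity_tbps = 17.5   # capacity the network is known to handle

utilisation = daytime_peak_tbps / known_capacity_tbps
print(f"{utilisation:.0%} of known capacity")  # 43% of known capacity
```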

What Does This Mean For Your Business?

Amazon, Apple TV+, Netflix, Facebook and other platforms are clearly facing a challenge to their service delivery in Europe, but they have been quick to take a step that will at least mean there’s enough bandwidth for their services to be delivered, with the trade-off being a fall in viewing quality for customers.  Many customers, however, are unlikely to be too critical of the move, given the many other big changes that have been made to their lives as a result of the COVID-19 outbreak and the attempts to reduce its impact.  Netflix has even pointed out the extra benefit that its European viewers are likely to use 25 per cent less data when watching films as a result of the bit-rate changes. However, with online streaming services being one of the main pleasures that many people feel they have left to enjoy safely, the change in bit rate should be acceptable as long as the picture quality isn’t drastically reduced to the point of annoyance and distraction.

Surge In Demand For Teleconference Apps and Platforms That Enable Home Working

The need for people to work from home during the Covid-19 outbreak is reported to have led to a huge increase in the downloads of business teleconferencing apps and in the use of popular cloud-based services like G Suite.

Surge In Downloads

Downloads of remote and collaborative working and communication apps such as Tencent Conference (https://intl.cloud.tencent.com/), WeChat Work (from China), Zoom, Microsoft Teams and Slack are reported to have risen fivefold since the beginning of the year, driven by the effects of the Covid-19 outbreak.

For example, services such as Rumii (a VR platform, normally $14.99 per month) and Spatial, which enable users to hold digital meetings in virtual rooms with 3D versions of their co-workers, have seen a boost in the number of users, as has video communications app Zoom.

Freemium Versions

Even though many of these apps have seen a surge in user numbers, which could see users continuing to use and recommend them in future if their experiences are good, the ‘freemium’ versions (the basic program is free, while advanced features must be paid for) appear to account for most downloads.

Some companies, such as Rumii, have now started to offer services for free after noticing a rise in the number of downloads as Covid-19 spread in the United States.

G Suite

Google’s cloud-based G Suite service (Gmail, Docs, Drive, Hangouts, Sheets, Slides, Keep, Forms, Sites) is reported to have gone past the two billion monthly active users mark at the end of last year. It appears to have gained many active users due to people preparing to work from home following the Covid-19 outbreak.

Google has also offered parts of its enterprise service e.g. Hangouts Meet (video conferencing) for free to help businesses during the period when many employees will need to work from home.

Microsoft

Microsoft is also reported to be offering a free six-month trial for its collaborative working platform ‘Teams’, which surpassed the 20 million active user mark back in November.

Unfortunately, Microsoft Teams suffered a reported two-hour outage across Europe on Monday, just as many employees tried to log in as part of their first experience of working at home in what some commentators are now calling the new “post-office” era.

What Does This Mean For Your Business?

Cloud-based, collaborative and remote working and communications platforms are now providing a vital lifeline to many businesses and workers at the start of what is likely to be a difficult, disruptive, dangerous and stressful time.  Companies that can get the best out of these cloud-based tools, especially if they can be used effectively on a smartphone, may have a better chance of helping their businesses survive a global threat. Also, the fact that many companies and employees have been forced to seek out and use cloud-based apps and platforms like these could see them continuing to make good use of them when the initial crisis is over, and we could be witnessing the trigger of a longer-term shift towards a post-office era in which businesses make sure they can withstand the effects of future similar threats.

Facebook Sued Down-Under For £266bn Over Cambridge Analytica Data Sharing Scandal

Six years after the personal data of 87 million users was harvested and later shared without user consent with Cambridge Analytica, Australia’s privacy watchdog is suing Facebook for an incredible £266bn over the harvested data of its citizens.

What Happened?

From March 2014 to 2015, the ‘This Is Your Digital Life’ app, created by academic Aleksandr Kogan and downloaded by 270,000 people, was able to harvest Facebook data from those users and from their friends.

The harvested data was then shared with (sold to) data analytics company Cambridge Analytica, which used it to build software that could target voters with personalised political adverts (political profiling) to influence choices at the ballot box in the last U.S. presidential election, and for the Leave campaign in the UK’s Brexit referendum.

Australia

The lawsuit, brought by the Australian Information Commissioner against Facebook Inc alleges that, through the app, the personal and sensitive information of 311,127 Australian Facebook Users (Affected Australian Individuals) was disclosed and their privacy was interfered with.  Also, the lawsuit alleges that Facebook did not adequately inform those Australians of the manner in which their personal information would be disclosed, or that it could be disclosed to an app installed by a friend, but not installed by that individual.  Furthermore, the lawsuit alleges that Facebook failed to take reasonable steps to protect those individuals’ personal information from unauthorised disclosure.

In the lawsuit, the Australian Information Commissioner, therefore, alleges that the Australian Privacy Principle (APP) 6 has been breached (disclosing personal information for a purpose other than that for which it was collected), as has APP 11 (failing to take reasonable steps to protect the personal information from unauthorised disclosure).  Also, the Australian Information Commissioner alleges that these breaches are in contravention of section 13G of the Privacy Act 1988.

£266 Billion!

The massive potential fine of £266 billion was arrived at by multiplying the maximum penalty of AU$1,700,000 (£870,000) for each contravention of the Privacy Act by the 311,127 Australian Facebook Users (Affected Australian Individuals).
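As a quick sanity check of that multiplication (the small gap to the headline £266bn comes down to currency conversion and rounding of the per-contravention figure):

```python
affected_users = 311_127   # Affected Australian Individuals
max_fine_gbp = 870_000     # per-contravention maximum quoted in the article

total = affected_users * max_fine_gbp
print(f"£{total:,}")  # £270,680,490,000 - in the region of the reported £266bn
```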

What Does This Mean For Your Business?

Back in July 2018, 16 months after the UK Information Commissioner’s Office (ICO) began its investigation into Facebook’s sharing of users’ personal details with political consulting firm Cambridge Analytica, the ICO announced that Facebook would be fined £500,000 for data breaches.  This Australian lawsuit, should it not go Facebook’s way, is another in a series of such actions over the same scandal, but the £266 billion figure would be a massive hit and would, for example, totally dwarf the biggest settlement to date against Facebook: $5 billion to the US Federal Trade Commission over privacy matters.  To put it in even greater perspective, an eye-watering potential fine of £266 billion would make the biggest GDPR fine to date, the £183 million levied on British Airways, look insignificant. 

Clearly, this is another very serious case for Facebook to focus its attention on, but the whole matter highlights just how seriously data security and privacy matters are now taken, and how they have been written into different national laws with very serious penalties for non-compliance attached. Facebook has tried hard since the scandal to introduce and publicise many new features and aspects of its service that could help to regain the trust of users, both in its platform’s safeguarding of their details and in stopping fake news from being distributed via its platform.  This announcement by the Australian Information Commissioner is, therefore, likely to be an extremely painful reminder of a regrettable period in the tech giant’s history, not to mention a potential threat to Facebook.

Those whose data may have been disclosed, shared and used in a way that contravened Australia’s laws may be pleased that their country is taking such a strong stance in protecting their interests, and this may send a very powerful message to other companies that store and manage the data of Australian citizens.

Featured Article – Combatting Fake News

The spread of misinformation/disinformation/fake news by a variety of media, including digital and printed stories and deepfake videos, is a growing threat in what has been described as our ‘post-truth era’, and many people, organisations and governments are looking for effective ways to weed out fake news, and to help people to make informed judgements about what they hear and see.

The exposure of fake news and its part in recent election scandals, the common and frequent use of the term by prominent figures and publishers, and the need for fact-checking services have all contributed to an erosion of public trust in the news people consume. For example, YouGov research used to produce the annual Digital News Report (2019) from the Reuters Institute for the Study of Journalism at the University of Oxford showed that public concern about misinformation remains extremely high, reaching a 55 per cent average across 38 countries, with less than half (49 per cent) of people trusting the news media they use themselves.

The spread of fake news online, particularly at election times, is of real concern. With the recent UK election, the UK Brexit referendum, the 2017 UK general election and the last U.S. presidential election all found to have suffered interference in the form of so-called ‘fake news’ (and with the 59th US presidential election scheduled for Tuesday, November 3, 2020), the subject is high on the world agenda. 

Challenges

Those trying to combat the spread of fake news face a common set of challenges, such as those identified by CEO of OurNews, Richard Zack, which include:

– There are people (and state-sponsored actors) worldwide who are making it harder for people to know what to believe, e.g. by spreading fake news and misinformation and distorting stories.

– Many people don’t trust the media or don’t trust fact-checkers.

– Simply presenting facts doesn’t change peoples’ minds.

– People prefer/find it easier to accept stories that reinforce their existing beliefs.

Also, some research (Stanford’s Graduate School of Education) has shown that young people may be more susceptible to seeing and believing fake news.

Combatting Fake News

So, who’s doing what online to meet these challenges and combat the fake news problem?  Here are some examples of those organisations and services leading the fightback, and what methods they are using.

Browser-Based Tools

Recent YouGov research showed that 26 per cent of people say they have started relying on more ‘reputable’ sources of news, but as well as simply choosing what they regard to be trustworthy sources, people can now choose to use services which give them shorthand information on which to make judgements about the reliability of news and its sources.

Since people consume online news via a browser, browser extensions (and app-based services) have become more popular.  These include:

– Our.News.  This service combines objective facts about an article with subjective views that incorporate user ratings to create labels (like nutrition labels on food) next to news articles that a reader can use to make a judgement.  Our.News labels use publisher descriptions from Freedom Forum, bias ratings from AllSides, and information about an article’s sources, author and editor.  It also uses fact-checking information from sources including PolitiFact, Snopes and FactCheck.org, labels such as “clickbait” or “satire”, and user ratings and reviews.  The Our.News browser extension is available for Firefox and Chrome, and there is an iOS app. For more information go to https://our.news/.

– NewsGuard. This service, for personal use or for NewsGuard’s library and school system partners, offers a reliability rating score of 0-100 for each site based on its performance against nine key criteria, plus ratings icons (green-to-red) next to links on all of the top search engines, social media platforms and news aggregation websites.  Also, NewsGuard gives summaries showing who owns each site and its political leaning (if any), as well as warnings about hoaxes, political propaganda, conspiracy theories, advertising influences and more.  For more information, go to https://www.newsguardtech.com/.

Platforms

Another approach to combatting fake news is to create a news platform that collects and publishes news that has been checked and is given a clear visual rating for users of that platform.

One such example is Credder, a news review platform which allows journalists and the public to review articles, and to create credibility ratings for every article, author, and outlet.  Credder focuses on credibility, not clicks, and uses a Gold Cheese (yellow) symbol next to articles, authors, and outlets with a rating of 60% or higher, and a Mouldy Cheese (green) symbol next to articles, authors, and outlets with a rating of 59% or less. Readers can, therefore, make a quick choice about what they choose to read based on these symbols and the trust-value that they create.
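The threshold logic behind those symbols is simple enough to sketch (the function name is our own; the thresholds are those described above):

```python
def credder_label(rating: float) -> str:
    """Map a Credder-style credibility rating (0-100) to its cheese symbol:
    60% or higher earns Gold Cheese, 59% or less gets Mouldy Cheese."""
    return "Gold Cheese" if rating >= 60 else "Mouldy Cheese"

print(credder_label(72))  # Gold Cheese
print(credder_label(41))  # Mouldy Cheese
```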

Credder also displays a ‘Leaderboard’ which is based on rankings determined by the credibility and quantity of reviewed articles. Currently, Credder ranks nationalgeographic.com, gizmodo.com and cjr.org as top sources with 100% ratings.  For more information see https://credder.com/.

Automation and AI

Many people now consider automation and AI to be an approach and a technology that is ‘intelligent’, fast, and scalable enough to start tackling the vast amount of fake news being produced and circulated.  For example, Google and Microsoft have been using AI to automatically assess the truth of articles.  Also, initiatives like the Fake News Challenge (http://www.fakenewschallenge.org/) seek to explore how AI technologies, particularly machine learning and natural language processing, can be employed to combat fake news, and support the idea that AI holds promise for significantly automating parts of the procedure human fact-checkers use to determine whether a story is real or a hoax.

However, the human-written rules underpinning AI, and how AI is ‘trained’ can also lead to bias.

Government

Governments clearly have an important role to play in combatting fake news, especially since fake news/misinformation has been shown to have been spread via different channels (e.g. social media) to influence aspects of democracy and electoral decision-making.

For example, in February 2019, the Digital, Culture, Media and Sport Committee published a report on disinformation and ‘fake news’ highlighting how “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms”.  The UK government called for a shift in the balance of power between “platforms and people” and for tech companies to adhere to a code of conduct written into law by Parliament and overseen by an independent regulator.

Also, in the US, Facebook’s Mark Zuckerberg has been made to appear before the U.S. Congress to discuss how Facebook tackles false reports.

Finland – Tackling Fake News Early

One example of a government taking a different approach to tackling fake news is that of Finland, a country recently rated Europe’s most resistant nation to fake news.  In Finland, the evaluation of news and fact-checking behaviour was introduced into the school curriculum as part of a government strategy after 2014, when Finland was targeted with fake news stories from its Russian neighbour.  The changes to the school curriculum, across core areas in all subjects, are therefore designed to make Finnish people able, from a very young age, to detect and do their part to fight false information.

Social Media

The use of Facebook to spread fake news that is likely to have influenced voters in the UK Brexit referendum, the 2017 UK general election and the last U.S. presidential election put social media and its responsibilities very much in the spotlight.  Also, the Cambridge Analytica scandal and the illegal harvesting of 50 million Facebook profiles in early 2014 for apparent electoral profiling purposes damaged trust in the social media giant.

Since then, Facebook has tried to be seen to be actively tackling the spread of fake news via its platform.  Its efforts include:

– Hiring the London-based registered charity ‘Full Fact’, which reviews stories, images and videos, in an attempt to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.  Facebook is also reported to be working with fact-checkers in more than 20 countries, and to have had a working relationship with Full Fact since 2016.

– In October 2018, Facebook announced a new rule for the UK meaning that anyone who wishes to place an advert relating to a live political issue or promoting a UK political candidate (referencing political figures, political parties, elections, legislation before Parliament or past referenda that are the subject of national debate) will need to prove their identity and that they are based in the UK. The adverts they post will also have to carry a “Paid for by” disclaimer to enable Facebook users to see who they are engaging with when viewing the ad.

– In October 2019, Facebook launched its own ‘News’ tab on its mobile app which directs users to unbiased, curated articles from credible sources in a bid to publicly combat fake news and help restore trust in its own brand.

– In January this year, Monika Bickert, Vice President of Facebook’s Global Policy Management announced that Facebook is banning deepfakes and “all types of manipulated media”.

Other Platforms & Political Adverts

Political advertising has become mixed up with the spread of misinformation in the public perception in recent times.  With this in mind, some of the big tech and social media players have been very public about making new rules for political advertising.

For example, in November 2019, Twitter Inc banned political ads, including ads referencing a political candidate, party, election or legislation.  Also, at the end of 2019, Google took a stand against political advertising by saying that it would limit audience targeting for election adverts to age, gender and the general location at a postal code level.

Going Forward

With a U.S. election this year, the sheer number of sources, and the scale and resources that some (state-sponsored) actors have, the spread of fake news is likely to remain a serious problem for some time yet.  From the Finnish example of creating citizens who have a better chance than most of spotting fake news, to browser-based extensions, moderated news platforms, the use of AI, and government and other scrutiny and interventions, we are all now aware of the problem, the fightback is under way, and we have growing access to ways of making our own, more informed decisions about what we read and watch and how credible and genuine it is.

WhatsApp Ceases Support For More Old Phone Operating Systems

WhatsApp has announced that its messaging app will no longer work on outdated operating systems, a change that could affect millions of smartphone users.

Android versions 2.3.7 and Older, iOS 8 and Older

The change, which took place on February 1, means that WhatsApp has ended support for Android operating system versions 2.3.7 and older and for iOS 8 and older. Users of WhatsApp with those operating systems on their smartphones will no longer be able to create new accounts or re-verify existing accounts.  Although these users will still be able to use WhatsApp on their phones, WhatsApp has warned that, because it has no plans to continue developing for the old operating systems, some features may stop functioning at any time.
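The cut-off amounts to a simple version comparison. A minimal sketch, assuming dotted numeric version strings (the helper names and error handling are our own, not WhatsApp’s code):

```python
# Cut-offs from the announcement: Android 2.3.7 and older, iOS 8 and older.
ANDROID_CUTOFF = (2, 3, 7)
IOS_CUTOFF_MAJOR = 8

def still_supported(platform: str, version: str) -> bool:
    """True if WhatsApp still supports this OS version after the February change."""
    parts = tuple(int(p) for p in version.split("."))
    if platform == "android":
        return parts > ANDROID_CUTOFF   # tuple comparison: (4, 4, 2) > (2, 3, 7)
    if platform == "ios":
        return parts[0] > IOS_CUTOFF_MAJOR
    raise ValueError(f"unknown platform: {platform}")

print(still_supported("android", "2.3.7"))  # False
print(still_supported("ios", "9.3"))        # True
```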

Why?

The change is consistent with the Facebook-owned app’s strategy of withdrawing support for older systems and older devices, as it did back in 2016 (smartphones running Android 2.2 Froyo and older, Windows Phone 7 and older, and iOS 6 and older), and again when it withdrew support for Windows phones on 31 December 2019.

For several years now, WhatsApp has made no secret of wanting to maintain the integrity of its end-to-end encrypted messaging service.  Its changes are intended to ensure that new features can be added to keep the service competitive, to maintain feature parity across different systems and devices, and to focus on the operating systems that it believes the majority of its customers in its main markets now use.

Security & Privacy?

Since there will no longer be updates for older operating systems, continuing to use them could expose users to privacy and security risks.

What Now?

Users who have a smartphone with an older operating system can update the operating system, or upgrade to a newer smartphone model, in order to ensure that they can continue using WhatsApp.

The WhatsApp messaging service can also now be accessed through a desktop browser by syncing with a user’s phone.

What Does This Mean For Your Business?

WhatsApp is used by many businesses for general communication and chat, groups and sending pictures, and for those business users who still have a smartphone with an older operating system, this change may be another reminder that a perhaps overdue upgrade is at hand.  Some critics, however, have pointed out that the move may have more of a negative effect on WhatsApp users in growth markets, e.g. Asia and Africa, where many older devices and operating systems are still in use.

For WhatsApp, this move is a way to stay current and competitive in its core markets and to ensure that it can give itself the scope to offer new features that will keep users loyal and engaged with and committed to the app.

Business Leaders Lack Vital Digital Skills Says OU Survey

The Open University’s new ‘Leading in a Digital Age’ report highlights a link between improved business performance and leaders who are equipped, through technology training, to manage digital change.

Investing In Digital Skills Training

The latest version of the annual report, which bases its findings on a survey of 950 CTOs and senior leaders within UK organisations, concludes that leaders who invested in digital skills training are experiencing improved productivity (56 per cent), greater employee engagement (55 per cent), enhanced agility and, vitally, increased profit.

The flipside, highlighted in the same survey, is that almost half (47 per cent) of the business leaders surveyed felt they lacked the tech skills to manage in the digital age, and more than three-quarters of them acknowledged that they could benefit from more digital training.

Key Point

The key point revealed by the OU survey and report is that the development of digital skills in businesses is led from the top, and that businesses that invest in the learning and development of digital skills are likely to be better able to take advantage of opportunities in what could now be described as a ‘digital age’.

Skills Shortages

The report acknowledges the digital skills shortages that UK businesses and organisations face (63 per cent of senior business leaders report a skills shortage for their organisation), and it identifies a regional divide among companies reporting skills shortages: more employers in the South, and particularly the South West, are finding that skills are in short supply and reporting that recruitment for digital roles takes longer.

One likely contributing factor to some geographical/regional divides in skills shortages, and to the difficulty of recruiting for tech roles in those areas, may be the spending, per area, on addressing those shortages.  For example, London is reported to have spent £1.4 billion in 2019 (the equivalent of £30,470 per organisation), while the North East spent the least (£172.2 million) and the South East spent only £10,260 per organisation.
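As a rough sanity check on these reported figures, and assuming (the report does not state this) that the per-organisation number is simply total regional spend divided by the number of organisations counted, the London totals imply roughly how many organisations sit behind that average:

```python
# Rough sanity check on the reported regional spend figures.
# Assumption (not stated in the OU report): the per-organisation figure
# is total regional spend divided by the number of organisations counted.

london_total = 1_400_000_000   # £1.4 billion reported for London (2019)
london_per_org = 30_470        # £30,470 per organisation

# Implied number of London organisations behind the average
implied_orgs = london_total / london_per_org
print(f"Implied London organisations: {implied_orgs:,.0f}")  # ~45,947
```

This is only an illustration of how the two published numbers relate; the report itself does not give the organisation counts.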

Factors Affecting The Skills Shortage

The OU report identifies several key factors that appear to be affecting the skills shortage and the investment that may be needed to address those skills shortages. These include the uncertainty over Brexit, increased competition, an ageing population, the speed and scope of the current ‘digital revolution’, and a lack of diversity.

What Does This Mean For Your Business?

Bearing in mind that the OU, whose survey and report this was, is itself a supplier of skills training, the report nonetheless makes some relevant and important points.  In many businesses, for example, managers and owners are most likely to be the ones with the most integrated picture of the business and its aims, and with better digital skills and awareness they may be more likely to identify opportunities, and more likely to promote and invest in the digital skills training that could be integral to their organisation being able to take advantage of those opportunities.

The tech skills shortage in the UK is, unfortunately, not new, and solving the skills gap challenge is not down to businesses alone. The government, the education system and businesses need to find ways to work together to develop a base of digital skills in the UK population and to make sure that the whole tech ecosystem finds effective ways to address the skills gap and keep the UK’s tech industries and businesses attractive and competitive.  As highlighted in the OU report, apprenticeships may be one more integrated way to help bridge skills shortages.

Tech Tip – Using WhatsApp On Your PC

If you’re working at your PC and you need to access WhatsApp without having to keep looking at your phone, there’s an easy way to use WhatsApp on your PC – here’s how:

– Open web.whatsapp.com in a browser.

– Open WhatsApp on your phone.

– Open the Chats screen, select ‘Menu’, and select ‘WhatsApp Web’.

– Scan the QR code with your phone.

– You will now be able to see your WhatsApp chats on your PC every time you open web.whatsapp.com in a browser.

Facebook Bans Deepfake Videos

In a recent blog post, ahead of the forthcoming US election, Monika Bickert, Facebook’s Vice President of Global Policy Management, announced that the social media giant is banning deepfakes and “all types of manipulated media”.

Not Like Last Time

With the 59th US presidential election scheduled for Tuesday, November 3, 2020, Facebook appears to be taking no chances after the trust-damaging revelations around unauthorised data sharing with Cambridge Analytica, and the use of the platform by foreign powers such as Russia in an attempt to influence the outcome of the 2016 election of Donald Trump.

The fallout from the news that 50 million Facebook profiles were harvested as early as 2014, in order to build a software program that could predict and use personalised political adverts to influence choices at the ballot box in the last U.S. election, included damaged trust in Facebook, a substantial fine, and a fall in the number of daily users in the United States and Canada for the first time in the company’s history.

Deepfakes

One of the key concerns for Facebook this time around appears to be so-called ‘deepfake’ videos.  These use deep learning technology and manipulated images of target individuals (found online), often celebrities, politicians and other well-known people, to create very convincing videos of the subjects saying and doing whatever the video-maker wants. As well as influencing public thinking about political candidates, and potentially election results, such videos would be very damaging for Facebook, which has been very public about trying to rid itself of ‘fake news’ and does not want to be seen as a platform for the easy distribution of deepfake videos.  No doubt Facebook’s CEO Mark Zuckerberg would like to avoid having to appear before Congress again to answer questions about his company’s handling of personal data, as he had to back in April 2018.

The New Statement From Facebook

This latest blog post from Facebook says that, as a matter of policy, it will now remove any misleading media from its platform if the media meets two criteria:

  • It has been synthesised, i.e. edited beyond mere adjustments for clarity or quality, to the point where the ‘average person’ could be misled into thinking the subject of the media/video is saying words that they did not actually say, and…
  • It is the product of artificial intelligence or machine learning that has merged, replaced or superimposed content onto a video in order to make it appear authentic.

Not Satire

Facebook has been careful to point out that this policy change will not affect content that is clearly intended to be parody or satire, or videos that have been edited just to omit or change the order of the words featured in them.

Existing Policies

Any media posted to Facebook is subject to the social media giant’s existing comply-or-be-removed ‘Community Standards’ policies which cover, among other things, voter suppression and hate speech.

What Will Happen?

Facebook says that any videos that don’t meet its standards for removal are still eligible for review by one of its independent third-party fact-checkers (which include 50+ partners worldwide), and that any photos or videos rated as false or partly false by a fact-checker will have their distribution “significantly” reduced in News Feed and will be rejected if they are being run as ads. Also, those who see such content and try to share it, or have already shared it, will be shown warnings alerting them that it’s false.

Measures

Facebook has taken many measures to ensure that it is not seen as a platform that can’t be trusted with user data or as a distributor of fake news.  For example:

– In January 2019 Facebook announced (in the UK) that it was working with London-based, registered charity ‘Full Fact’ to review stories, images and videos, in an attempt to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.

– In September 2019, Facebook launched its Deep Fake Detection Challenge, with $10 million in grants and with a cross-sector coalition of organisations in order to encourage the production of tools to detect deepfakes.

– In October 2019, Facebook launched the ‘News’ tab on its mobile app to direct users to unbiased, curated articles from credible sources in a bid to publicly combat fake news and help restore trust in its own brand.

– Facebook has partnered with Reuters to produce a free online training course to help newsrooms worldwide to identify deepfakes and manipulated media.

Criticism

Despite this recent announcement of a policy change to help eradicate deepfakes from its platform, Facebook has been criticised by some commentators for appearing to allow, in certain situations apparently of its choosing, videos that could be described as misinformation.  For example, Facebook has said that content that violates its policies could be allowed if it is deemed newsworthy, e.g. presumably, the obviously doctored videos of Labour’s Keir Starmer and US House Speaker Nancy Pelosi.

What Does This Mean For Your Business?

Clearly, any country would like to guard against outside influence in its democratic processes and the deliberate spread of misinformation, and, bearing in mind the position of influence that Facebook has, it is good for everyone that it is taking responsibility and trying to block obvious attempts to spread misinformation by altering its policies and working with other organisations. Businesses that use Facebook as an advertising platform also need to know that Facebook users trust (and will continue to use) the platform, and so see their adverts, meaning it is important to businesses that Facebook is vigilant and takes action where it can.  Also, by helping to protect the democratic processes of the countries it operates in, particularly the US at the time of an election (and bearing in mind what happened last time), Facebook protects its own brand against accusations of allowing political influence through a variety of media on its platform, and against any further loss of public trust. This change of policy also shows that Facebook is trying to demonstrate readiness to deal with the up-to-date threat of deepfakes (even though they are relatively rare).

That said, Google, and Twitter (with its new restrictions on micro-targeting, for example), have both been very public about trying to stop lies in political advertising on their platforms, whereas Facebook has just been criticised by the IPA over its decision not to ban political ads that use micro-targeting and spurious claims to sway the opinions of voters.
