Survey Reveals Legal Action Risk For Private Sector Companies Over IR35 Tax Reforms

A survey by ContractorCalculator has revealed that many private sector companies may be at risk of legal action through misinterpreting the new IR35 tax reforms.

What Is IR35?

The IR35 tax reform legislation, set to be introduced this April, is designed to stop tax avoidance through ‘disguised employment’, which occurs when self-employed contractors set up their own limited company and pay themselves through dividends (which are not subject to National Insurance).  From April 2020, IR35 will essentially make medium and large private sector organisations responsible for determining whether their non-permanent contractors and freelancers should be taxed in the same way as permanent employees (inside IR35) or as off-payroll workers (outside IR35), based upon the work they do and how it is performed.

Also, the tax liability will transfer from the contractor to the fee-paying party i.e. the recruiter or the company that directly engages the contractor. HMRC hopes that the IR35 reforms will stop contractors from deliberately misclassifying themselves in order to reduce their employment tax liabilities.

The idea for the legislation dates back to 1999 under Chancellor Gordon Brown, and Chancellor Philip Hammond introduced IR35 for public bodies using contractors from April 2017.

National Insurance

One of the potential problem areas for private sector companies revealed by the ContractorCalculator questionnaire, answered by some 12,000 contractors, is that some may be unlawfully deducting employers’ national insurance contributions (NICs) from their contractors’ pay.  This means that they are effectively imposing double taxation on these contractors.
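
To make the ‘double taxation’ point concrete, below is a minimal, illustrative calculation (not tax advice) of the difference between lawfully funding employer NICs on top of an agreed rate and unlawfully deducting them from it.  The £500 day rate is hypothetical, and the flat 13.8% employer NIC rate is a simplification that ignores thresholds and other deductions.

```python
# Illustrative only: the effect of unlawfully deducting employer NICs
# from a contractor's agreed rate. A flat 13.8% (the 2019/20 headline
# employer NIC rate) is a simplification; real calculations involve
# thresholds and other deductions.

EMPLOYER_NIC_RATE = 0.138
day_rate = 500.00  # hypothetical agreed day rate (GBP)

# Lawful: the fee-payer funds employer NICs on top of the agreed rate.
lawful_cost_to_client = day_rate * (1 + EMPLOYER_NIC_RATE)  # 569.00
lawful_contractor_gross = day_rate                          # 500.00

# Unlawful: employer NICs are taken out of the agreed rate, so the
# contractor bears a tax that is not theirs to pay.
unlawful_contractor_gross = day_rate * (1 - EMPLOYER_NIC_RATE)  # 431.00

print(f"Lawful: client pays £{lawful_cost_to_client:.2f}, "
      f"contractor grosses £{lawful_contractor_gross:.2f}")
print(f"Unlawful: contractor grosses £{unlawful_contractor_gross:.2f}, "
      f"losing £{day_rate - unlawful_contractor_gross:.2f} per day")
```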

Given that 42% of contractors said they weren’t aware that such deductions were unlawful, the survey appears to show that although these companies have been acting unlawfully, it is likely to be because they have simply misinterpreted the new tax reforms, given the complicated nature of IR35.

Tribunal Threat

The survey also showed that 58% of survey participants who are classified as ‘inside’ IR35 (taxed in the same way as permanent employees) said that they would consider taking their client to an employment tribunal because, if they have to pay the same amount of tax as a permanent employee, they feel that they should receive the same benefits as permanent employees e.g. sick pay and a pension.

Contractor Loses Case

On this subject, there was news this week that an IT contractor who had worked through his limited company, Northern Light Solutions, for Nationwide for several years while being treated as outside IR35 has lost an appeal against a £70,000 tax demand from HMRC, which had successfully argued that he should have been categorised as inside IR35.

What Does This Mean For Your Business?

When the IR35 tax reforms were first announced, many business owners thought that the reforms appeared to be very complex and that not enough had been done by the government to raise awareness of the changes and to educate businesses and contractors about the implications and responsibilities.  This survey appears to support that view and shows that this lack of knowledge and awareness of IR35 could be leaving businesses open to the risk of legal action.  Contractors and the companies that use their services need to learn quickly about the dangers of hiring freelance workers long-term, and companies that use freelancers need to conduct proper due diligence in order to ensure that the business relationship they have with them complies with IR35.

Facebook Sued Down-Under For £266bn Over Cambridge Analytica Data Sharing Scandal

Six years after the personal data of 87 million users was harvested and later shared without user consent with Cambridge Analytica, Australia’s privacy watchdog is suing Facebook for an incredible £266bn over the harvested data of its citizens.

What Happened?

From March 2014 to 2015, the ‘This Is Your Digital Life’ app, created by British academic Aleksander Kogan and downloaded by 270,000 people, was able to harvest data from Facebook, gaining access to those users’ personal data and to that of their friends too.

The harvested data was then shared with (sold to) data analytics company Cambridge Analytica, which used it to build a software program that could predict voters’ preferences and use personalised political adverts (political profiling) to influence choices at the ballot box in the last U.S. election, and for the Leave campaign in the UK Brexit referendum.

Australia

The lawsuit, brought by the Australian Information Commissioner against Facebook Inc alleges that, through the app, the personal and sensitive information of 311,127 Australian Facebook Users (Affected Australian Individuals) was disclosed and their privacy was interfered with.  Also, the lawsuit alleges that Facebook did not adequately inform those Australians of the manner in which their personal information would be disclosed, or that it could be disclosed to an app installed by a friend, but not installed by that individual.  Furthermore, the lawsuit alleges that Facebook failed to take reasonable steps to protect those individuals’ personal information from unauthorised disclosure.

In the lawsuit, the Australian Information Commissioner, therefore, alleges that the Australian Privacy Principle (APP) 6 has been breached (disclosing personal information for a purpose other than that for which it was collected), as has APP 11 (failing to take reasonable steps to protect the personal information from unauthorised disclosure).  Also, the Australian Information Commissioner alleges that these breaches are in contravention of section 13G of the Privacy Act 1988.

£266 Billion!

The massive potential fine of £266 billion has been arrived at by multiplying the maximum penalty of AU$1,700,000 (around £870,000) for each contravention of the Privacy Act by the 311,127 Australian Facebook Users (Affected Australian Individuals).
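
As a sanity check, the arithmetic behind that headline figure can be reconstructed roughly as follows (the exchange rate below is an assumption, and the small gap versus the reported £266bn comes down to currency rounding):

```python
# Rough reconstruction of the reported maximum penalty figure:
# one maximum contravention penalty per affected individual.

MAX_PENALTY_AUD = 1_700_000     # Privacy Act maximum per contravention
AFFECTED_INDIVIDUALS = 311_127
GBP_PER_AUD = 0.51              # assumed approximate early-2020 rate

total_aud = MAX_PENALTY_AUD * AFFECTED_INDIVIDUALS
print(f"AU${total_aud:,} (~£{total_aud * GBP_PER_AUD / 1e9:.0f}bn)")
# -> AU$528,915,900,000 (~£270bn), in line with the reported £266bn
```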

What Does This Mean For Your Business?

Back in July 2018, 16 months after the UK Information Commissioner’s Office (ICO) began its investigation into Facebook’s sharing of the personal details of users with political consulting firm Cambridge Analytica, the ICO announced that Facebook would be fined £500,000 for data breaches.  This Australian lawsuit, should it not go Facebook’s way, represents another in a series of such lawsuits over the same scandal, but the £266 billion figure would be a massive hit and would, for example, totally dwarf the biggest settlement to date against Facebook of $5 billion to the US Federal Trade Commission over privacy matters.  To put it in even greater perspective, an eye-watering potential fine of £266 billion would make the biggest GDPR fine to date, £183 million for British Airways, look insignificant.

Clearly, this is another very serious case for Facebook to focus its attention on, but the whole matter highlights just how seriously data security and privacy matters are now taken, and how they have been included in different national laws with very serious penalties for non-compliance attached. Facebook has tried hard since the scandal to introduce and publicise many new features and aspects of its service that could help to regain the trust of users, both in its platform’s safeguarding of their details and in the area of stopping fake news from being distributed via its platform.  This announcement by the Australian Information Commissioner is, therefore, likely to be an extremely painful reminder of a regrettable period in the tech giant’s history, not to mention a potential financial threat to Facebook.

Those whose data may have been disclosed, shared and used in a way that contravened Australia’s laws may be pleased that their country is taking such a strong stance in protecting their interests, and this may send a very powerful message to other companies that store and manage the data of Australian citizens.

Dentist’s Legal Challenge To Anonymity of Negative Google Reviewer

ABC News in Australia has reported how a Melbourne dentist has convinced a Federal Court Judge to order tech giant Google to produce identifying information about a person who posted a damaging negative review about the dentist on Google’s platform.

What Happened?

The dentist, Dr Matthew Kabbabe, alleges that a reviewer’s comment posted on Google approximately three months ago advised others to “stay away” from his practice and that it damaged his teeth-whitening business and had a knock-on negative impact on his life.

Google provides a platform that allows reviews to be posted in order to benefit businesses (if reviews are good), perhaps encourage and guide businesses to give good service, and help Google users decide whether to use a service; in this case, however, the comment was the only bad one on a page of five-star reviews. In addition to the possibly defamatory nature of the comment, Dr Kabbabe objected to the anonymity that Google offers comment posters, since it means a comment could be something posted by a competitor or a disgruntled ex-employee to damage his (or any other) business. This drove him to take the matter to the Federal Court after, it has been reported, his requests to Google to take the comment down were unsuccessful.

Landmark Ruling

Not only did Federal Court Judge Justice Bernard Murphy order that Google divulge identifying information about the comment poster, listed only as “CBsm 23” (name, phone number, IP addresses, location metadata), but the tech giant has also been ordered to provide details of any other Google accounts (names and email addresses) used from the same IP address during the period of time in question.

Can Reply

Reviews posted on Google can be replied to by businesses as long as the replies comply with Google’s guidelines.

Dealing with apparently unfair customer comments online is becoming more common for many businesses.  For example, hotels and restaurants have long struggled with how to respond to potentially damaging criticism left by customers on TripAdvisor. Recently, the Oriel Daniel Tearoom in Llangefni, Anglesey made the news when its owner met negative comments with brutal replies and threats of lifetime bans.

What Does This Mean For Your Business?

For the most part, potential customers are likely to be able to take a balanced view of comments that they read when finding out more about a business. However, the fact that a Federal judge ruled against allowing those who have posted potentially damaging comments to hide behind online anonymity means that there may well be an argument for platforms to amend their rules to redress the balance more in favour of businesses.  It does seem unfair that, as in the case of the dentist, where the overwhelming majority of comments have been good, an individual, who may be a competitor or a person with an axe to grind, is allowed to anonymously and publicly publish damaging comments, whether justified or not, for a global audience to see and with no need to prove their allegations – something that would be subject to legal scrutiny in the offline world.  It will be interesting to see Google’s response to this ground-breaking ruling.

Google In Talks About Paying Publishers For News Content

It has been reported that Google is in talks with publishers with a view to buying in premium news content for its own news services to improve its relationship with EU publishers, and to combat fake news.

Expanding The Google News Initiative

Reports from the U.S. Wall Street Journal indicate that Google is in preliminary talks with publishers outside the U.S. in order to expand its News Initiative (https://newsinitiative.withgoogle.com/), the program through which Google works with journalists, news organisations, non-profits and entrepreneurs to ensure that fake news is effectively filtered out of current stories in the ‘digital age’.  Examples of big-name ‘partners’ that Google has worked with as part of the initiative include the New York Times, The Washington Post, The Guardian and fact-checking organisations like the International Fact-Checking Network and CrossCheck (to fact-check the French election).

As well as partnerships, the Google News Initiative provides a number of products for news publishing e.g. Subscribe With Google, News on Google, Fact Check tags and AMP stories (tap-operated, full-screen content).

This Could Please Publishers

The move by Google to pay for content should please publishers, some of whom have been critical of Google and other big tech players for hosting articles on their platforms that attract readers and advertising money, without paying to display them. Google faced particular criticism in France at the end of last year after the country introduced a European directive that should have made tech giants pay for news content but in practice simply led to Google removing the snippets below links to French news sites, and removing the thumbnail images that often appear next to news results.

Back in 2014 for example, Google closed its Spanish news site after it was required to pay “link tax” licensing fees to Spanish news sites and back in November 2018 Google would not rule out shutting down Google News in other EU countries if a “link tax” was adopted by them. 

Competitors

Google is also in competition with other tech giants who now provide their own fact-checked and moderated news services.  For example, back in October 2019, Facebook launched its own ‘News’ tab on its mobile app which directs users to unbiased, curated articles from credible sources.

What Does This Mean For Your Business?

For European countries and European publishers, it is likely to be good news that Google is possibly coming to the table to offer some money for the news content that it displays on its platform, and that it may be looking for a way to talk about and work through some of the areas of contention.

For Google, this is an opportunity for some good PR in an area where it has faced criticism in Europe, an opportunity to improve its relationship with publishers in Europe, plus a chance to add value to its news service and to compete with other tech giants that also offer news services with the fake news weeded out.

Featured Article – Combatting Fake News

The spread of misinformation/disinformation/fake news by a variety of media, including digital and printed stories and deepfake videos, is a growing threat in what has been described as our ‘post-truth era’, and many people, organisations and governments are looking for effective ways to weed out fake news, and to help people to make informed judgements about what they hear and see.

The exposure of fake news and its part in recent election scandals, the common and frequent use of the term by prominent figures and publishers, and the need for fact-checking services have all contributed to an erosion of public trust in the news people consume. For example, YouGov research used to produce the annual Digital News Report (2019) from the Reuters Institute for the Study of Journalism at the University of Oxford showed that public concern about misinformation remains extremely high, reaching a 55 per cent average across 38 countries, with less than half (49 per cent) of people trusting the news media they use themselves.

The spread of fake news online, particularly at election times, is of real concern. With the UK election just passed, and with the UK Brexit referendum, the 2017 UK general election and the last U.S. presidential election all found to have suffered interference in the form of so-called ‘fake news’ (and with the 59th US presidential election scheduled for Tuesday, November 3, 2020), the subject is high on the world agenda.

Challenges

Those trying to combat the spread of fake news face a common set of challenges, such as those identified by Richard Zack, CEO of Our.News, which include:

– There are people (and state-sponsored actors) worldwide who are making it harder for people to know what to believe, e.g. by spreading fake news and misinformation and distorting stories.

– Many people don’t trust the media or don’t trust fact-checkers.

– Simply presenting facts doesn’t change people’s minds.

– People prefer/find it easier to accept stories that reinforce their existing beliefs.

Also, some research (Stanford’s Graduate School of Education) has shown that young people may be more susceptible to seeing and believing fake news.

Combatting Fake News

So, who’s doing what online to meet these challenges and combat the fake news problem?  Here are some examples of those organisations and services leading the fightback, and what methods they are using.

Browser-Based Tools

Recent YouGov research showed that 26 per cent of people say they have started relying on more ‘reputable’ sources of news, but as well as simply choosing what they regard to be trustworthy sources, people can now choose to use services which give them shorthand information on which to make judgements about the reliability of news and its sources.

Since people consume online news via a browser, browser extensions (and app-based services) have become more popular.  These include:

– Our.News.  This service combines objective facts (about an article) with subjective views that incorporate user ratings to create labels (like nutrition labels on food) next to news articles that a reader can use to make a judgement.  Our.News labels use publisher descriptions from Freedom Forum, bias ratings from AllSides, and information about an article’s sources, author and editor.  They also use fact-checking information from sources including PolitiFact, Snopes and FactCheck.org, labels such as “clickbait” or “satire”, and user ratings and reviews.  The Our.News browser extension is available for Firefox and Chrome, and there is an iOS app. For more information go to https://our.news/.

– NewsGuard. This service, for personal use or for NewsGuard’s library and school system partners, offers a reliability rating score of 0-100 for each site based on its performance against nine key criteria, and displays rating icons (green-red ratings) next to links on all of the top search engines, social media platforms, and news aggregation websites.  Also, NewsGuard gives summaries showing who owns each site, its political leaning (if any), as well as warnings about hoaxes, political propaganda, conspiracy theories, advertising influences and more.  For more information, go to https://www.newsguardtech.com/.
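
To illustrate the general idea of a weighted-criteria reliability score of this kind, here is a minimal, hypothetical sketch.  The criteria names, weights and the 60-point cut-off below are invented for illustration; NewsGuard publishes its own criteria and point values.

```python
# Hypothetical weighted-criteria reliability score (0-100), in the
# general style of browser-based rating services. All names and
# weights here are illustrative, not any service's real methodology.

CRITERIA_WEIGHTS = {
    "does_not_repeatedly_publish_false_content": 22,
    "gathers_and_presents_information_responsibly": 18,
    "regularly_corrects_errors": 12,
    "separates_news_and_opinion": 12,
    "avoids_deceptive_headlines": 10,
    "discloses_ownership_and_financing": 7,
    "clearly_labels_advertising": 7,
    "reveals_who_is_in_charge": 7,
    "provides_author_information": 5,
}  # weights sum to 100

def reliability_score(assessment: dict) -> int:
    """Sum the weights of each criterion the site passes."""
    return sum(w for c, w in CRITERIA_WEIGHTS.items() if assessment.get(c))

def rating_icon(score: int) -> str:
    """Map a score to a simple green/red icon at an assumed 60 cut-off."""
    return "green" if score >= 60 else "red"

site = {c: True for c in CRITERIA_WEIGHTS}
site["avoids_deceptive_headlines"] = False  # fails one criterion
score = reliability_score(site)
print(score, rating_icon(score))  # -> 90 green
```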

Platforms

Another approach to combatting fake news is to create a news platform that collects and publishes news that has been checked and is given a clear visual rating for users of that platform.

One such example is Credder, a news review platform which allows journalists and the public to review articles, and to create credibility ratings for every article, author, and outlet.  Credder focuses on credibility, not clicks, and uses a Gold Cheese (yellow) symbol next to articles, authors, and outlets with a rating of 60% or higher, and a Mouldy Cheese (green) symbol next to articles, authors, and outlets with a rating of 59% or less. Readers can, therefore, make a quick choice about what they choose to read based on these symbols and the trust-value that they create.

Credder also displays a ‘Leaderboard’ which is based on rankings determined by the credibility and quantity of reviewed articles. Currently, Credder ranks nationalgeographic.com, gizmodo.com and cjr.org as top sources with 100% ratings.  For more information see https://credder.com/.

Automation and AI

Many people now consider automation and AI to be an approach and a technology that is ‘intelligent’, fast, and scalable enough to start to tackle the vast amount of fake news being produced and circulated.  For example, Google and Microsoft have been using AI to automatically assess the truth of articles.  Also, initiatives like the Fake News Challenge (http://www.fakenewschallenge.org/) seek to explore how AI technologies, particularly machine learning and natural language processing, can be employed to combat fake news, and support the idea that AI technologies hold promise for significantly automating parts of the procedure human fact-checkers use to determine whether a story is real or a hoax.
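
As a flavour of what ‘machine learning and natural language processing’ can mean in practice here, below is a minimal sketch of a supervised text classifier of the kind used in fake-news detection research.  The handful of headlines and labels are invented toy data; a real system would be trained on many thousands of fact-checked examples.

```python
# Minimal sketch of a supervised fake-news text classifier:
# TF-IDF word features + logistic regression (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Scientists confirm exoplanet discovery after peer review",
    "Government publishes official employment statistics",
    "Council announces road maintenance schedule for spring",
    "Miracle cure doctors don't want you to know about",
    "Shocking secret the media is hiding from you",
    "You won't believe what this celebrity said about vaccines",
]
labels = ["real", "real", "real", "fake", "fake", "fake"]  # toy labels

# TF-IDF turns text into weighted word/bigram vectors; the classifier
# then learns which patterns correlate with each label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

test = ["Doctors hate this one weird trick"]
print(model.predict(test), model.predict_proba(test).round(2))
```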

However, the human-written rules underpinning AI, and the way AI is ‘trained’, can also lead to bias.

Government

Governments clearly have an important role to play in combatting fake news, especially since fake news/misinformation has been shown to have been spread via different channels, e.g. social media, to influence aspects of democracy and electoral decision-making.

For example, in February 2019, the Digital, Culture, Media and Sport Committee published a report on disinformation and ‘fake news’ highlighting how “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms”.  The report called for a shift in the balance of power between “platforms and people” and for tech companies to adhere to a code of conduct written into law by Parliament and overseen by an independent regulator.

Also, in the US, Facebook’s Mark Zuckerberg has had to appear before the U.S. Congress to discuss how Facebook tackles false reports.

Finland – Tackling Fake News Early

One example of a government taking a different approach to tackling fake news is Finland, a country that has recently been rated Europe’s most resistant nation to fake news.  In Finland, news evaluation and fact-checking were introduced into the school curriculum as part of a government strategy after 2014, when Finland was targeted with fake news stories from its Russian neighbour.  The changes, which run across core areas in all subjects, are designed to enable Finnish people, from a very young age, to detect and do their part to fight false information.

Social Media

The use of Facebook to spread fake news that is likely to have influenced voters in the UK Brexit referendum, the 2017 UK general election and the last U.S. presidential election put social media and its responsibilities very much in the spotlight.  Also, the Cambridge Analytica scandal and the illegal harvesting of Facebook profiles from early 2014 (initially reported as 50 million profiles, later revised to 87 million) for apparent electoral profiling purposes damaged trust in the social media giant.

Since then, Facebook has tried to be seen to be actively tackling the spread of fake news via its platform.  Its efforts include:

– Hiring the London-based registered charity ‘Full Fact’, which reviews stories, images and videos in an attempt to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.  Facebook is also reported to be working with fact-checkers in more than 20 countries, and to have had a working relationship with Full Fact since 2016.

– In October 2018, Facebook announced a new rule for the UK meaning that anyone who wishes to place an advert relating to a live political issue or promoting a UK political candidate (referencing political figures, political parties, elections, legislation before Parliament or past referenda that are the subject of national debate) will need to prove their identity, and prove that they are based in the UK. The adverts they post will also have to carry a “Paid for by” disclaimer to enable Facebook users to see who they are engaging with when viewing the ad.

– In October 2019, Facebook launched its own ‘News’ tab on its mobile app which directs users to unbiased, curated articles from credible sources in a bid to publicly combat fake news and help restore trust in its own brand.

– In January this year, Monika Bickert, Vice President of Facebook’s Global Policy Management announced that Facebook is banning deepfakes and “all types of manipulated media”.

Other Platforms & Political Adverts

Political advertising has become mixed up with the spread of misinformation in the public perception in recent times.  With this in mind, some of the big tech and social media players have been very public about making new rules for political advertising.

For example, in November 2019, Twitter Inc banned political ads, including ads referencing a political candidate, party, election or legislation.  Also, at the end of 2019, Google took a stand against political advertising by saying that it would limit audience targeting for election adverts to age, gender and the general location at a postal code level.

Going Forward

With a U.S. election this year, the sheer number of sources, and the scale and resources that some (state-sponsored) actors have, the spread of fake news is likely to remain a serious problem for some time yet.  From the Finnish example of creating citizens who have a better chance than most of spotting fake news, to browser-based extensions, moderated news platforms, the use of AI, and government and other scrutiny and interventions, we are all now aware of the problem. The fight-back is underway, and we are getting more access to ways in which we can make our own more informed decisions about what we read and watch and how credible and genuine it is.

Featured Article – Proposed New UK Law To Cover IoT Security

The UK government’s Department for Digital, Culture, Media and Sport (DCMS), has announced that it will soon be preparing new legislation to enforce new standards that will protect users of IoT devices from known hacking and spying risks.

IoT Household Gadgets

This commitment to legislate leads on from last year’s proposal by then Digital Minister Margot James and follows a seven-month consultation with GCHQ’s National Cyber Security Centre, and with stakeholders including manufacturers, retailers, and academics. 

The proposed new legislation will improve digital protection for users of a growing number of smart household devices (devices with an Internet connection) that are broadly grouped together as the ‘Internet of Things’ (IoT).  These gadgets, of which there are an estimated 14 billion+ worldwide (Gartner), include kitchen appliances and gadgets, connected TVs, smart speakers, home security cameras, baby monitors and more.

In business settings, IoT devices can include elevators, doors, or whole heating and fire safety systems in office buildings.

What Are The Risks?

The risks are that the Internet connection in IoT devices can, if adequate security measures are not in place, provide a way in for hackers to steal personal data, spy on users in their own homes, or remotely take control of devices in order to misuse them.

Default Passwords and Link To Major Utilities

The main security issue with many of these devices is that they come with pre-set, unchangeable default passwords, and once these passwords have been discovered by cyber-criminals, the IoT devices are wide open to being tampered with and misused.

Also, IoT devices are deployed in many systems that link to and are supplied by major utilities e.g. smart meters in homes. This means that a large-scale attack on these IoT systems could affect the economy.

Examples

Real-life examples of the kind of IoT hacking that the new legislation will seek to prevent include:

– Hackers talking to a young girl in her bedroom via a ‘Ring’ home security camera (Mississippi, December 2019).  In the same month, a Florida family were subjected to vocal racial abuse in their own home, along with a loud alarm blast, after a hacker took over their ‘Ring’ security system.

– In May 2018, a US woman reported that a private home conversation had been recorded by her Amazon voice assistant and then sent to a random phone contact, who happened to be her husband’s employee.

– Back in 2017, researchers discovered that a sex toy with an in-built camera could also be hacked.

– In October 2016, the ‘Mirai’ attack used thousands of household IoT devices as a botnet to launch an online distributed denial of service (DDoS) attack (on the DNS service ‘Dyn’) with global consequences.

New Legislation

The proposed new legislation is intended to put pressure on manufacturers to ensure that (a hypothetical sketch of how such checks might look follows the list):

– All internet-enabled devices have a unique password and not a default one.

– There is a public point of contact for the reporting of any vulnerabilities in IoT products.

– The minimum length of time that a device will receive security updates is clearly stated.
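
As a purely hypothetical illustration of how the first two requirements might translate into a manufacturer’s pre-shipment checks, here is a minimal sketch.  The field names, the known-defaults list and the validation logic are all invented for illustration, not drawn from the proposed legislation.

```python
# Hypothetical pre-shipment check reflecting the proposed requirements:
# a unique non-default password, a public vulnerability contact,
# and a stated minimum security-update period.
import secrets

KNOWN_DEFAULTS = {"admin", "password", "12345", "root", "default"}

def provision_device(device: dict) -> dict:
    """Validate (and, where possible, fix) a device record before shipping."""
    pw = device.get("password", "")
    if not pw or pw.lower() in KNOWN_DEFAULTS:
        # Replace a shared default with a unique per-device password.
        device["password"] = secrets.token_urlsafe(12)
    if not device.get("vulnerability_contact"):
        raise ValueError("No public point of contact for vulnerability reports")
    if not device.get("security_updates_until"):
        raise ValueError("No stated minimum security-update period")
    return device

device = provision_device({
    "model": "SmartCam-100",
    "password": "admin",  # a known default: will be replaced
    "vulnerability_contact": "security@example-vendor.com",
    "security_updates_until": "2025-01-01",
})
print(device["password"])  # unique, randomly generated
```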

Challenges

Even though legislation could make manufacturers try harder to make IoT devices more secure, technical experts and commentators have pointed out that there are many challenges to making internet-enabled/smart devices secure because:

– Adding security to household internet-enabled ‘commodity’ items costs money. This cost would have to be passed on to the customer in higher prices, which would make the product less competitive. It may be, therefore, that security is being sacrificed to keep costs down: sell now and worry about security later.

– Even if there is a security problem in a device, the firmware (the device’s software) is not always easy to update. There are also costs involved in doing so, which manufacturers of lower-end devices may not be willing to incur.

– With devices that are typically infrequent, long-lasting purchases, e.g. white goods, we tend to keep them until they stop working, and we are unlikely to replace them because of a security vulnerability that is not fully understood. As such, these devices are likely to remain available to be used by cyber-criminals for a long time.

Looking Ahead

Introducing legislation that only requires manufacturers to make relatively simple changes to make sure that smart devices come with unique passwords and are adequately labelled with safety and contact information sounds as though it shouldn’t be too costly or difficult.  The pressure of having to display a label, by law, that indicates how safe the item is, could provide that extra motivation for manufacturers to make the changes and could be very helpful for security-conscious consumers.

The motivation for manufacturers to make the changes to the IoT devices will be even greater if faced with the prospect of retailers eventually being barred from selling products that don’t have a label, as was originally planned for the proposed legislation.

The hope from cyber-security experts and commentators is that the proposed new legislation won’t be watered down before it becomes law.

Police Images of Serious Offenders Reportedly Shared With Private Landlord For Facial Recognition Trial

There have been calls for government intervention after it was alleged that South Yorkshire Police shared its images of serious offenders with a private landlord (Meadowhall shopping centre in Sheffield) as part of a live facial recognition trial.

The Facial Recognition Trial

The alleged details of the image-sharing for the trial were brought to the attention of the public by the BBC radio programme File on 4, and by privacy group Big Brother Watch.

It has been reported that the Meadowhall shopping centre’s facial recognition trial ran for four weeks between January and March 2018 and that no signs warning visitors that facial recognition was in use were displayed. The owner of Meadowhall shopping centre is reported as saying (last August) that the data from the facial recognition trial was “deleted immediately” after the trial ended. It has also been reported that the police have confirmed that they supported the trial.

Questions

The disclosure has prompted some commentators to question the ethics and legality not only of holding public facial recognition trials without displaying signs, but also of the police allegedly sharing photos of criminals (presumably from their own records) with a private landlord.

The UK Home Office’s Surveillance Camera Code of Practice, however, does appear to support the use of facial recognition or other biometric characteristic recognition systems if their use is “clearly justified and proportionate.”

Other Shopping Centres

Other facial recognition trials in shopping centres and public shopping areas have been met with a negative response too.  For example, a trial at the Trafford Centre shopping mall in Manchester was halted in 2018, and the Kings Cross facial recognition trial (between May 2016 and March 2018) is still the subject of an ICO investigation.

Met Rolling Out Facial Recognition Anyway

Meanwhile, and despite a warning from Elizabeth Denham, the UK’s Information Commissioner, back in November, the Metropolitan Police has announced it will be going ahead with its plans to use live facial recognition cameras on an operational basis for the first time on London’s streets to find suspects wanted for serious or violent crime. Also, it has been reported that South Wales Police will be going ahead in the Spring with a trial of body-worn facial recognition cameras.

EU – No Ban

Even though many privacy campaigners were hoping that the EC would push for a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use are put in place, Reuters has reported that the European Union has now scrapped any possibility of a ban on facial recognition technology in public spaces.

Facebook Pays

Meanwhile, Facebook has just announced that it will pay £421m to a group of Facebook users in Illinois, who argued that its facial recognition tool violated the state’s privacy laws.

What Does This Mean For Your Business?

Most people would accept that facial recognition could be a helpful tool in fighting crime, saving costs, and catching known criminals more quickly, and that this would be of benefit to businesses and individuals. The challenge, however, is that despite ICO investigations and calls for caution, and despite problems that the technology is known to have, e.g. being inaccurate and showing a bias (being better at identifying white and male faces), not to mention its impact on privacy, the police appear to be pushing ahead with its use anyway.  For privacy campaigners and others, this may give the impression that their real concerns (many of which are shared by the ICO) are being pushed aside in an apparent rush to get the technology rolled out. It appears to many that the use of the technology is happening before any of the major problems with it have been resolved, and before there has been a proper debate or the introduction of an up-to-date statutory law and code of practice for the technology.

Avast Anti-Virus Is To Close Subsidiary Jumpshot After Browsing Data Selling Privacy Concerns

Avast, the anti-virus company, has announced that it will not be providing any more data to, and will be commencing “a wind down” of, its subsidiary Jumpshot Inc after a report that it was selling supposedly anonymised data, which could be linked back to individuals, to third parties including advertisers.

Jumpshot Inc.

Jumpshot Inc, founded in 2010, purchased by Avast in 2013, and operated as a data company since 2015, essentially organises and sells packaged data, gathered from Avast, to enterprise clients and marketers as marketing intelligence.

Avast anti-virus incorporates a plugin that has, until now, enabled subsidiary Jumpshot to scrape/gain access to that data, which Jumpshot could sell to (mainly bigger) third-party buyers so that they can learn what consumers are buying and where, thereby helping them target their advertising.

Avast is reported to have access to data from 100 million devices, including PCs and phones.

Investigation Findings

The reason why Avast has, very quickly, decided to ‘wind down’ i.e. close Jumpshot is that the report of an investigation by Motherboard and PCMag revealed that Avast appeared to be harvesting users’ browser histories with the promise (to those who opted in to data sharing) that the data would be ‘de-identified’ to protect user privacy. What actually appeared to be happening was that the data, which was being sold to third parties, could be linked back to people’s real identities, thereby potentially exposing every click and search they made.

When De-Identification Fails

As reported by PCMag, the inclusion of timestamp information and persistent device IDs alongside the collected URLs of user clicks could, in this case, be analysed to expose someone’s identity.  This could, in theory, mean that the data taken from Avast and supplied via subsidiary Jumpshot to third parties may not have been truly de-identified and could, therefore, pose a privacy risk to those Avast users.
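
To show why a persistent device ID plus precise timestamps can undermine ‘de-identification’, here is a small illustrative sketch with entirely invented data: a single event that a third party can already tie to a named person (e.g. an order in a retailer’s own logs) is enough to unmask that person’s whole click history.

```python
# Invented data illustrating the re-identification risk: records carry
# no names, but a stable device ID plus timestamps makes them joinable.
clickstream = [
    {"device_id": "abc123", "ts": "2019-12-01T14:03:05", "url": "shop.example.com/order/991"},
    {"device_id": "abc123", "ts": "2019-12-01T14:10:44", "url": "clinic.example.org/results"},
    {"device_id": "abc123", "ts": "2019-12-02T09:12:01", "url": "jobs.example.net/apply"},
    {"device_id": "xyz789", "ts": "2019-12-01T16:40:12", "url": "news.example.com/article"},
]

# A retailer's own logs say Alice placed order 991 at 14:03:05 on 1 Dec.
known_event = ("2019-12-01T14:03:05", "shop.example.com/order/991")

# That single joinable event reveals Alice's device ID...
alice_device = next(r["device_id"] for r in clickstream
                    if (r["ts"], r["url"]) == known_event)

# ...and with it, every other "anonymous" click tied to that device.
alice_history = [r["url"] for r in clickstream if r["device_id"] == alice_device]
print(alice_device, alice_history)
```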

What Does This Mean For Your Business?

As an anti-virus company, security and privacy are essential elements of Avast’s products, and customer trust is vital to its brand and its image. Some users may be surprised that their supposedly ‘de-identified’ data was being sold to third parties at all, but with a now widely-reported privacy risk of this kind and the potential damage that it could do to Avast’s brand and reputation, it is perhaps no surprise that it has acted quickly in closing Jumpshot and distancing itself from what was happening. As Avast says in its announcement about the impending closure of Jumpshot (with the loss of many jobs), “The bottom line is that any practices that jeopardize user trust are unacceptable to Avast”.  PCMag has reported that it has been informed by Avast that the company will no longer be using any data from the browser extensions for any purpose other than the core security engine.

EU Considers Ban on Facial Recognition

It has been reported that the European Commission is considering a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use are put in place.

Document

The reports of a possible three to five-year ban come from an 18-page EC report, which has been seen by some major news distributors.

Why?

Facial recognition trials in the UK first raised the issues of how the technology can be intrusive, can infringe upon a person’s privacy and data rights, and how facial recognition technology is not always accurate.  These issues have been identified and raised on several occasions in the UK. For example:

– In December 2018, Elizabeth Denham, the UK’s Information Commissioner, launched a formal investigation into how police forces used FRT after high failure rates, misidentifications and worries about legality, bias, and privacy. This stemmed from the trial of ‘real-time’ facial recognition technology on Champions League final day in June 2017 in Cardiff, by the South Wales and Gwent Police forces, which was criticised for costing £177,000 and yet resulting in only one arrest, of a local man, which was unconnected to the event.

– Trials of FRT at the 2016 and 2017 Notting Hill Carnivals led to the Police facing criticism that FRT was ineffective, racially discriminatory, and confused men with women.

– In September 2018 a letter, written by Big Brother Watch (a privacy campaign group) and signed by more than 18 politicians, 25 campaign groups, and numerous academics and barristers highlighted concerns that facial recognition is being adopted in the UK before it has been properly scrutinised.

– In September 2019 it was revealed that the owners of King’s Cross Estate had been using FRT without telling the public, and with London’s Metropolitan Police Service supplying the images for a database.

– In December 2019, a US report showed that, after tests by the National Institute of Standards and Technology (NIST) of 189 algorithms from 99 developers, facial recognition technology was found to be less accurate at identifying African-American and Asian faces, and was particularly prone to misidentifying African-American females.

Impact Assessment

The 18-page EC report is said to contain the recommendation that a three to five-year ban on the public use of facial recognition technology would allow time to develop a methodology for assessing the impacts of (and developing risk management measures for) the use of facial recognition technology.

Google Calls For AI To Be Regulated

The way in which artificial intelligence (AI) is being widely and quickly deployed before the regulation of the technology has had a chance to catch up is the subject of recent comments by Sundar Pichai, the head of Google’s parent company, Alphabet.  Mr Pichai (in the Financial Times) called for regulation with a sensible approach and for a set of rules for areas of AI development such as self-driving cars and AI usage in health.

What Does This Mean For Your Business?

It seems that there is some discomfort in the UK, Europe and beyond that relatively new technologies, which have known flaws and are of concern to government representatives, interest groups and the public, are being rolled out before the necessary regulations and risk management measures have had time to be properly considered and developed.  It is true that facial recognition could have real benefits (e.g. fighting crime) for many businesses, and that AI offers businesses a vast range of opportunities to save money and time, as well as to innovate in products, services and processes.  However, the flaws in these technologies, and their potential to be used improperly, covertly, and in ways that could infringe the rights of the public, cannot be ignored. It is likely to be a good thing in the long term that time is taken and efforts are made now to address the issues of stakeholders and develop regulations and measures that could prevent bigger problems involving these technologies further down the line.

£100m Fines Across Europe In The First 18 Months of GDPR

It has been reported that since the EU’s General Data Protection Regulation (GDPR) came into force in May 2018, £100m of data protection fines have been imposed on companies and organisations across Europe.

The Picture In The UK

The research, conducted by law firm DLA Piper, shows that the total fines imposed in the UK by the ICO stand at £274,000, but this figure is likely to be much higher following the finalising of penalties to be imposed on BA and Marriott.  For example, Marriott could be facing a £99 million fine for a data breach between 2014 and 2018 that reportedly involved up to 383 million guests, and BA (owned by IAG) could be facing a record-breaking £183 million fine for a breach of its data systems last year that could have affected 500,000 customers.

Also, the DLA Piper research shows that although the UK did not rank highly in terms of fines, it ranked third in the number of breach notifications, with 22,181 reports since May 2018.  This equates to a relative ranking of 13th for data breach notifications per 100,000 people in the UK.
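
The per-capita figure behind that 13th-place ranking can be approximated as follows (the population figure is an assumption; the report does not state the exact base it used):

```python
# Rough per-capita check of the UK's breach notification figures.
notifications = 22_181        # UK breach reports since May 2018 (DLA Piper)
population = 66_500_000       # assumed approximate UK population

per_100k = notifications / population * 100_000
print(f"{per_100k:.1f} notifications per 100,000 people")  # -> ~33.4
```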

Increased Rate of Reporting

On the subject of breach notifications, the research shows a big increase in the rate of reporting, with an average of 247 reports per day between May 2018 and January 2019, rising to 278 per day throughout last year. This rise in reporting is thought to be due to a much greater (and increasing) awareness of GDPR and the issue of data breaches.

France and Germany Hit Hardest With Fines

The fines imposed in the UK under GDPR are very small compared to Germany, where fines totalled 51.1 million euros (top of the table for fines in Europe), and France, where 24.6 million euros in fines were handed out.  In the case of France, much of that total relates to a single penalty handed to Google last January.

Already Strict Laws & Different Interpretations

It is thought that businesses in the UK having to meet the requirements of the already relatively strict Data Protection Act 1998 (the bones of which proved not to differ greatly from GDPR) is the reason why the UK finds itself (currently) further down the table in terms of fines and data breach notifications per 100,000 people.

Also, the EU’s Data Protection Directive wasn’t adopted until 1995, and GDPR appears to have been interpreted differently across Europe because it is principle-based, and therefore, apparently open to some level of interpretation.

What Does This Mean For Your Business?

These figures show that a greater awareness of data breach issues, greater reporting of breaches, and increased activity and enforcement action by regulators across Europe are likely to contribute to more big fines being imposed over the coming year.  This means that businesses and organisations need to ensure that they stay on top of the issue of data security and GDPR compliance.  Small businesses and SMEs shouldn’t assume that work done to ensure basic compliance when GDPR was introduced back in 2018 is enough, or that the ICO would only be interested in big companies, as regulators appear to be increasing the number of staff who are able to review reports and cases.  It should also be remembered, however, that the ICO is most likely to want to advise, help and guide businesses to comply where possible.