Police Crack Encrypted Network To Reach Crime Gangs

An international law enforcement operation has led to the cracking of the EncroChat Android phone network and the arrest of criminal gangs.

The Network

The France-based EncroChat network, which was discovered by the French National Gendarmerie in 2017, is an encrypted network for Android handsets with their GPS, camera and microphone functions disabled.  The handsets, which have reportedly sold for €1,000 each plus €1,500 for a six-month contract, have, until now, offered many criminals a secure, encrypted communications channel.  It has been reported that at the time the police were able to crack the channel, it had 10,000 users in the UK alone and a further 60,000 around Europe.

The Operation

“Operation Venetic”, the law enforcement operation to infiltrate and crack EncroChat, involved French and Dutch police, the UK’s National Crime Agency (NCA) and Europol, the EU agency for law enforcement cooperation.  It has been reported that a team of over 500 NCA officers worked on Operation Venetic.

Arrests

The cracking of the network has, so far, reportedly led to the arrest of around 800 criminals across Europe.  It has been reported that two law enforcement officers were among those arrested.

The arrests have also led to the seizure of £54m in cash, 77 illegal firearms (including assault rifles and grenades), two tonnes of Class A and B drugs, 55 luxury cars and 73 luxury watches.

The Met

The Metropolitan Police reportedly made 171 arrests as part of the operation and seized £13.3m in cash.

The Met reports on its website that those arrested in one investigation were “part of the most high-harm Organised Crime Group (OCG) in London, with long-standing links to violent crime and the importation of Class A drugs” where “central figures of this group lead lavish lifestyles and live in multi-million-pound properties with access to top of the range vehicles.”

Comparison Made To Enigma Code

Even though the circumstances and the resources available to the authorities are by no means the same, Nikki Holland, NCA director of investigations, described the achievement and complexity of cracking the encrypted channel as “akin to cracking the Enigma code”.

Just The Beginning

Even though Commissioner Cressida Dick said that the operation was the most significant ever carried out against serious and organised criminality across London, she also described it as “just the beginning” and highlighted the fact that many more people are now being investigated as a result.

What Does This Mean For Your Business?

Organised crime of the scale and nature tackled by Operation Venetic poses a threat to businesses and society through crime, its proceeds, and its many impacts. Although some luxury goods businesses and property companies have clearly benefited from sales to these criminals, many of which may have been made innocently via legitimate-looking fronts, the lavish lifestyles of some of those caught by this operation have come to an abrupt end.

UK home secretaries Amber Rudd and now Priti Patel have been critics of how end-to-end encryption has protected the guilty as well as the innocent in some apps and channels, and the fact that an encrypted channel has been cracked sends a powerful warning message to criminals who may assume they are safe in their communications.  It may also send a veiled message to other legitimate end-to-end encrypted apps and channels about the future, how global agencies are able to act, and what they are capable of doing.

Facial Recognition, Photo Identity and Privacy Protection

With phone cameras and facial-recognition surveillance cameras seemingly everywhere, and the world entering a new phase of social change, many people are looking at simple steps they can take to retain and protect their privacy rights.

Faces

As enshrined in data protection laws such as GDPR, and with biometrics now being used widely, our faces are part of the personal data that we need to protect. Concerns such as those expressed by the head of the ICO, Elizabeth Denham, that police facial recognition systems have accuracy and other issues are leading many people to look at ways to protect themselves where necessary.

Public trust in facial recognition systems also still has some way to go as the technology progresses from what is now a relatively early stage.  For example, a recent survey released by Monash University in Australia showed that half of Australians believe that their privacy is being invaded by the presence of facial recognition technology in public spaces.  In the U.S., government researchers at the National Institute of Standards and Technology (NIST) said in May 2020 that not enough is being done to engender trust in decisions made by facial recognition and biometrics systems, and in Europe, the European Commission was considering in January a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use could be put in place.

Protest Example

In a democracy such as the UK, protests are allowed to take place over any number of issues, and the recent protests over the killing of George Floyd and in support of Black Lives Matter have brought into focus how to protect personal data and identity while exercising democratic rights.

For example, those wishing to obscure faces in protest photos that they share often use software to paint over faces or apply a mosaic blur, because these techniques cannot be reversed, rather than a simple blur effect, which authorities may be able to de-blur using new neural networks.

This process of blocking out faces in photos can be carried out using the built-in photo editor on a smartphone.  For example:

– On iOS, open Photos, tap on the photo, select Edit (top right), tap the three dots to access Mark-up and use solid circles or squares to block out faces.

– On Android (using the native Mark-up tool), in the Photos app, select the photo, tap on Edit (bottom, second left), select Mark-up (bottom, second right), and block out faces e.g. using the Pen tool.
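For those editing photos on a computer rather than a phone, the same solid block-out can be done with a few lines of code. The sketch below is a minimal example using the Python Pillow imaging library; the file names and face coordinates are hypothetical and would need to be replaced with real values.

from PIL import Image, ImageDraw  # Pillow imaging library (pip install Pillow)

INPUT_FILE = "protest_photo.jpg"            # hypothetical input file
OUTPUT_FILE = "protest_photo_redacted.jpg"  # hypothetical output file
FACE_BOXES = [(120, 80, 220, 200), (400, 60, 510, 190)]  # (left, top, right, bottom) per face

img = Image.open(INPUT_FILE)
draw = ImageDraw.Draw(img)

# Paint a solid black rectangle over each face region. Unlike a light blur,
# a solid fill destroys the underlying pixel data, so it cannot be reversed.
for box in FACE_BOXES:
    draw.rectangle(box, fill="black")

img.save(OUTPUT_FILE)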

Removing Metadata

Removing a photo’s metadata (data stored in phone photos, e.g. the type of device and camera, date, time and location) can be achieved by taking screenshots of the photos and making sure that there are no other identifying features in the screenshots.
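Alternatively, metadata can be stripped programmatically rather than by screenshotting. The short sketch below, again assuming the Python Pillow library and hypothetical file names, copies only the pixel data into a fresh image so that EXIF fields such as device, date, time and GPS location are left behind.

from PIL import Image  # Pillow imaging library (pip install Pillow)

INPUT_FILE = "photo_with_metadata.jpg"  # hypothetical input file
OUTPUT_FILE = "photo_clean.jpg"         # hypothetical output file

original = Image.open(INPUT_FILE)

# Create a new image of the same size and mode and copy across only the pixels,
# leaving the EXIF metadata (camera model, timestamps, GPS coordinates) behind.
clean = Image.new(original.mode, original.size)
clean.putdata(list(original.getdata()))

clean.save(OUTPUT_FILE)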

Masks and Facial Recognition

Tech and news commentators have noted recently how mask-wearing during the COVID-19 pandemic has proven to be a challenge for facial recognition systems, although it has also been suggested that AI facial recognition systems have now had the chance to have more ‘training’ in correctly identifying people wearing masks.

What Does This Mean For Your Business?

Facial recognition (if used responsibly as intended) can help to fight crime in towns and city centres, thereby helping the mainly retail businesses that operate there, although there are still questions about its accuracy and its impact on our privacy and civil liberties.

Where sharing photos and worries about privacy are concerned, smartphones already have apps that allow faces to be blocked out.  On Facebook, for example, steps that can be taken to help retain your privacy and security include not using a close-up, clear photo of your face as a public profile picture, not revealing too much about where photos were taken, and not geotagging or posting photos that reveal your address or show valuable items at your home or where you keep them.  Photos taken in the workplace, particularly those posted on websites and social media, should also be vetted to ensure that there are no implications for physical security and that staff featured are happy for the photo to be shared.

Featured Article – A Look At Cookies

Cookies perform functions and provide information that helps website users, businesses, publishers, and advertisers. This article looks at what cookies are, what they do, and the legislation that affects how they are used.

What Are Cookies?

Cookies are small text files sent by the website you are on and stored by your browser as a record of your activity on the site. Most websites use cookies; cookies do not harm devices, and they do not, by themselves, tell websites who a user is or gather personal details about website visitors.

Current EU legislation states that all websites must let people know when cookies are in use. Website visitors should also be given the option to accept cookies or not and should be allowed to browse a website and experience the functionality even if they choose not to accept the cookies.

What Are Cookies For?

Cookies are supposed to help users to access a website more quickly and easily by telling the website that a visitor has been there before.  For example, cookies can store information that allows a repeat visitor to access a website without logging in, or to fill in a form (autofill) without having to type in all the details. Cookies can also provide information that helps with website shops and analytics, and can help advertisers.

Types of Cookies

There are several different types of website cookies. These include:

– First-party cookies. These are set by the website itself and are used to gather analytics data, e.g. the number of visitors, page views, pages visited, and sessions. These cookies provide data to publishers and advertisers for ad targeting.

– Third-party cookies. These cookies are used when other, third-party elements, e.g. chatbots or social plugins, have been added to a website. Set by those third-party domains, these cookies can track users and save data that can be used in ad targeting and behavioural advertising.

– Session cookies. As the name suggests, these are temporary, short-lived cookies that expire immediately or shortly after the user leaves the website or closes the browser. They are commonly used by e-commerce websites to remember the items that have been placed in the shopping cart, to keep users logged in, and to record user sessions to help with analytics.

– Persistent cookies. These cookies must have a built-in expiration date but can stay on a user’s browser for years (or until the user manually deletes them) in order to track the user and their interactions with a website over time.

– Secure cookies. These are set by websites served over HTTPS and are only transmitted over encrypted connections; they are used on the payment/checkout pages of e-commerce websites and on online banking websites.
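To illustrate how these different types are expressed in practice, the short Python sketch below uses the standard library’s http.cookies module to build the ‘Set-Cookie’ headers a server might send; the cookie names and values are invented for the example.

from http.cookies import SimpleCookie  # Python standard library

cookies = SimpleCookie()

# Session cookie: no Max-Age or Expires, so it lasts only until the browser session ends.
cookies["session_id"] = "abc123"
cookies["session_id"]["httponly"] = True

# Persistent cookie: Max-Age gives it a built-in expiration date (here, roughly one year).
cookies["prefs"] = "lang=en"
cookies["prefs"]["max-age"] = 60 * 60 * 24 * 365

# Secure cookie: only sent over encrypted (HTTPS) connections, e.g. on checkout pages.
cookies["checkout_token"] = "xyz789"
cookies["checkout_token"]["secure"] = True

# Print the Set-Cookie headers that would be sent to the browser.
print(cookies.output())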

What Is The ‘Cookie Law’?

The so-called ‘cookie law’, which began life as an EU Directive, is privacy legislation that requires websites to ask visitors for consent to store or retrieve information on a computer, smartphone, or tablet.

The Cookie Law was widely adopted in 2011, became an update to the UK’s Privacy and Electronic Communications Regulations, and was designed to make people aware of how the information about them is collected online and to give them the opportunity to say yes or no to it. 

The introduction of the General Data Protection Regulation (GDPR) in May 2018, with its focus on ensuring that businesses are transparent and protect individual privacy rights, means that businesses must be able to prove clear and affirmative consent to process personal data, and people must be able to opt in rather than opt out.  These aspects have clear implications for cookies.

GDPR Cookie Consent

GDPR requires consent to be gathered from data subjects, and Court of Justice of the European Union rulings state that this consent must be explicit.  This means that a website’s users must be presented with an explicit consent banner that cannot have pre-checked boxes giving consent for categories of cookies, except for those deemed strictly necessary.  Websites using cookies other than those that are strictly necessary for their basic function must present a method for obtaining the cookie consent of users prior to any collection or processing.

Website visitors must also be able to withdraw any consent they have previously given, in a way that is easily accessible, if they choose to. Also, the data controller must delete any personal data of individuals if that data is not necessary for the original stated purpose.
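By way of illustration only, the sketch below shows one way a website might honour this, assuming a Python Flask application: strictly necessary cookies are set without consent, while non-essential (analytics) cookies are only set once an explicit opt-in has been recorded. The route and cookie names are invented for the example.

from flask import Flask, make_response, request  # pip install Flask

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response("<p>Welcome</p>")

    # Strictly necessary cookies (e.g. a session identifier) may be set without consent.
    if "session_id" not in request.cookies:
        resp.set_cookie("session_id", "abc123", httponly=True)

    # Non-essential cookies are only set if the visitor has explicitly opted in.
    if request.cookies.get("cookie_consent") == "analytics":
        resp.set_cookie("analytics_id", "visitor-42", max_age=60 * 60 * 24 * 30)

    return resp

@app.route("/consent", methods=["POST"])
def give_consent():
    # Called when the visitor ticks the (never pre-checked) analytics box on the banner.
    resp = make_response("Consent recorded")
    resp.set_cookie("cookie_consent", "analytics", max_age=60 * 60 * 24 * 180)
    return resp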

GDPR Cookie Compliance

One of the key ways in which a business can remain GDPR compliant is to make sure that it obtains prior consent if it provides a service to, or collects personal data about, persons in the EU. This means being very clear and explicit in describing the extent and purpose of the data processing, in language that is easy for the user to understand, before gathering any personal data from that user. Website users must be able to find out at any time what type of personal data is being collected about them on a website, and it should be easy for users to withdraw consent that has previously been given.

For this to happen, businesses and organisations need to know what kinds of cookies are used by their website and why. This information can be addressed in a cookie policy.

CCPA

Businesses and organisations worldwide that handle the personal information of California residents will also need to ensure that their data processing (including cookie use) is compliant with the new California Consumer Privacy Act (CCPA).

A Cookie Policy

Companies and organisations are legally required under GDPR (and the CCPA) to make a cookie policy available to users on their website. This cookie policy, which can be included as part of a website’s privacy policy, should be a declaration to users about what cookies are active on the website, what user data is being tracked by those cookies, for what purpose, and where in the world this data is sent.  The cookie policy must also give information about how users can opt out of the cookies or change their cookie settings on the website.

Awareness and Challenges

The strengthening of data protection laws in recent years has, therefore, forced businesses to become very familiar with how they manage data in order to be legally compliant.  This has led to much greater awareness of cookies and their use, and for first-time visitors to a website, the cookie consent notice is often the first thing they encounter.

Also, changes that have led to many browsers blocking third-party cookies have presented marketing and monetary challenges to publishers and advertisers.

Are Masks A Challenge To Facial Recognition Technology?

In addition to questions about the continued use of potentially unreliable and unregulated live facial recognition (LFR) technology, masks to protect against the spread of coronavirus may be presenting a further challenge to the technology.

Questions From London Assembly Members

A recently published letter from London Assembly members Caroline Pidgeon MBE AM and Sian Berry AM to Metropolitan Police Commissioner Cressida Dick asks whether LFR technology could be withdrawn during the COVID-19 pandemic on the grounds that it has been shown to be generally inaccurate and that it still raises questions about civil liberties.

Also, concerns are now being raised about how the already questionable accuracy of LFR could be challenged further by people wearing face masks to curb the spread of COVID-19.

Civil Liberties of Londoners

The two London Assembly members argue in the letter that a lack of laws, national guidelines,  regulations and debate about LFR’s use could mean that stopping Londoners or visitors to London “incorrectly, without democratic public consent and without clear justification erodes our civil liberties”.  The pair also said that this could continue to erode trust in the police, which has been declining anyway in recent years.

Inaccurate

The letter highlights concerns about the general inaccuracy of LFR. This is illustrated by the example of the first two deployments of LFR this year, in which more than 13,000 faces were scanned, only six individuals were stopped, and five of those six were misidentified and incorrectly stopped by the police. Also, of the eight people who created a ‘system alert’, seven were incorrectly identified.

Other Concerns

Other concerns outlined by the pair in the letter about the continued deployment of LFR include worries about the possibility of mission creep, the lack of transparency about which watchlists are being used, worries that LFR will be used operationally at protests, demonstrations or public events in future (e.g. the Notting Hill Carnival), and fears that the technology will continue to be used without clarity, accountability or full democratic consent.

Masks Are A Further Challenge

Many commentators from both sides of the facial recognition debate have raised concerns about how the wearing of face masks could affect the accuracy of facial recognition technology.

China and Russia

It has been reported that Chinese electronics manufacturer Hanwang has produced facial recognition technology that is 95% accurate in identifying the faces of people who are wearing masks.

Also, in Moscow, where the many existing cameras have been deployed to help enforce the city’s lockdown and to identify those who don’t comply, systems have been able to identify those wearing masks.

France

In France, following the easing of lockdown restrictions, it has been reported that surveillance cameras will be used to monitor compliance with social distancing and the wearing of masks.  A recent trial in Cannes of French firm Datakalab’s surveillance software, which includes an automatic alert to city authorities and police for breaches of mask-wearing and social distancing rules, looks set to be rolled out to other French cities.

What Does This Mean For Your Business?

Facial recognition is another tool which, under normal circumstances (if used responsibly as intended), could help to fight crime in towns and city centres, thereby helping the mainly retail businesses that operate there.  The worry is that there are still general questions about the accuracy of LFR and its impact on our privacy and civil liberties, and that the COVID-19 pandemic could be used as an excuse to use it more and in a way that leads to mission creep. It does appear that in China and Russia, for example, even individuals wearing face masks can be identified by facial recognition camera systems, although many in the West regard these as states that exercise a great deal of control over the privacy and civil liberties of their populations and may be alarmed at such systems being used in the UK.  The pandemic, however, appears to be making states less worried about infringing civil liberties for the time being as they battle to control a virus that has devastated lives and economies, and technology is clearly one of the tools being used in the fight against COVID-19.

Businesses Get Extra Time To Meet New Payment Processing Rules

The Financial Conduct Authority (FCA) has given UK businesses an extra 6 months to reach compliance with the new Strong Customer Authentication (SCA) rules for payment processing.

What Are The SCA Rules?

The SCA rules, introduced in 2019, are intended to improve the security of payments and limit fraud by making sure that whoever requests access to a person’s account, or tries to make a payment, is the account holder or someone to whom the account holder has given consent.

These new rules come from the EU Payment Services Directive (PSD2), which came into effect in January 2018, and mean that online payments of more than €50 will need two methods of authentication from the person making the payment, e.g. a password, a fingerprint (biometric) or a phone number. This also means that online customers will not be able to check out using just a credit or debit card but will also need an additional form of identification.

Card Present

For normal ‘card present’ situations (not online), contactless will still be acceptable for ‘low value’ transactions of less than €50 at the point of sale, and Chip and PIN will still be suitable for values above €50.

Recurring Payments Exempt

Where a recurring payment of the same value is being made from a card to the same merchant e.g. subscriptions and memberships, the initial set up will require authentication, but subsequent transactions will be exempt.
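As a rough, simplified illustration of the rules described above (and not an authoritative statement of the SCA requirements, which are enforced by payment providers), the Python sketch below encodes the €50 thresholds and the recurring-payment exemption; the channel labels are invented for the example.

def requires_sca(amount_eur, channel, is_recurring_followup=False):
    """Simplified sketch of the SCA rules as summarised in this article.

    channel: "online", "contactless" or "chip_and_pin" (hypothetical labels).
    is_recurring_followup: True for repeat charges of a recurring payment
    that was already authenticated when first set up (exempt).
    """
    if is_recurring_followup:
        return False  # subsequent recurring payments to the same merchant are exempt

    if channel == "online":
        return amount_eur > 50  # online payments over €50 need two authentication methods

    if channel == "contactless":
        return amount_eur >= 50  # contactless remains fine for low-value (under €50) payments

    # Chip and PIN already combines something you have (the card) with something you know (the PIN).
    return False


# Examples:
print(requires_sca(75, "online"))                              # True: e.g. password plus code sent to phone
print(requires_sca(75, "online", is_recurring_followup=True))  # False: e.g. a subscription renewal
print(requires_sca(20, "contactless"))                         # False: low-value contactless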

Put Back

The first deadline for the implementation of the SCA rules was 14th September 2019 but this was put back by 18 months.

While the deadline for the implementation of SCA is still 31st December 2020 in the rest of the European Economic Area (EEA), in the UK the FCA has now announced that, in order to help merchants who have been severely affected by the COVID-19 crisis, the enforcement of SCA has been delayed until 14th September 2021.

What Does This Mean For Your Business?

Most businesses would agree that high levels of online fraud are bad for everyone and just reduce consumer confidence, so if the introduction of new improved payment security measures can reduce fraud this will be helpful.  The COVID-19 crisis has, however, hit businesses very hard and for many, it’s been a case of simply trying to keep the business going, let alone worry about how they can comply with new payment rules in time.  This latest extension is, therefore, good news and should lessen the burden on merchants as the lockdown is lifted and the country tries to find the new normal in a post-COVID business environment.

Amazon Can Invest In Deliveroo Because of Pandemic

After expressing concerns last May, the Competition and Markets Authority (CMA) has now announced that Amazon can invest in food delivery company Deliveroo.

Last May

Last May, Amazon was a leading investor in a funding round of $575 million for UK-based food delivery company Deliveroo. At the time (17th May), Deliveroo’s founder and CEO, Will Shu, said of the $575M Series G preferred shares funding from Amazon and existing investors T. Rowe Price, Fidelity Management and Research Company, and Greenoaks, “This new investment will help Deliveroo to grow and to offer customers even more choice, tailored to their personal tastes, offer restaurants greater opportunities to grow and expand their businesses, and to create more flexible, well-paid work for riders.”

Amazon Restaurants

Amazon had previously operated its own ‘Amazon Restaurants’ food delivery service in London, but this was closed in December 2018 following strong competition from Deliveroo, Uber Eats, Just Eat and others. It was also reported that Amazon had previously tried twice to buy Deliveroo outright.

CMA Concerns

The CMA, however, had concerns that the investment by Amazon in Deliveroo would be bad for competition and launched its own investigation. The two main concerns expressed by the CMA were that:

– There were only a small number of companies acting as the middleman between restaurants and customers, and the Amazon/Deliveroo deal could have damaged competition in online restaurant food delivery by discouraging Amazon from re-entering the UK market, given that re-entry by Amazon would have significantly increased competition in online restaurant food delivery in the UK.

– The deal could have damaged competition in the emerging market for online convenience grocery delivery, where the two companies had already established market-leading positions.

COVID-19 Change

In the light of what the CMA says has been “a deterioration in Deliveroo’s financial position as a result of coronavirus (COVID-19)”, the CMA has now put aside its original concerns and provisionally cleared Amazon’s investment in Deliveroo. There will, however, be a three-week consultation period and a final decision will not be made until 11th June after all relevant feedback about the investment has been gathered (all submissions will need to be made by Monday 11th May 2020).

The CMA appears to have concluded that only Amazon would be able to provide the kind of funding that Deliveroo needs to meet its financial commitments in the extraordinary global circumstances caused by the pandemic.

Stuart McIntosh, Chair of the CMA’s independent inquiry group, said that, without the investment, “some customers are cut off from online food delivery altogether, with others facing higher prices or a reduction in service quality. Faced with that stark outcome, we feel the best course of action is to provisionally clear Amazon’s investment in Deliveroo.”

What Does This Mean For Your Business?

For Deliveroo this is, of course, a great outcome at a crucial moment. The outcome also shows how the pandemic has had a dramatic effect on all aspects of business, including the decisions made by regulators against a changed backdrop. The decision may also, as the CMA pointed out, be good news for customers, particularly those who are more “cut off” from their normal food supplies.

This decision is unlikely to be welcomed, however, by competitors such as Uber and Just Eat, who saw off Amazon’s move into the food delivery market in London last time.

Survey Reveals IR35 Tax Reforms Legal Action Risk For Private Sector Companies

A survey by ContractorCalculator has revealed that many private sector companies may be at risk of legal action through misinterpreting the new IR35 tax reforms.

What Is IR35?

The IR35 tax reform legislation, set to be introduced this April, is designed to stop tax avoidance through ‘disguised employment’, which occurs when self-employed contractors set up their own limited company to pay themselves through dividends (which are not subject to National Insurance).  IR35 will essentially mean that, from April 2020, medium and large private sector organisations become responsible for determining whether non-permanent contractors and freelancers should be taxed in the same way as permanent employees (inside IR35) or as off-payroll workers (outside IR35), based upon the work they do and how it is performed.

Also, the tax liability will transfer from the contractor to the fee-paying party i.e. the recruiter or the company that directly engages the contractor. HMRC hopes that the IR35 reforms will stop contractors from deliberately misclassifying themselves in order to reduce their employment tax liabilities.

The idea for the legislation dates back to 1999 and Chancellor Gordon Brown, while Chancellor Philip Hammond introduced IR35 rules for public bodies using contractors from April 2017.

National Insurance

One of the potential problem areas for private sector companies revealed by the ContractorCalculator questionnaire, answered by some 12,000 contractors, is that some companies may be unlawfully deducting employers’ national insurance contributions (NICs) from their contractors’ pay.  This means that they are effectively imposing double taxation on these contractors.

Given that 42% of contractors said they weren’t aware that such deductions were unlawful, the survey appears to show that although these companies have been acting unlawfully, it is likely to be because they have simply misinterpreted the new tax reforms, given the complicated nature of IR35.

Tribunal Threat

The survey also showed that 58% of participants who are classified as ‘inside’ IR35 (taxed in the same way as permanent employees) said that they would consider taking their client to an employment tribunal because, if they have to pay the same amount of tax as a permanent employee, they feel they should receive the same benefits as permanent employees, e.g. sick pay and a pension.

Contractor Loses Case

On this subject, there was news this week that an IT contractor who had worked through his limited company, Northern Light Solutions, for Nationwide for several years and been treated as outside IR35 has lost an appeal against a £70,000 HMRC tax demand, with HMRC successfully arguing that he should have been categorised as inside IR35.

What Does This Mean For Your Business?

When the IR35 tax reforms were first announced, many business owners thought that they appeared to be very complex and that not enough had been done by the government to raise awareness of the changes and to educate businesses and contractors about the implications and responsibilities.  This survey appears to support this view and shows that a lack of knowledge and awareness of IR35 could be leaving businesses open to the risk of legal action.  Contractors and the companies that use their services need to learn quickly about the dangers of hiring freelance workers long-term, and companies that use freelancers need to conduct proper due diligence to ensure that the business relationship they have with them complies with IR35.

Facebook Sued Down-Under For £266bn Over Cambridge Analytica Data Sharing Scandal

Six years after the personal data of 87 million users was harvested and later shared without user consent with Cambridge Analytica, Australia’s privacy watchdog is suing Facebook for an incredible £266bn over the harvested data of its citizens.

What Happened?

From March 2014 to 2015, the ‘This Is Your Digital Life’ app, created by Cambridge University academic Aleksandr Kogan and downloaded by 270,000 people, was able to harvest data from Facebook, gaining access not only to those users’ personal data but also to that of their friends.

The harvested data was then shared with (sold to) data analytics company Cambridge Analytica, which used it to build a software program that could profile voters and target them with personalised political adverts, in order to influence choices at the ballot box in the last U.S. presidential election and for the Leave campaign in the UK’s Brexit referendum.

Australia

The lawsuit, brought by the Australian Information Commissioner against Facebook Inc alleges that, through the app, the personal and sensitive information of 311,127 Australian Facebook Users (Affected Australian Individuals) was disclosed and their privacy was interfered with.  Also, the lawsuit alleges that Facebook did not adequately inform those Australians of the manner in which their personal information would be disclosed, or that it could be disclosed to an app installed by a friend, but not installed by that individual.  Furthermore, the lawsuit alleges that Facebook failed to take reasonable steps to protect those individuals’ personal information from unauthorised disclosure.

In the lawsuit, the Australian Information Commissioner, therefore, alleges that the Australian Privacy Principle (APP) 6 has been breached (disclosing personal information for a purpose other than that for which it was collected), as has APP 11 (failing to take reasonable steps to protect the personal information from unauthorised disclosure).  Also, the Australian Information Commissioner alleges that these breaches are in contravention of section 13G of the Privacy Act 1988.

£266 Billion!

The massive potential fine of £266 billion has been arrived at by multiplying the maximum of $1,700,000 (£870,000) for each contravention of the Privacy Act by the 311,127 Australian Facebook Users (Affected Australian Individuals).
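For anyone checking the arithmetic, the short Python sketch below simply multiplies the reported per-contravention maximum by the number of affected individuals, using only the figures quoted above; how the resulting total converts into the reported £266 billion depends on the exchange rate applied.

max_per_contravention = 1_700_000   # reported maximum penalty per contravention of the Privacy Act
affected_individuals = 311_127      # Australian Facebook users reportedly affected

print(max_per_contravention * affected_individuals)  # 528,915,900,000 (reported as around £266 billion)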

What Does This Mean For Your Business?

Back in July 2018, 16 months after the UK Information Commissioner’s Office (ICO) began its investigation into Facebook’s sharing of the personal details of users with political consulting firm Cambridge Analytica, the ICO announced that Facebook would be fined £500,000 for data breaches.  This Australian lawsuit, should it not go Facebook’s way, represents another in a series of such lawsuits over the same scandal, but the £266 billion figure would be a massive hit and would, for example, totally dwarf the biggest settlement to date against Facebook, the $5 billion paid to the US Federal Trade Commission over privacy matters.  To put it in even greater perspective, an eye-watering potential fine of £266 billion would make the biggest GDPR fine to date, the £183 million penalty proposed for British Airways, look insignificant.

Clearly, this is another very serious case for Facebook to focus its attention on, and the whole matter highlights just how seriously data security and privacy are now taken and how they have been written into different national laws with very serious penalties for non-compliance attached. Facebook has tried hard since the scandal to introduce and publicise many new features and aspects of its service that could help to regain the trust of users, both in its platform’s safeguarding of their details and in stopping fake news from being distributed via its platform.  This announcement by the Australian Information Commissioner is, therefore, likely to be an extremely painful reminder of a regrettable period in the tech giant’s history, not to mention a potential financial threat to Facebook.

Those whose data may have been disclosed, shared and used in a way that contravened Australia’s laws may be pleased that their country is taking such a strong stance in protecting their interests, and this may send a very powerful message to other companies that store and manage the data of Australian citizens.

Dentist’s Legal Challenge To Anonymity of Negative Google Reviewer

ABC News in Australia has reported how a Melbourne dentist has convinced a Federal Court Judge to order tech giant Google to produce identifying information about a person who posted a damaging negative review about the dentist on Google’s platform.

What Happened?

The dentist, Dr Matthew Kabbabe, alleges that a reviewer’s comment posted on Google approximately three months ago advised others to “stay away” from his practice and that it damaged his teeth-whitening business and had a knock-on negative impact on his life.

Google provides a review platform that is intended to benefit businesses (where reviews are good), to encourage and guide businesses to give good service, and to help Google users decide whether to use a service; in this case, the comment was the only bad one on a page of five-star reviews. In addition to the possibly defamatory nature of the comment, Dr Kabbabe objected to the anonymity that Google offers reviewers, since it meant the comment could have been posted by a competitor or disgruntled ex-employee in order to damage his (or any other) business. This, together with reportedly unsuccessful requests to Google to take the comment down, drove him to take the matter to the Federal Court.

Landmark Ruling

Not only did Federal Court Judge Justice Bernard Murphy order Google to divulge identifying information about the comment poster, listed only as “CBsm 23” (name, phone number, IP addresses and location metadata), but the tech giant has also been ordered to provide details of any other Google accounts (names and email addresses) used from the same IP address during the period in question.

Can Reply

Businesses can reply to reviews posted on Google, as long as the replies comply with Google’s guidelines.

Dealing with some apparently unfair customer comments online is becoming more common for many businesses.  For example, hotels and restaurants have long struggled with how to respond to potentially damaging criticism left by customers on TripAdvisor. Recently, the owner of the Oriel Daniel Tearoom in Llangefni, Anglesey, made the news when they responded to negative comments with brutal replies and threats of lifetime bans.

What Does This Mean For Your Business?

For the most part, potential customers are likely to be able to take a balanced view of the comments that they read when finding out more about a business. However, the fact that a Federal Court judge has ruled against allowing those who post potentially damaging comments to hide behind online anonymity means that there may well be an argument for platforms to amend their rules to redress the balance more in favour of businesses.  It does seem unfair that, as in the case of the dentist, where the overwhelming majority of comments have been good, an individual, who may be a competitor or a person with an axe to grind, is allowed to anonymously and publicly publish damaging comments, whether justified or not, for a global audience to see and with no need to prove their allegations – something that would be subject to legal scrutiny in the offline world.  It will be interesting to see Google’s response to this ground-breaking ruling.

Google In Talks About Paying Publishers For News Content

It has been reported that Google is in talks with publishers with a view to buying in premium news content for its own news services to improve its relationship with EU publishers, and to combat fake news.

Expanding The Google News Initiative

Reports from the U.S. Wall Street Journal indicate that Google is in preliminary talks with publishers outside the U.S. in order to expand its News Initiative (https://newsinitiative.withgoogle.com/), the programme through which Google works with journalists, news organisations, non-profits and entrepreneurs to ensure that fake news is effectively filtered out of current stories in the ‘digital age’.  Examples of big-name ‘partners’ that Google has worked with as part of the initiative include the New York Times, The Washington Post, The Guardian and fact-checking organisations such as the International Fact-Checking Network and CrossCheck (which fact-checked the French election).

As well as partnerships, the Google News Initiative provides a number of products for news publishing e.g. Subscribe With Google, News on Google, Fact Check tags and AMP stories (tap-operated, full-screen content).

This Could Please Publishers

The move by Google to pay for content should please publishers, some of whom have been critical of Google and other big tech players for hosting articles on their platforms that attract readers and advertising money without paying to display them. Google faced particular criticism in France at the end of last year after the country implemented a European directive that should have made tech giants pay for news content but that, in practice, simply led to Google removing the snippets below links to French news sites and the thumbnail images that often appear next to news results.

Back in 2014, for example, Google closed its Spanish news site after it was required to pay ‘link tax’ licensing fees to Spanish news sites, and in November 2018 Google would not rule out shutting down Google News in other EU countries if they adopted a ‘link tax’.

Competitors

Google is also in competition with other tech giants who now provide their own fact-checked and moderated news services.  For example, back in October 2019, Facebook launched its own ‘News’ tab on its mobile app which directs users to unbiased, curated articles from credible sources.

What Does This Mean For Your Business?

For European countries and European publishers, it is likely to be good news that Google is possibly coming to the table to offer some money for the news content that it displays on its platform, and that it may be looking for a way to talk about and work through some of the areas of contention.

For Google, this is an opportunity for some good PR in an area where it has faced criticism in Europe, an opportunity to improve its relationship with publishers in Europe, plus a chance to add value to its news service and to help Google to compete with other tech giants that also offer news services with the fake news weeded out.