Google Accused of ‘Stealing’ Android Data

Google is being sued for (allegedly) secretly using Android users’ cellular data allowances without user consent, to perform “passive” information transfers.

Passive Information Transfers

The lawsuit, Taylor et al v. Google, was filed in a US federal district court in San Jose.  The plaintiffs have filed a Class Action Complaint against Google alleging that owners of Android mobile devices are having their costly cellular data plans used (secretly) by Google to enable its own surveillance activities.  The complainants say that Android users’ data allowances are therefore the subject of “passive” information transfers, which are not initiated by any action of the user and are performed without their knowledge, and that these transfers deprive users of data for which they, not Google, have paid.

Designed That Way?

It is also alleged that this secret transfer of users’ data allowances can happen even when a user is within Wi-Fi range (which would avoid consuming cellular data), and that Google may have designed and coded its Android operating system and Google applications to take advantage of data allowances indiscriminately, passively transferring information at any time of the day, even when Google apps have been put in the background, have been closed completely, or location-sharing has been disabled.

Didn’t Sign Up To It?

The complainants are also arguing that they didn’t consent to the transfers and what amounts to subsidising of Google’s surveillance, and were given no warning or option to disable the transfers.  It is alleged that although users sign up to Terms of Service, the Privacy Policy, the Managed Google Play Agreement, and the Google Play Terms of Service, none of these documents explicitly disclose that Google spends users’ cellular data allowances for background transfers.

Android Lockbox

One explanation may be the Android Lockbox project, whereby Google collects Android user data for use within Google.  In operation since 2013, the project is understood to be a way for Google to collect information through its Mobile Services, apps and APIs, Chrome, Drive and Maps (pre-installed with Android), with a view to tracking usage of its own apps and comparing those apps with third-party and rival apps.

What Does This Mean For Your Business?

The matter is the subject of unproven allegations and a legal case and, as such, there is no official conclusion.  Based purely on the details in the complaint, however, it does seem unfair for customers to have paid for a certain quantity of data that may not only be technically unavailable when they need it, due to background data transfers for Google’s own ends, but for which there appears to have been no real consent and no way to stop the transfers.  Some may say that Google appears to have ‘form’ in this area.  For example, back in May, Google was the subject of a lawsuit filed over allegations that it illegally tracked Android users’ location without their consent, and that this still happened when location-tracking features had been manually disabled. It appears, therefore, that while big tech companies argue that data and information may be legitimately needed to improve services, users are increasingly conscious that there are ongoing issues concerning transparency, privacy and more that need to be monitored and questioned where necessary.

Featured Article – Data Breaches: The Fallout

In this article, we look not just at data breaches themselves but also at their impact on businesses and organisations.

Data Breaches

A personal data breach, as defined by the UK’s data watchdog and regulator, The Information Commissioner’s Office (ICO), is “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data.” This definition highlights, therefore, that a breach could be accidental or deliberate, and could be more than just about losing personal data.  For example, data breaches could occur through third-party access, accidentally sending data to the wrong recipient, devices being lost or stolen, altering data without permission or losing the availability of personal data.


As UK businesses should know, GDPR (the regulations of which will soon be brought into UK law, post-Brexit) alongside The Data Protection Act 2018 (DPA 2018), covers how companies and organisations collect and handle data. The ICO is the body in charge of enforcing these data regulations.

Under the current law, companies and organisations must notify the ICO of a breach within 72 hours of becoming aware of it, even if all the details are not available yet.

Data Breach Consequences For Businesses

There are many different consequences of a data breach for businesses, not all of which are legal. Examples of the effects that a data breach can have include:

– Loss of customer trust, leading to …

– Loss of customers and, therefore, a loss of income from those customers, and a gift to competitors (strengthening of competitors) as customers jump ship. One well-known example is the TalkTalk data breach back in 2015 where the hack of 155,000+ customer records resulted in the loss of 100,000+ customers within months. 

– Fines. For example, the ICO gave British Airways the biggest fine to date under GDPR, £183 million, for a data breach in which the personal details of 500,000 customers were accessed by hackers. In addition to the British Airways fine, other big fines decided by the ICO in the second half of this year alone include Ticketmaster UK Limited, fined £1.25 million on 13 November 2020 for failing to protect customers’ payment details, and Marriott International Inc, fined £18.4 million on 30 October 2020 for failing to keep millions of customers’ personal data secure.  GDPR sets a maximum fine of €20 million (£18 million) or 4% of annual global turnover (whichever is greater) for the most serious data breaches, although infringements don’t always result in fines.

– Loss of revenue. A big loss of customers means a big loss of revenue; the knock-on effects of which can be cuts and a loss of jobs.  In TalkTalk’s case, the initial financial hit was £15 million with exceptional costs of £40 to £45 million.

– Other (perhaps unconnected) areas of the business under the same brand being tarred with the same brush.

– Lost potential of future customers.

– Loss of upsell opportunities for other services.

– Falling share prices.

– Damage to reputation. For example, a CISO Benchmark Report (2020) showed that the number of organisations reporting reputational damage from data breaches rose from 26 per cent to 33 per cent over the past three years. Taking Facebook’s Cambridge Analytica data-sharing scandal as an example, a survey by The Manifest (2019) showed that 44 per cent of social media users surveyed said that their view of Facebook had become more negative after Cambridge Analytica. A bad reputation can now be exacerbated by the speed of communications, naming and shaming online (often on multiple websites) and the fact that bad news can hang around on the Internet for a long time and can be extremely difficult to remove, hide, or distract from.  Re-building reputation and trust can be a long and expensive process.

– Facing difficult questions, often in a very public setting, e.g. Facebook being questioned by the U.S. Congress, or a grilling by shareholders and other business stakeholders.

– Costly disruption and downtime. Data breaches can bring the business to a standstill, and companies without business continuity or disaster recovery plans can suffer more serious financial and other consequences or could go out of business altogether (see below).

– The business being forced to close.  For example, a 2019 survey, commissioned by the National Cyber Security Alliance (U.S.) and conducted by Zogby Analytics found that 10 per cent of small businesses that suffered a data breach closed down and 25 per cent filed for bankruptcy.

– Lawsuits.  Carrying on with the example of Facebook’s Cambridge Analytica scandal, following a £500,000 ICO fine for data breaches, the social media giant was hit with a £266 billion lawsuit by the Australian Information Commissioner. There is also the possibility that in the event of a data breach, companies may incur huge costs by having to pay compensation to victims.

– Damage to the supply chain. A loss of customers and bad publicity that hangs around for a long time can inflict damage on other businesses in the supply chain.  This, in turn, could lead to a loss of the alliances and synergies that helped create a product’s/service’s differentiation or source of competitive advantage.

– Loss of supplier trust. Many suppliers now prefer to do business with companies that are GDPR compliant as a way of helping to maintain their own compliance.  A serious data breach could not only damage or destroy current supplier relationships but could result in word getting around within the industry, thereby scuppering future value-adding relationships with some suppliers.
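The GDPR fine cap mentioned in the list above (the greater of €20 million or 4 per cent of annual global turnover) can be illustrated with a quick calculation — the turnover figures below are purely illustrative:

```python
def gdpr_max_fine(annual_global_turnover_eur: float) -> float:
    """Maximum GDPR fine for the most serious infringements:
    the greater of EUR 20 million or 4% of annual global turnover."""
    return max(20_000_000, 0.04 * annual_global_turnover_eur)

# A company turning over EUR 2 billion faces a cap of EUR 80 million,
# because 4% of turnover exceeds the EUR 20 million floor:
print(gdpr_max_fine(2_000_000_000))  # 80000000.0

# For a smaller company (EUR 100 million turnover), the EUR 20 million
# figure is the greater of the two:
print(gdpr_max_fine(100_000_000))    # 20000000.0
```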

End-User Victims

Although the main focus of this article is the effects of data breaches on businesses, it should not be forgotten that end-users who have had their personal details stolen, lost, or compromised are also victims.  It should be remembered that many end-users still indulge in password reuse (using the same password for several websites and platforms) and in using generally weak passwords. This can amplify the effects of a single data breach at one company, as users’ personal details are sold and shared among other cybercriminals, who may use credential stuffing to access other websites with the same user’s credentials.  Examples of the consequences that end-user data-breach victims can face include:

– Theft of bank details, money, and other personal details.

– Having to change multiple passwords and enact credit freezes.

– Fraud and extortion.

– Identity theft and complications resulting from this.

Biometric Data Breach

As the use of biometric data for verification gains in popularity due to its security advantages over passwords, the big problem with a breach of biometric data (e.g. faces and fingerprints) is that, unlike passwords and PINs, it can’t be changed.  This means that unless there are multiple forms of verification in a system, stolen biometric data could continue to be damaging for end-users far into the future and could cause problems for companies that have invested heavily in a biometric system.  A recent example of a potential biometric data (fingerprint scanning) breach came in August 2019, when Suprema, a South Korea-based biometric technology company and one of the world’s top 50 security manufacturers, was reported to have accidentally exposed more than one million fingerprints online after installing its standard Biostar 2 product on an open network. The bigger potential problem was that Biostar 2 is part of the AEOS access control system, which is used by over 5,700 organisations in 83 countries, including big multinational businesses, SMEs, governments, banks, and even the UK Metropolitan Police.

Dealing With A Data Breach

How a company deals with a data breach can make a big difference in the outcome for that company.  For example, a good approach to dealing with a data breach may be evaluating the situation, closing loopholes and removing the threat, then offering transparent and open communications e.g. notifying customers, notifying the ICO, issuing a public statement (on the website) and opening communication channels with customers (online chat, social media, telephone, and email).


Prevention is clearly the best way to avoid the negative effects of breaches and this requires assessing risks, putting data security and data privacy policies in place, training staff, keeping anti-virus, patches and fixes up to date, monitoring for new threats and potential risks, paying attention to staying GDPR compliant and much more.

Scammer Accidentally Calls Cyber-Crime Squad

A hapless scammer pretending to be from a broadband network got more than he bargained for when he accidentally called, and tried to work his scam on, the cyber-crime squad of an Australian police force.

Claimed To Be From Broadband Network

The scammer, claiming to be from Australia’s National Broadband Network (NBN), which does not make such calls to end-users, accidentally called the Financial and Cybercrime Investigation Branch (FCIB) of South Australia.  The purpose of the call appeared to be to direct the recipient to a website imitating the NBN website. Once there, the call recipient would be encouraged to download remote-access software onto their computer, with the ultimate aim of gaining access to personal information, including passwords for online banking.

Tech Support Hoax

The caller, who is believed to have been part of a group of scammers calling Adelaide landlines, claimed to be a tech support person and that the call recipient needed to download software in order to fix an Internet problem (after an alleged hack).

Police Answered

Unfortunately for the scammer, the call was answered by a member of the FCIB who then used secure software in order to safely follow the caller’s instructions and thereby understand the true nature of the scam. 

Directed To A Poorly Designed Website

The member of the FCIB reported being directed by the scammer to a “poorly designed” website where they were told to carry out the steps needed to download the software.  The FCIB member also reported that the fake website’s address had web-hosting text preceding the .com, indicating that it was not affiliated with the NBN and was most probably fake.
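The tell-tale hosting text in the address reflects a simple principle: the registered domain in a URL’s hostname reveals who actually controls a site, however official the rest of the address looks. A minimal sketch of such a check, assuming the official domain is nbnco.com.au (the scam URL below is invented for illustration):

```python
from urllib.parse import urlparse

def looks_official(url: str, official_domain: str = "nbnco.com.au") -> bool:
    """Return True only if the URL's hostname is the official domain
    or a subdomain of it; anything else is a likely impostor."""
    host = (urlparse(url).hostname or "").lower()
    return host == official_domain or host.endswith("." + official_domain)

# A scam page on a generic web-hosting domain fails the check,
# even with an official-sounding label in the subdomain:
print(looks_official("https://nbn-support.example-hosting.com/fix"))  # False
print(looks_official("https://www.nbnco.com.au/"))                    # True
```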

Following failed attempts by the scammer to convince the FCIB member to download the software (malware), the scammer terminated the call.

What Does This Mean For Your Business?

Luckily, in this case, the FCIB were able to see exactly how a group of scammers were operating and were able to issue detailed warnings in the local area.  This story is a reminder that no-one is safe from scam calls and that scammers using phishing and social engineering pose a serious risk.  Even though many businesses may know that legitimate companies do not call out of the blue and ask people to download software, all staff in an organisation should be made aware (e.g. through training) of the policy and procedures regarding this kind of risk (e.g. never to click on unfamiliar emails or links, or to download unfamiliar software). Businesses should instruct staff that, if they are in any doubt about who a caller is, they should hang up and only call the organisation back on a known, reputable number.  Incidents should, ideally, be reported to the police, Action Fraud, and the relevant member of staff in the call recipient’s company.

That said, cyber-criminals are becoming more sophisticated in their attacks on businesses, and a Proofpoint Human Factor report from last year showed that as many as 99 per cent of cyber-attacks now involve social engineering through cloud applications, email or social media.  This illustrates how attackers are now much keener on trying to enable a macro, or to trick people into opening a malicious file or following a malicious link through human error, rather than facing the considerable and time-consuming challenge of hacking into the (often well-defended) systems and infrastructure of enterprises and other organisations.  Businesses should therefore boost their efforts in guarding against this type of attack.

Are You Being Tracked By WhatsApp Apps?

A recent Business Insider Report has highlighted how third-party apps may be exposing some data and details of the activity of WhatsApp users.

WhatsApp – Known For Encryption

Facebook-owned WhatsApp is known for its end-to-end encryption.  This means that only the sender and receiver can read the messages between them.

In addition to being convenient, free, and widely used in business, the secure encryption that users also value has even been a target for concerned governments (including the UK’s) who have campaigned for a ‘back door’ to be built-in in order to allow at least some security monitoring.

Able To Exploit Online Signalling

If the Business Insider revelations are correct, however, third-party apps may already be making the usage of WhatsApp less secure than users may think.  The business news website has reported that third-party apps may be able to use WhatsApp’s online signalling feature to enable monitoring of the digital habits of anyone using WhatsApp without their knowledge or consent.  This could include tracking who users are talking to, when they are using their devices and even when they are sleeping.

Shoulder Surfing

Back in April, there were also media reports that hackers may be potentially able to use ‘shoulder surfing’ (spying in close proximity to another phone) and the knowledge of a user’s phone number to obtain an account restoration code from WhatsApp which could allow a user’s WhatsApp account to be compromised.

Numbers In Google Search Results

Also, back in June, an Indian researcher highlighted how WhatsApp’s ‘Click to Chat’ feature may not hide a user’s phone number in the link and that looking up “” in Google, at the time, allegedly revealed 300,000 users’ phone numbers through public Google search results.

Other Reports

Other reports questioning how secure the data of WhatsApp users really is have also focused on how, although messages may be secure between users in real-time, backups stored on a device or in the cloud may not be under WhatsApp’s end-to-end encryption protection.  Also, although WhatsApp only stores undelivered messages for 30 days, some security commentators have highlighted how the WhatsApp message log could be accessed through the chat backups that are sometimes saved to a user’s phone.


One potential signal that WhatsApp may not be as secure as users think could be the fact that, back in February, the European Commission asked staff to use the Signal app instead of WhatsApp.

What Does This Mean For Your Business?

As may reasonably be expected from a widely used, free app owned by a tech giant, yes, WhatsApp collects usage and some other data from users, but it does still have end-to-end encryption. There are undoubtedly ways in which aspects of security around the app and its use could be compromised, but for most business users it is a very useful and practical tool that has become central to how they regularly communicate with each other.  At the current time, WhatsApp’s owner, Facebook, appears to be concentrating much of its effort on competing with Zoom and on promoting the free group video calls and chats it has just launched on its own desktop Messenger app. Even though WhatsApp is coming in for some criticism over possible security problems, the plan is still to grow the app, add more features, and keep it current and competitive, which is why it recently ceased support for old operating systems.

Featured Article – Facial Recognition, Facial Authentication and the Future

Facial recognition and facial authentication sound similar, but there are distinct differences, and this article takes a broad look at how both are playing more of a role in our lives going forward. So firstly, what’s the difference?

Facial Recognition

This refers to the biometric technology system that maps facial features from a photograph or video taken of a person e.g. while walking in the street or at an event and then compares that with the information stored in a database of faces to find a match. The key element here is that the cameras are separate from the database which is stored on a server.  The technology must, therefore, connect to the server and trawl through the database to find the face.  Facial recognition is often involuntary i.e. it is being used somewhere that a person happens to go – it has not been sought or requested.

Facial recognition is generally used for purposes such as (police) surveillance and monitoring, crime prevention, law enforcement and border control.

Facial Authentication

Facial Authentication, on the other hand, is a “match on device” way of a person proving that they are who they claim to be.  Unlike facial recognition, which requires details of faces to be stored on a server somewhere, a facial authentication scan compares the current face with the one that is already stored (encrypted) on the device.  Typically, facial authentication is used by a person to gain access to their own device, account, or system.  Apple’s FaceID is an example of a facial authentication system. Unlike facial recognition, it is not something that involuntarily happens to a person but is something that a person actively uses to gain entry/access.

Facial recognition and facial authentication both use advanced technologies such as AI.

Facial Recognition – Advantages

The advantages of facial recognition technology include:

– Saving on Human Resources. AI-powered facial recognition systems can scan large areas, large moving crowds and can pick out individuals of interest, therefore, saving on human resources.  They can also work 24/7, all year round.

– Flexibility. Cameras that link to facial recognition systems can be set up almost anywhere, fixed in place or as part of mobile units.

– Speed. The match with a face on the database happens very quickly (in real-time), thereby enabling those on the ground to quickly apprehend or stop an individual.

– Accuracy. Systems are very accurate on the whole, although police deployments in the UK have resulted in some mistaken arrests.

Facial Recognition Challenges

Some of the main challenges to the use of facial recognition in recent times have been a lack of public trust in how and why the systems are deployed, how accurate they are (leading to possible wrongful arrest), how they affect privacy, and the lack of clear regulations to effectively control their use.

For example, in the UK:

– In December 2018, Elizabeth Denham, the UK’s Information Commissioner launched a formal investigation into how police forces used FRT after high failure rates, misidentifications and worries about legality, bias, and privacy. This stemmed from the trial of ‘real-time’ facial recognition technology on Champions League final day June 2017 in Cardiff, by South Wales and Gwent Police forces, which was criticised for costing £177,000 and yet only resulting in one arrest of a local man whose arrest was unconnected.

– Trials of FRT at the 2016 and 2017 Notting Hill Carnivals led to the Police facing criticism that FRT was ineffective, racially discriminatory, and confused men with women.

– In September 2018 a letter, written by Big Brother Watch (a privacy campaign group) and signed by more than 18 politicians, 25 campaign groups, and numerous academics and barristers, highlighted concerns that facial recognition is being adopted in the UK before it has been properly scrutinised.

– In September 2019 it was revealed that the owners of King’s Cross Estate had been using FRT without telling the public, and with London’s Metropolitan Police Service supplying the images for a database.

– A recently published letter by London Assembly members Caroline Pidgeon MBE AM and Sian Berry AM to Metropolitan Police Commissioner Cressida Dick asked whether FRT could be withdrawn during the COVID-19 pandemic on the grounds that it has been shown to be generally inaccurate and still raises questions about civil liberties. The letter also highlighted the example of the first two deployments of live facial recognition (LFR) this year, where more than 13,000 faces were scanned, only six individuals were stopped, and five of those six were misidentified and incorrectly stopped by the police. Also, of the eight people who triggered a ‘system alert’, seven were incorrectly identified. Concerns have also been raised about how the already questionable accuracy of FRT could be challenged further by people wearing face masks to curb the spread of COVID-19.

In the EU:

Back in January, the European Commission considered a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use were put in place.

In the U.S.:

In 2018, a report by the American Civil Liberties Union (ACLU) found that Amazon’s Rekognition software was racially biased after a trial in which it misidentified 28 black members of Congress.

In December 2019, a US report showed that, after tests by the National Institute of Standards and Technology (NIST) of 189 algorithms from 99 developers, facial recognition technology was found to be less accurate at identifying African-American and Asian faces, and was particularly prone to misidentifying African-American females.

Backlash and Tech Company Worries

The killing of George Floyd and of other black people in the U.S. by police led to a backlash against facial recognition technology (FRT) and strengthened fears among big tech companies that they may, in some way, be linked with its negative aspects.

Even though big tech companies such as Amazon (Rekognition), Microsoft and IBM supply facial recognition software, some have not sold it to police departments pending regulation, and most have had their own concerns for some years.  For example, back in 2018, Microsoft said on its blog that “Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses”.

With big tech companies keen to maintain an ethical and socially responsible public profile, to follow up on their previous concerns about problems with FRT systems and a lack of regulation, and to distance themselves from the behaviour of police as regards racism/racial profiling, or any connection to it through supplying FRT software, four big tech companies recently announced the following:

– Amazon has announced that it is implementing a one-year moratorium on police use of its FRT in order to give Congress enough time to implement appropriate rules. 

– After praising progress being made in the recent passing of “landmark facial recognition legislation” by Washington Governor Jay Inslee, Microsoft has announced that it will not sell its FRT to police departments until there is a federal law (grounded in human rights) to regulate its use.

– IBM’s CEO, Arvind Krishna, has sent a letter to the U.S. Congress with policy proposals to advance racial equality, stating that IBM will no longer offer its general-purpose facial recognition or analysis software.

– Google has also distanced itself from FRT with Timnit Gebru, leader of Google’s ethical artificial intelligence team, commenting in the media about why she thinks that facial recognition is too dangerous to be used for law enforcement purposes at the current time.


The need to wear masks during the pandemic has proven to be a real challenge for facial recognition technology.  For example, recent research results from the US National Institute of Standards and Technology showed that even the most advanced facial recognition algorithms can correctly identify as few as between 2 and 50 per cent of faces when masks are worn.

Call For Clear Masks

There have been some calls for the development of clear or transparent masks, or masks with a kind of ‘window’, as highlighted by the National Deaf Children’s Society, to help the 12 million people in the UK who are deaf or suffer from degrees of hearing loss, e.g. to help with lip-reading, visual cues and facial expressions.  Some companies are now producing these masks, e.g. the FDA-approved ‘Leaf’ transparent mask by Redcliffe Medical Devices in Michigan.  It remains to be seen how good facial recognition technology is at identifying people wearing a clear/transparent mask as opposed to a normal mask.

Authentication Challenges

The security, accuracy, and speed challenges of more traditional methods of authentication and verification have made facial authentication look like a more effective and attractive option.  For example, passwords can be stolen/cracked and 2-factor authentication can be less convenient and can be challenging if, for example, more than one user needs access to an account.

Facial Authentication Advantages

Some of the big advantages of facial authentication include:

– Greater Accuracy Assurance. The fact that a person needs to match a photo of their government-issued ID (e.g. a passport photo) with a selfie, combined with embedded 3D liveness detection, to set up facial authentication on their device means that it is likely to be accurate in identifying them.

– Ease of Use. Apple’s Face ID is easy to use and is likely to become a preferred way of authentication for users.

– Better Fraud Detection. Companies and individuals using facial authentication may have a better chance of detecting attempted fraud (as it happens) than other current systems. Companies that use facial authentication with systems may, therefore, be able to more confidently assess risk, minimise fraud losses, and provide better protection for company and customer data.

– Faster For Users. Face ID, for example, saves time compared to other methods.

– Cross-Platform Portability. 3D face maps can be created on many devices with a camera, and users can enrol using a laptop webcam and authenticate from a smartphone or tablet. Facial authentication can, therefore, be used for many different purposes.

The Future – Biometric Authentication

The small and unique physical differences that we all have (which would be very difficult to copy) make biometric authentication something that’s likely to become more widely used going forward.  For example, retinas, irises, voices, facial characteristics, and fingerprints are all ways to clearly tell one person from another. Biometric authentication works by comparing a set of biometric data preset by the owner of the device with a second set of biometric data belonging to a device visitor. Many of today’s smartphones already have facial or fingerprint recognition.
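The “preset versus visitor” comparison described above can be sketched as a simple template match. This is only an illustration: it assumes biometric templates are numeric feature vectors and uses an invented cosine-similarity threshold, whereas real systems use proprietary models, calibrated thresholds, and secure hardware:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two feature vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def matches(enrolled: list[float], candidate: list[float],
            threshold: float = 0.95) -> bool:
    """Match-on-device check: compare the stored (enrolled) template with a
    fresh scan; the threshold trades false accepts against false rejects."""
    return cosine_similarity(enrolled, candidate) >= threshold

# The owner's fresh scan closely matches the enrolled template...
print(matches([0.9, 0.1, 0.4], [0.88, 0.12, 0.41]))  # True
# ...while a different person's scan does not:
print(matches([0.9, 0.1, 0.4], [0.1, 0.9, 0.2]))     # False
```

Nothing in this check needs to leave the device, which is the key difference from facial recognition’s server-side database lookup.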

The challenge, however, may come where biometric data is required for access to entry systems that are not “match on device”, i.e. where a comparison must be made with data stored on a server, thereby adding a possible security risk.

Human Micro-Chipping

There may be times when we do not have access to our devices, when fast identification is necessary, or when we need to carry data and information that can’t easily be learned or remembered, e.g. our medical files.  For these situations, some people have presented the argument for human micro-chipping, where microchip implants (cylindrical ‘barcodes’) can be scanned through a layer of skin to transmit a unique signal.

Neuralink Implant

Elon Musk’s Neuralink idea to create an implantable device that can act as an interface between the human brain and a computer could conceivably be used in future for advanced authentication methods.

Looking Forward

The benefits of facial recognition, e.g. for crime prevention and law enforcement, are obvious, but for the technology to move forwards, regulatory hurdles, matters of public trust and privacy, and technical challenges such as bias and accuracy need to be overcome.

Facial authentication provides a fast, accurate and easy way for people to prove that they are who they say they are.  This benefits both businesses and their customers in terms of security, speed and convenience as well as improving efficiency.

More services where sensitive data is concerned e.g. financial and medical services, and government agency interactions are likely to require facial authentication in the near future.

Cyber Security Top of List for Digital Transformation

A recent survey appears to have shown that changes brought by the pandemic have meant that IT buyers from companies working on digital transformation now value cybersecurity the most.


The survey, conducted among IT business leaders attending this month’s all-virtual Digital Transformation Expo, DTX: NOW, showed that 26 per cent of respondents put IT security at the top of their digital transformation list, with the cloud a close second at 21 per cent.

Pandemic Accelerated Digital Transformation

As shown in survey results published last month by Studio Graphene, the need to quickly shift staff to working from home because of the lockdown appeared to be a driver and an accelerator of digital transformation for businesses.  The survey showed that nearly half (46 per cent) of business leaders said that Covid-19 had driven the most pronounced digital transformation that their businesses had experienced.


The distribution of staff working from home which the pandemic lockdown caused has meant that businesses have been forced to adapt not only their cloud strategy, but also their cybersecurity measures and their business cultures, to ensure that they function as well as possible.

Challenges and Gains

The survey found that the biggest challenges to digital transformation projects were changes in scope, reduced budgets, and changes in team structures.  At the same time, the results revealed that the need to ensure all employees could work from home exposed IT issues that may not otherwise have been addressed, thereby helping businesses to modernise and to see which areas needed investment going forward.

New Ways of Working

With further restrictions, local lockdowns, and the possibility of new, stricter measures ahead, plus a decidedly uncertain near future for traditional office-based working, the pandemic has driven diversification of work methods and structures. Flexible, smarter, hybrid working across different locations looks to be a reality for businesses as we try to gain more control in an increasingly unpredictable world and business environment.

What Does This Mean For Your Business?

The results of the survey appear to support the idea that necessity has driven digital transformation.  The pandemic lockdown has been a catalyst that has moved many aspects of business forward, leading companies to see clearly and quickly the importance of cybersecurity, where their weaknesses are, and where investment is needed next, and showing them that new, more flexible models of work can benefit employer and employee alike.  Whilst changes have been difficult, and people and their organisations have been forced to adapt quickly, the lessons learned in digital transformation may have boosted confidence within organisations that they have the in-built flexibility, creativity, experience, and ability to weather the storm and reinvent how they work according to prevailing conditions.

Featured Article – The Challenge of User Access Permissions

Employees being given too much access to privileged, sensitive company data can put an organisation in danger.  In this article, we explore the issues around this subject and how businesses can minimise the risk.


In a recent survey of 900 IT professionals commissioned by IT security firm Forcepoint, it was revealed that 40 per cent of commercial sector respondents and 36 per cent of public sector respondents said they had privileged access to sensitive company data through their work.  Also, 38 per cent of private sector and 36 per cent of public sector respondents said that they did not need the amount of access they were given to complete their jobs.  The same survey showed that 14 per cent of respondents believed that their companies were unaware of who had what access to sensitive data.

The results of this survey confirm existing fears that by not carefully considering or being able to allocate only the necessary access rights to employees, companies may be leaving open a security loophole.

Risks and Threats

The kinds of risks and threats that could come from granting staff too many privileges in terms of sensitive data access include:

Insider Threats

Insider threats can be exceedingly difficult to detect, and exact motives vary, but the focus is generally on gaining access to critical business assets, e.g. people, information, technology, and facilities.  Insiders may be current or former full-time employees, part-time employees, temporary employees, contractors/third parties, and even trusted business partners. The insider may be acting for themselves or for a third party.  Information or data taken could be sold, e.g. to hackers or to representatives of other organisations/groups, or used for extortion/blackmail. An insider could also use their access for sabotage, fraud, social engineering or other crimes, or could cause unintentional damage.

The insider threat has become more widely recognised in recent years and in the U.S., for example, September is National Insider Threat Awareness Month (NIATM).

Intrusions From Curiosity

The digitisation of all kinds of sensitive information, and digital transformation, coupled with users being given excessive access rights, has led to intrusions due to curiosity, which can lead to a costly data breach.  One example is in the health sector where, in the U.S., data breaches occur at the rate of one per day (Department of Health and Human Services’ Office for Civil Rights figures).  Interestingly, Verizon figures show that almost 60 per cent of healthcare data breaches originate from insiders.

Accidental Data Sharing

Some employees may not be fully aware of company policies and rules, particularly at a time when the workforce has been dispersed to multiple locations during the lockdown.  A 2019 Egress survey, for example, revealed that 79 per cent of employers believe their employees may have accidentally shared data over the last year and that 45 per cent sent data to the wrong person by email. Unfortunately, the data shared or sent to the wrong person may have been sensitive data that an individual did not need to have access to in order to do their job.


Compromised Credentials

If hackers and other cybercriminals are able to obtain the login credentials of a user who has access rights to sensitive data (beyond what is necessary), this can provide relatively easy access to the company network and its valuable data and other resources.  For example, cybercriminals could hack or find lost devices or storage media, use social engineering, or use phishing or other popular techniques to get the necessary login details.

How Does It Happen?

The recent Forcepoint and Ponemon Institute survey showed that 23 per cent of IT pros believe that privileged access to data and systems is given out too easily.  The survey results suggest that employees can end up with more access rights than they need because:

– Companies have failed to revoke rights when an employee’s role has changed. 

– Some organisations have assigned privileged access for no apparent reason.

– Some privileged users are being pressured to share access with others.
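The first of those failure modes, rights not being revoked when a role changes, can be caught with a simple audit that diffs each user's actual grants against a baseline for their current role. The sketch below is a minimal illustration; the role names and permission strings are invented, and a real identity system would pull both sides from a directory or IAM service.

```python
# Baseline: the permissions each role is *supposed* to have (illustrative names).
ROLE_BASELINE = {
    "support": {"read_tickets", "read_customers"},
    "finance": {"read_invoices", "write_invoices"},
}

def excess_permissions(user_role, user_grants):
    """Return the grants a user holds beyond their role's baseline."""
    return set(user_grants) - ROLE_BASELINE.get(user_role, set())

# An employee who moved from finance to support but kept an old right:
leftover = excess_permissions("support", {"read_tickets", "write_invoices"})
```

Running such a diff on every role change, and periodically across all accounts, directly addresses the "failed to revoke" problem the survey highlights.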

How To Stop It

Stopping the allocation of too many privileged access rights may be a holistic process that considers many different aspects and activities from multiple sources, including:

– Incident-based security tools. Although these can alert the organisation to potential problems and can register logs and configuration changes, they can also give false positives and it can take a prohibitively long time to fully review the results, find and plug the breach.

– Trouble tickets and badge records.

– Reviews of keystroke archives and video.

– User and entity behaviour analytics tools.

The challenge is that many organisations lack the time, resources, and expertise to piece all these elements together in a meaningful way.

Looking Forward

It appears that where there is a disconnect between IT managers and staff, and where access rights are not regularly monitored or checked, a whole business or organisation can end up being in danger. Some security commentators suggest that the answer lies in easy-to-use technology that incorporates AI to help monitor how data flows and is shared to bring about the necessary visibility as regards who has access and what they’re doing with that access.  Always seeking verification and never acting simply on trust is a key way in which organisations can at least detect malicious activity quickly.

Amazon Review Fraud

An FT investigation appears to have uncovered an estimated 20,000 Amazon reviews where the reviewers were suspected of being paid to give a five-star rating.


The Competition and Markets Authority (CMA) in the UK estimates that £23 billion a year of UK consumer spending is potentially influenced by online reviews.  This makes them especially important to the many businesses selling through platforms like Amazon.

The CMA also acknowledges that there are practices that can breach the Consumer Protection from Unfair Trading Regulations 2008 (CPRs) and UK Advertising Codes and that these practices may prevent consumers from choosing the product or service that best suits their needs.  These can include businesses writing or commissioning fake positive reviews about themselves; businesses or individuals writing or commissioning fake negative reviews; review sites ‘cherry-picking’ positive reviews or suppressing negative ones; and review sites’ moderation processes possibly causing some genuine negative reviews not to be published.

One recently reported trend appears to be the use of multiple one-star feedback reviews for products on Amazon, perhaps funded by competitors, as a form of manipulation.


The FT investigation, which has led Amazon to delete the 20,000 allegedly fraudulent reviews, reportedly showed that the UK’s top Amazon reviewer appeared to have left an average of one five-star rating on the platform every four hours in August.

How Can It Happen?

The process that leads to fraudulent/fake five-star reviews may begin with companies recruiting reviewers on social networks (e.g. in Facebook groups) or messaging apps.  The reviewer buys the product, leaves a good review on the platform, and is then refunded, so the product effectively becomes a free sample in exchange for the five-star rating.

Not New

Fake and fraudulent reviews are not new. Back in July, The Markup claimed to have found suspicious-looking reviews on Amazon, and back in December 2019 a Daily Mail report claimed that marketing firms were selling positive reviews on Amazon. At the time, Amazon said that it had already taken legal action against some firms for this.

Amazon’s Efforts

In the UK, fake reviews fall under consumer protection law and, last year alone, Amazon is reported to have spent $400 million (US) to protect customers from review abuse, fraud, and other forms of misconduct.  It has also been reported that Amazon monitors around 10 million review submissions each week before they go public in an attempt to protect buyers and the credibility of its own review system.

How To Spot Fake Reviews

Ways to spot fake reviews include using services like the free site ‘Fakespot’, where users can copy and paste a link to a product page, then click Analyse to show any evidence of fake reviews.
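One rough screening signal behind tools of this kind is simply posting rate: a reviewer leaving a five-star rating every few hours, as the FT figures describe, is statistically unusual. The sketch below is a crude heuristic with invented thresholds, not how Amazon or Fakespot actually score reviews.

```python
from datetime import datetime, timedelta

def review_rate_per_day(timestamps):
    """Average reviews per day over the span covered by the timestamps."""
    if len(timestamps) < 2:
        return 0.0
    days = (max(timestamps) - min(timestamps)).total_seconds() / 86400 or 1.0
    return len(timestamps) / days

def looks_suspicious(timestamps, star_ratings, max_per_day=3, min_share_5star=0.95):
    """Flag reviewers who post implausibly fast AND almost exclusively five stars."""
    if not star_ratings:
        return False
    rate = review_rate_per_day(timestamps)
    share = star_ratings.count(5) / len(star_ratings)
    return rate > max_per_day and share >= min_share_5star

# A reviewer posting one five-star review every four hours, as in the FT report:
base = datetime(2020, 8, 1)
burst = [base + timedelta(hours=4 * i) for i in range(30)]
```

A real system would combine many more signals (reviewer history, verified purchases, text similarity), but even this rate check would flag the pattern the FT identified.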

What Does This Mean For Your Business?

Amazon is a vital sales platform for many businesses and, with the power that good reviews have in increasing sales and that bad or low-star reviews have in deterring them, it is easy to see how some businesses may be tempted to resort to paid-for manipulation as a competitive tactic. Consumers, Amazon itself, and businesses that are affected by unfair reviews all lose out where fake/fraudulent/manipulated reviews can slip through the vetting process. Many people feel, therefore, that Amazon and other platforms (e.g. social media) need to work together and increase the effort, investment, technology, and creative thinking that could deliver a much improved or an innovative and new review system.

Toyota To Upload Your Data To Amazon

Toyota will use Amazon Web Services (AWS) cloud services for a Mobility Services Platform that will upload data from Toyota cars globally, data that could be used for monetised services.

Data Gathering From Fleet

Toyota’s platform in Amazon’s cloud will gather data from the Data Communication Module (DCM) in each car in Toyota’s global fleet which could be used to help Toyota’s vehicle design and development, but more crucially for services such as ride and car sharing, full-service lease, or behaviour-based (custom) insurance and proactive maintenance notifications.  The use of Amazon’s cloud tech and professional services by Toyota and the sharing of data is intended to help the move towards CASE (Connected, Autonomous/Automated, Shared and Electric) mobility technologies.


Shigeki Tomoyama, Chief Information & Security Officer and Chief Production Officer at Toyota Motor Corporation said that in expanding Toyota’s relationship with AWS, “Connectivity drives all of the processes of development, production, sales and service in the automotive business. Expanding our agreement with AWS to strengthen our vehicle data platform will be a major advantage for CASE activities within Toyota.”

Leveraging Data and AWS Services

Andy Jassy, CEO of AWS, said that “Toyota is leveraging the unmatched breadth and depth of AWS services to transform how it develops and manages new mobility services across its entire ecosystem of connected vehicles around the world,” and “By running on AWS, with its high performance, functionality, and security, Toyota is able to innovate quickly across its enterprise and continue to lead the automotive industry in delivering the quality of experiences that customers expect.”

AWS and Other Automotive Companies

AWS is already working with other big car manufacturers such as Germany’s Volkswagen AG on its cloud-based software and data portal and has worked with transportation providers such as Uber and Avis, and self-driving heavy truck company Embark.

What Does This Mean For Your Business?

The vehicle market is changing, and the ability to gather data about how vehicles are used looks likely to be another key way in which vehicle manufacturers can compete, creating and targeting more monetised and value-adding services.  It will also create new opportunities for associated industries, e.g. the ability to create customised insurance for drivers. For drivers, there may be more cause for concern in terms of data protection and privacy if their identity is linked with car data (e.g. for custom insurance), as theft of this data could have negative implications.

AI-Faked Photos and Videos Concerns

Social media analytics company Graphika has reported identifying images of faces for social media profiles that appear to have been faked using machine learning for the purpose of China-based anti-U.S. government campaigns.

Graphika Detects

Graphika, which advertises a “Disinformation and Cyber Security” service (whereby it can detect strategic influence campaigns), has reported detecting AI-generated fake profile pictures and videos that were being used to attack American policy and the administration of U.S. President Donald Trump in June, at a time when the rhetoric between the United States and China had escalated.

The Graphika website has posted a 34-page report online detailing the findings of what it is calling the “Spamouflage” campaign.

Spamouflage Dragon

The China-based network that, according to Graphika, has been making and spreading the anti-U.S. propaganda material via social media has been dubbed “Spamouflage Dragon”.  Graphika says that Spamouflage Dragon’s political disinformation campaigns started in 2019, focusing on attacking the Hong Kong protesters and exiled Chinese billionaire Guo Wengui (a critic of the Chinese Communist Party), and more recently focused on the U.S. and the Trump administration.

Two Differences This Time

The two big differences in Spamouflage Dragon’s anti-U.S. campaign compared to its anti-Hong Kong protester campaign appear to be:

1. The use of English-language content videos, many of which appear to have been made in less than 36 hours.

2. The use of AI-generated profile pictures that appear to have been made using Generative Adversarial Networks (GANs), a class of machine-learning frameworks that allows computers to generate synthetic photographs of people.

Faked Profile Photos and Videos

Graphika reports that Spamouflage Dragon’s U.S. propaganda attacks have taken the form of:

– AI-generated photos used to create fake followers on Twitter and YouTube.  The photos, which appear to have been generated by GANs trained on a multitude of profile photos stolen from different social media networks, were recognisable as fake because they all had the same blurred-out background, asymmetries where there should be symmetries, and eyes looking straight ahead.

– Videos made in English, and targeting the United States, especially its foreign policy, its handling of the coronavirus outbreak, its racial inequalities, and its moves against TikTok.  The videos were easily identified as fake due to being clumsily made with language errors and automated voice-overs.
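The visual cues described above (uniformly blurred backgrounds and eyes staring straight ahead) lend themselves to simple screening heuristics. The sketch below assumes an upstream image pipeline has already extracted a background sharpness score and per-eye pupil offsets; the function, feature names, and thresholds are purely illustrative and are not Graphika's method.

```python
# Toy heuristic inspired by the reported giveaways of GAN profile photos:
# a very blurred background combined with both pupils fixed dead-centre.
# Feature values are assumed to come from an upstream image pipeline
# (e.g. a 0-1 sharpness score and normalised pupil offsets from centre).

def maybe_gan_generated(background_sharpness, left_pupil_offset, right_pupil_offset,
                        sharpness_floor=0.2, gaze_tolerance=0.05):
    """Flag a photo whose background is very blurred and whose eyes
    stare straight ahead (both pupil offsets near zero)."""
    blurred = background_sharpness < sharpness_floor
    centred = (abs(left_pupil_offset) <= gaze_tolerance
               and abs(right_pupil_offset) <= gaze_tolerance)
    return blurred and centred
```

A single photo matching these cues proves nothing; the signal Graphika relied on was that an entire network of accounts shared them, so a screen like this would be applied across an account cluster rather than to individual images.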

What Does This Mean For Your Business?

With a presidential election just around the corner in the U.S. and with escalating tensions between the super-power nations, the fact that these videos, AI-generated photos and their fake accounts can be so quickly and easily produced is a cause for concern in terms of their potential for political influence and interference.

For businesses, the use of this kind of technology could be a cause for concern if used as part of a fraud or social engineering attack. Criminals using AI-generated fake voices, photos and videos to gain authorisation or to obtain sensitive data is a growing threat, particularly for larger enterprises.