Are You Being Tracked By Third-Party WhatsApp Apps?

A recent Business Insider Report has highlighted how third-party apps may be exposing some data and details of the activity of WhatsApp users.

WhatsApp – Known For Encryption

Facebook-owned WhatsApp is known for its end-to-end encryption. This means that only the sender and the intended recipient can read the messages between them.
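
As a rough illustration of that principle (a minimal sketch using the PyNaCl library, not WhatsApp’s actual Signal-protocol implementation), the idea is that anything relaying the message, such as a server, only ever sees ciphertext:

```python
# Minimal sketch of end-to-end encryption using PyNaCl (libsodium bindings).
# WhatsApp actually uses the Signal protocol; this only illustrates the idea
# that the relaying server cannot read the message.
from nacl.public import PrivateKey, Box

sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key.
sending_box = Box(sender_key, recipient_key.public_key)
ciphertext = sending_box.encrypt(b"See you at 10am")

# A server relaying 'ciphertext' sees only unreadable bytes; only the
# recipient, holding the matching private key, can decrypt it.
receiving_box = Box(recipient_key, sender_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"See you at 10am"
```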

In addition to being convenient, free, and widely used in business, WhatsApp’s secure encryption is valued by users, and it has even been a target for concerned governments (including the UK’s) who have campaigned for a ‘back door’ to be built in so as to allow at least some security monitoring.

Able To Exploit Online Signalling

If the Business Insider revelations are correct, however, third-party apps may already be making the usage of WhatsApp less secure than users may think.  The business news website has reported that third-party apps may be able to use WhatsApp’s online signalling feature to enable monitoring of the digital habits of anyone using WhatsApp without their knowledge or consent.  This could include tracking who users are talking to, when they are using their devices and even when they are sleeping.

Shoulder Surfing

Back in April, there were also media reports that hackers may be able to use ‘shoulder surfing’ (spying in close proximity to another phone), together with knowledge of a user’s phone number, to obtain an account restoration code from WhatsApp which could allow a user’s WhatsApp account to be compromised.

Numbers In Google Search Results

Also, back in June, an Indian researcher highlighted how WhatsApp’s ‘Click to Chat’ feature may not hide a user’s phone number in the link, and that searching for “site:wa.me” in Google at the time allegedly revealed 300,000 users’ phone numbers in public search results.

Other Reports

Other reports questioning how secure the data of WhatsApp users really is have also focused on how, although messages may be secure between users in real-time, backups stored on a device or in the cloud may not be under WhatsApp’s end-to-end encryption protection.  Also, although WhatsApp only stores undelivered messages for 30 days, some security commentators have highlighted how the WhatsApp message log could be accessed through the chat backups that are sometimes saved to a user’s phone.

Signal?

One potential signal that WhatsApp may not be as secure as users may think could be the fact that, back in February, the European Commission asked staff to use the Signal app instead of WhatsApp.

What Does This Mean For Your Business?

As may reasonably be expected from a widely used and free app owned by a tech giant, yes, WhatsApp collects usage and some other data from users, but it does still have end-to-end encryption. There are undoubtedly ways in which aspects of security around the app and its use could be compromised, but for most business users it has become a very useful and practical tool that is central to how they regularly communicate with each other. At the current time, WhatsApp’s owner, Facebook, appears to be concentrating much of its effort on competing with Zoom and on promoting the free group video calls and chats it has just launched on its desktop Messenger app. Even though WhatsApp is coming in for some criticism over possible security problems, the plan is still to grow the app, add more features, and keep it current and competitive, which is why it recently ceased support for old operating systems.

Featured Article – Facial Recognition, Facial Authentication and the Future

Facial recognition and facial authentication sound similar, but there are distinct differences, and this article takes a broad look at how both are playing more of a role in our lives going forward. So firstly, what’s the difference?

Facial Recognition

This refers to the biometric technology system that maps facial features from a photograph or video taken of a person (e.g. while walking in the street or at an event) and then compares that with the information stored in a database of faces to find a match. The key element here is that the cameras are separate from the database, which is stored on a server. The technology must, therefore, connect to the server and trawl through the database to find the face. Facial recognition is often involuntary, i.e. it is used somewhere that a person happens to go – it has not been sought or requested.

Facial recognition is generally used for purposes such as (police) surveillance and monitoring, crime prevention, law enforcement and border control.

Facial Authentication

Facial Authentication, on the other hand, is a “match on device” way of a person proving that they are who they claim to be. Unlike facial recognition, which requires details of faces to be stored on a server somewhere, a facial authentication scan compares the current face with the one that is already stored (encrypted) on the device. Typically, facial authentication is used by a person to gain access to their own device, account, or system. Apple’s Face ID is an example of a facial authentication system. Unlike facial recognition, it is not something that involuntarily happens to a person but is something that a person actively uses to gain entry/access.
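
To make the “match on device” idea concrete, here is a minimal, illustrative sketch (the embeddings, names and threshold are assumptions for illustration, not Apple’s Face ID implementation) in which a freshly captured face template is compared with the one stored on the device, with no server involved:

```python
# Illustrative sketch of "match on device" facial authentication.
# Face templates are represented as numeric feature vectors (embeddings);
# the vectors and threshold below are made-up values for illustration.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Template enrolled and stored (encrypted) on the device at set-up time.
enrolled_template = np.array([0.12, 0.87, 0.33, 0.54])

# Embedding produced from the camera at unlock time.
live_capture = np.array([0.11, 0.85, 0.35, 0.52])

THRESHOLD = 0.95  # assumed acceptance threshold

if cosine_similarity(enrolled_template, live_capture) >= THRESHOLD:
    print("Match: access granted")   # no server lookup involved
else:
    print("No match: access denied")
```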

Facial recognition and facial authentication both use advanced technologies such as AI.

Facial Recognition – Advantages

The advantages of facial recognition technology include:

– Saving on Human Resources. AI-powered facial recognition systems can scan large areas, large moving crowds and can pick out individuals of interest, therefore, saving on human resources.  They can also work 24/7, all year round.

– Flexibility. Cameras that link to facial recognition systems can be set up almost anywhere, fixed in place or as part of mobile units.

– Speed. The match with a face on the database happens very quickly (in real-time), thereby enabling those on the ground to quickly apprehend or stop an individual.

– Accuracy. Systems are very accurate on the whole, although police deployments in the UK have resulted in some mistaken arrests.

Facial Recognition Challenges

Some of the main challenges to the use of facial recognition in recent times have been a lack of public trust in how and why the systems are deployed, how accurate they are (leading to possible wrongful arrest), how they affect privacy, and the lack of clear regulations to effectively control their use.

For example, in the UK:

– In December 2018, Elizabeth Denham, the UK’s Information Commissioner, launched a formal investigation into how police forces used facial recognition technology (FRT) after high failure rates, misidentifications and worries about legality, bias, and privacy. This stemmed from the trial of ‘real-time’ facial recognition technology on Champions League final day in June 2017 in Cardiff, by South Wales and Gwent Police forces, which was criticised for costing £177,000 and yet resulting in only one arrest, of a local man, for a matter unconnected with the event.

– Trials of FRT at the 2016 and 2017 Notting Hill Carnivals led to the Police facing criticism that FRT was ineffective, racially discriminatory, and confused men with women.

– In September 2018 a letter, written by Big Brother Watch (a privacy campaign group) and signed by more than 18 politicians, 25 campaign groups, and numerous academics and barristers, highlighted concerns that facial recognition is being adopted in the UK before it has been properly scrutinised.

– In September 2019 it was revealed that the owners of King’s Cross Estate had been using FRT without telling the public, and with London’s Metropolitan Police Service supplying the images for a database.

– A recently published letter by London Assembly members Caroline Pidgeon MBE AM and Sian Berry AM to Metropolitan Police commissioner Cressida Dick asked whether the FRT technology could be withdrawn during the COVID-19 pandemic on the grounds that it has been shown to be generally inaccurate and still raises questions about civil liberties. The letter also highlighted the example of the first two deployments of live facial recognition (LFR) this year, where more than 13,000 faces were scanned, only six individuals were stopped, and five of those six were misidentified and incorrectly stopped by the police. Also, of the eight people who triggered a ‘system alert’, seven were incorrectly identified. Concerns have also been raised about how the already questionable accuracy of FRT could be challenged further by people wearing face masks to curb the spread of COVID-19.

In the EU:

Back in January, the European Commission considered a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use were put in place.

In the U.S.

In 2018, a report by the American Civil Liberties Union (ACLU) found that Amazon’s Rekognition software showed racial bias after a test in which it incorrectly matched 28 members of Congress with mugshots, with the false matches disproportionately involving people of colour.

In December 2019, a US report showed that, after The National Institute of Standards and Technology (NIST) tested 189 algorithms from 99 developers, much of the facial recognition technology was found to be less accurate at identifying African-American and Asian faces, and was particularly prone to misidentifying African-American females.

Backlash and Tech Company Worries

The killing of George Floyd and of other black people in the U.S. by police led to a backlash against facial recognition technology (FRT) and strengthened fears by big tech companies that they may, in some way, be linked with its negative aspects.

Big tech companies such as Amazon (with Rekognition), Microsoft and IBM supply facial recognition software, but some have not sold it to police departments pending regulation, and most have had their own concerns for some years.  For example, back in 2018, Microsoft said on its blog that “Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses”.

With big tech companies keen to maintain an ethical and socially responsible public profile, to follow up on their previous concerns about problems with FRT systems and the lack of regulation, and to distance themselves from the behaviour of police as regards racism/racial profiling or any connection to it (e.g. by supplying FRT software), four big tech companies recently announced the following:

– Amazon has announced that it is implementing a one-year moratorium on police use of its FRT in order to give Congress enough time to implement appropriate rules. 

– After praising progress being made in the recent passing of “landmark facial recognition legislation” by Washington Governor Jay Inslee, Microsoft has announced that it will not sell its FRT to police departments until there is a federal law (grounded in human rights) to regulate its use.

– IBM’s CEO, Arvind Krishna, has sent a letter to the U.S. Congress with policy proposals to advance racial equality, and stating that IBM will no longer offer its general-purpose facial recognition or analysis software.

– Google has also distanced itself from FRT with Timnit Gebru, leader of Google’s ethical artificial intelligence team, commenting in the media about why she thinks that facial recognition is too dangerous to be used for law enforcement purposes at the current time.

Masks

The need to wear masks during the pandemic has proven to be a real challenge to facial recognition technology.  For example, recent research results from the US National Institute of Standards and Technology showed that even the most advanced facial recognition algorithms may correctly identify as few as 2 to 50 per cent of faces when masks are worn.

Call For Clear Masks

There have been some calls for the development of clear or transparent masks, or masks with a kind of ‘window’, as highlighted by the National Deaf Children’s Society, to help the 12 million people in the UK who are deaf or have some degree of hearing loss, e.g. by preserving lip-reading, visual cues and facial expressions.  Some companies are now producing these masks, e.g. the FDA-approved ‘Leaf’ transparent mask by Redcliffe Medical Devices in Michigan.  It remains to be seen how good facial recognition technology is at identifying people wearing a clear/transparent mask as opposed to a normal mask.

Authentication Challenges

The security, accuracy, and speed challenges of more traditional methods of authentication and verification have made facial authentication look like a more effective and attractive option.  For example, passwords can be stolen/cracked and 2-factor authentication can be less convenient and can be challenging if, for example, more than one user needs access to an account.

Facial Authentication Advantages

Some of the big advantages of facial authentication include:

– Greater Accuracy Assurance. The fact that a person typically sets up facial authentication by matching a photo of their government-issued ID (e.g. a passport photo) with a selfie, combined with embedded 3D liveness detection, means that it is likely to be accurate in identifying them.

– Ease of Use. Apple’s Face ID is easy to use and is likely to become a preferred method of authentication for users.

– Better Fraud Detection. Companies and individuals using facial authentication may have a better chance of detecting attempted fraud (as it happens) than other current systems. Companies that use facial authentication with systems may, therefore, be able to more confidently assess risk, minimise fraud losses, and provide better protection for company and customer data.

– Faster For Users. Face ID, for example, saves time compared to other methods.

– Cross-Platform Portability. 3D face maps can be created on many devices with a camera, and users can enrol using a laptop webcam and authenticate from a smartphone or tablet. Facial authentication can, therefore, be used for many different purposes.

The Future – Biometric Authentication

The small and unique physical differences that we all have (which would be very difficult to copy) make biometric authentication something that’s likely to become more widely used going forward.  For example, retinas, irises, voices, facial characteristics, and fingerprints are all ways to clearly tell one person from another. Biometric authentication works by comparing a set of biometric data that is preset by the owner of the device with a second set of biometric data that belongs to a device visitor. Many of today’s smartphones already have facial or fingerprint recognition.

The challenge, however, may come where biometric data is required for access to entry systems that are not “match on device”, i.e. a comparison has to be made with data stored on a server, thereby adding a possible security risk.

Human Micro-Chipping

There may be times when we do not have access to our devices, when fast identification is necessary, or when we need to carry data and information that can’t easily be learned or remembered, e.g. our medical files.  For these situations, some people have presented the argument for human micro-chipping, where microchip implants (cylindrical ‘barcodes’) can be scanned through a layer of skin to transmit a unique signal.

Neuralink Implant

Elon Musk’s Neuralink idea to create an implantable device that can act as an interface between the human brain and a computer could conceivably be used in future for advanced authentication methods.

Looking Forward

The benefits of facial recognition, e.g. for crime prevention and law enforcement, are obvious, but for the technology to move forward, regulatory hurdles, matters of public trust and privacy, and technical challenges such as bias and accuracy need to be overcome.

Facial authentication provides a fast, accurate and easy way for people to prove that they are who they say they are.  This benefits both businesses and their customers in terms of security, speed and convenience as well as improving efficiency.

More services where sensitive data is concerned e.g. financial and medical services, and government agency interactions are likely to require facial authentication in the near future.

Influencers Paid To Promote NHS Test and Trace

In a bid to raise awareness of responsible behaviour concerning COVID-19 among the younger age groups, the UK government is reported to be paying freelance social media influencers and reality TV stars to promote test and trace.

Test and Trace

Test and trace in the UK is branded as NHS but is actually outsourced to private companies and uses a network of commercial testing labs, drive-through centres, and call centres.  The idea of the service, in the absence of an effective app (the UK’s app was trialled on the Isle of Wight but was abandoned after it failed to work on Apple devices and £11 million had been spent), is to enable the identification and contacting of people who may have unknowingly been in close contact with a COVID-positive person, e.g. in a restaurant.

Problem

Even though government schemes (e.g. eat out to help out and other messages) have promoted a return to restaurants and other hospitality businesses, the current narrative focuses on young people as mostly potential asymptomatic spreaders who may not be as concerned about the impact of their behaviour on the wider population.  As such, getting the message to them that they must get tested if they have symptoms and self-isolate if contacted is deemed to be especially important.  Other challenges include the fact that the test and trace service is also reported to be failing to deliver, there appears to be a reluctance among many people to share their contact details, and there is a growing weariness of, and dislike of, pandemic restrictions being imposed, changed and re-imposed.

Freelance Influencers and Reality TV Stars

Younger age groups that have grown up with social media and reality TV are known to be susceptible to messages from social media influencers and reality TV stars.  This is believed to be because:

– Social media influencers are perceived as being more like their audience, sharing more of their experiences and therefore more ‘authentic’ (perhaps unlike more guarded celebrity behaviour), often encouraging engagement with social issues (adding to their credibility), and able to forge a stronger, more engaging direct relationship with followers, i.e. they are trusted, and they are young.

– Reality TV stars are perceived to be ‘ordinary’ (just like their audience), they are open, spontaneous and outspoken (like their young audience) and they appear familiar and almost friend-like to their young admirers i.e. there is a perceived relationship with the star.

– Social media influencers have massive reach.  Individual influencers can have millions of followers.

Proven

Social media influencers and reality TV stars have a proven record of boosting sales of products in, for example, the fashion and beauty industries through their endorsements.

Who and How Much?

The UK government is reported to have enlisted the services of Love Island stars Shaughna Phillips, Josh Denzel and Chris Hughes.  It is likely that a social media influencer with a large following could be paid thousands of pounds for a single post.

What Does This Mean For Your Business?

Businesses in the beauty and fashion industries know how important reviews and endorsements from influencers and reality stars can be in boosting brand-power and sales.  Many different businesses also know how difficult it can be to reach younger audiences in a cost-effective way.  It makes sense, therefore, that using influencers who can promote test and trace among the young in a positive way, and in a way that stresses its ease, convenience and social responsibility, is likely to be a good tactic. Businesses in the hospitality sector, for example, have been particularly affected by pandemic restrictions and they are likely to support any intelligent moves to make going out much safer.

Aside from its promotion, however, questions are still being asked about how far people are having to travel to even get a test, how well the test and trace service is operating, where bottlenecks are, and how accurate capacity and testing figures are.

Featured Article – Just What Is The IoT?

With a vast and growing number of business, industry, consumer and civic IoT devices and systems now being used, we look at their advantages, the threats to the IoT and how we move forward in a way that maximises the benefits and security.

The Internet of Things (IoT)

IoT devices are the devices, now present in most offices and homes, that have a connection to the Internet and are, therefore, ‘smart’ and inter-connected. These devices, each of which has an IP address, could be anything from white goods and smart thermostats to CCTV cameras, medical implants, industrial controllers and building entry systems.

IoT devices transmit and collect data which can be processed in data-centres or the cloud.  IoT devices use several different communications standards and protocols to communicate with other devices.  These include Wi-Fi, Bluetooth, ZigBee (for low-power, short-distance communication) or message queuing telemetry transport (MQTT).
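
As a simple illustration of one of these protocols, the sketch below uses the paho-mqtt Python client to publish a sensor reading to an MQTT broker (the broker address, topic and reading are assumptions for illustration only):

```python
# Minimal sketch: an IoT device publishing a reading over MQTT with paho-mqtt.
# The broker address, topic and reading below are illustrative assumptions.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="office-thermostat-01")
client.connect("broker.example.com", 1883)  # 1883 is the default (unencrypted) MQTT port

reading = {"temperature_c": 21.4, "humidity_pct": 48}
client.publish("building/floor2/thermostat", json.dumps(reading), qos=1)

client.disconnect()
```

In a real deployment the connection would normally use TLS and authentication rather than the plain defaults shown here.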

Categories

The IoT can be categorised as the consumer IoT, industrial IoT, smart homes and offices and even smart cities.

Cloud

Cloud providers also provide IoT platforms that allow IoT devices and gateways to connect with the applications used to deal with the IoT data, coordinate IoT systems and help with their functionality.

Billions

Estimates of the growing number of IoT devices vary, but there are thought to be anywhere between 30 and 50 billion IoT devices worldwide, which could generate more than 4 zettabytes of data this year.

The Advantages of the IoT

Devices and systems that are ‘smart’ (i.e. have an internet connection) have several key advantages, including:

– Data can be gathered from IoT devices that can be used to improve design, operation, security and more. This can help to create new opportunities and launch new, improved products.

– They can be updated and even patched remotely and quickly without requiring physical parts to be replaced.

– Customer interaction and engagement with the product and the brand can be increased by having a smart function.

– Companies can use IoT technologies to reduce their operational costs e.g. by helping to track and monitor equipment and reduce downtime, predict errors, and reduce power consumption.

IoT Security Risks

The risks are that the Internet connection in IoT devices can, if adequate security measures are not in place, provide a way in for hackers to steal personal data, spy on users in their own homes, or remotely take control of devices in order to misuse them.

The main security issue with many of these devices is that they have pre-set, unchangeable default passwords, and once these passwords have been discovered by cyber-criminals, the IoT devices are wide open to being tampered with and misused.

Also, the fact that IoT devices are so prevalent and are often overlooked in security planning (and are therefore likely left unguarded) means that they are vulnerable to hacks and attacks.

Another big risk is that IoT devices are deployed in many systems that link to (and are supplied by) major utilities e.g. smart meters in homes. This means that a large-scale attack on these IoT systems could affect the economy.

“Shadow IoT” devices, i.e. devices connected to corporate networks without the knowledge of IT teams, also now pose a threat to organisations by allowing attackers a way to get into a corporate network. These devices can include fitness trackers, smartwatches, and medical devices.

Real-Life Examples

A poll by Extreme Networks of 540 IT professionals in the U.S., Europe and the Asia-Pacific region found that 70 per cent of companies who said they employed IoT devices were aware of successful or attempted hacks.

Hacks of IoT devices do not just happen to businesses.  With so many IoT devices being present in the modern home we are all now at risk. Some real-life examples of IoT hacking include:

– Hackers talking to a young girl in her bedroom via a ‘Ring’ home security camera (Mississippi, December 2019).  In the same month, a Florida family were subjected to vocal, racial abuse in their own home and subjected to a loud alarm blast after a hacker took over their ‘Ring’ security system without permission.

– In May 2018, a US woman reported that a private home conversation had been recorded by her Amazon voice assistant and then sent to a random phone contact who happened to be her husband’s employee.

– Back in 2017, researchers discovered that a sex toy with an in-built camera could also be hacked.

– In October 2016, the ‘Mirai’ attack used thousands of household IoT devices as a botnet to launch an online distributed denial of service (DDoS) attack (on the DNS service ‘Dyn’) with global consequences.

2020 Hacks

Examples of how some bigger IoT systems and devices have been attacked this year include:

– In February, there were reports that a vulnerability in over 2,300 smart building access systems was being exploited by attackers to launch DDoS attacks.

– In May, supercomputing systems in the UK, Germany, and Switzerland were targeted and infected with cryptocurrency mining malware.

– Also in May, a new form of malware called Kaiji was found to have been used to target IoT devices and Linux servers to make them part of a botnet that could be used for several different types of DDoS attacks.

IoT Security Legislation on the Way

In January this year, the UK government’s Department for Digital, Culture, Media and Sport (DCMS), announced that it is preparing new legislation to enforce new standards that will protect users of IoT devices from known hacking and spying risks.

IoT Household Gadgets

This commitment to legislate leads on from last year’s proposal by then Digital Minister Margot James and follows a seven-month consultation with GCHQ’s National Cyber Security Centre, and with stakeholders including manufacturers, retailers, and academics. 

The proposed new legislation will improve digital protection for users of a growing number of smart household devices (devices with an Internet connection) that are broadly grouped together as the ‘Internet of Things’ (IoT).  These gadgets include kitchen appliances and gadgets, connected TVs, smart speakers, home security cameras, baby monitors and more.

In business settings, IoT devices can include elevators, doors, or whole heating and fire safety systems in office buildings.

The proposed new legislation will be intended to put pressure on manufacturers to ensure that:

– All internet-enabled devices have a unique password and not a default one.

– There is a public point of contact for the reporting of any vulnerabilities in IoT products.

– The minimum length of time that a device will receive security updates is clearly stated.

Challenges

Even though legislation could make manufacturers try harder to make IoT devices more secure, technical experts and commentators have pointed out that there are many challenges to making internet-enabled/smart devices secure because:

– Adding security to household internet-enabled ‘commodity’ items costs money. This would have to be passed on to the customer in higher prices, meaning that the price would not be competitive. Therefore, it may be that security is being sacrificed to keep costs down – sell now and worry about security later.

– Even if there is a security problem in a device, the firmware (the device’s software) is not always easy to update. There are also costs involved in doing so which manufacturers of lower-end devices may not be willing to incur.

– With devices which are typically infrequent and long-lasting purchases, e.g. white goods, we tend to keep them until they stop working, and we are unlikely to replace them because of a security vulnerability that is not fully understood. As such, these devices are likely to remain available to be used by cyber-criminals for a long time.

Looking Ahead

The IoT brings many advantages to businesses in terms of cost savings, the gathering of valuable data, monitoring and management. For consumers, smart devices deliver new levels of value-adding functionality and looking ahead, towns and cities will begin to rely even more on the benefits of IoT devices and systems.

The vast number of IoT devices, many of which go unnoticed or fall outside of realistic risk assessments and/or still contain known weaknesses and vulnerabilities, means that there are big concerns about IoT security and privacy going forward.

New legislation could mean that manufacturers in some parts of the world are more motivated to pay greater attention to the security and labelling of IoT devices, although there is still some way to go.  That said, smart systems combined with other technologies such as AI and the cloud look likely to provide more opportunities for businesses in the future.

Tech Tip – Sync Sticky Notes Across Devices

If you are using Windows 10 and need some simple, handy reminders about work, appointments, calls and more, synced across all your devices and other apps, Microsoft’s Sticky Notes app can help. Here’s how it works:

– Open the Sticky Notes app (type Sticky Notes in the Start menu).

– When you first launch Sticky Notes, sign in with your Microsoft account (as invited by the on-screen message). This will enable the syncing of your Sticky Notes between other devices on the account.

– Click on the + link to type a note, which is then automatically stored in the Sticky Notes history.  Your notes can then be clicked on to re-open, edit, format and more.

– Notes can also be synchronised to the Cloud by going to the History window, clicking on the Settings icon and signing in with your Microsoft account.

– If you move to another device with recent Windows updates (from 10 October 2018) installed, you should be able to see your stored, synced Sticky Notes.

Tech Tip – Turn Your Phone/iPad Into A HD Webcam

Whilst the availability of peripheral hardware for remote working may be less of an issue now than it was just a couple of short months ago (when COVID-19 took off and caused a shortage of equipment), remember that if you are struggling to find a webcam to use at home or in the office, don’t overlook your trusty phone or iPad, which can come to the rescue at a pinch.

With various apps available, you could further extend the functionality to be used as a baby monitor, spy-cam, security camera or even a pet-cam, depending on your requirements – just make sure the app comes from a trustworthy source.

Ritz Roasted

Some diners with bookings at the Ritz Hotel were reportedly targeted by phone scammers who posed as hotel staff to steal credit card details.

What Happened?

The ID spoofing attack involved the fraudsters pretending to be hotel staff, phoning people who already had a dining reservation at the Ritz and asking them to confirm their credit card details, or saying that their card had been declined and asking for a second bank card.

It has been reported that the telephone calls from the scammers were made to appear to come from the Ritz telephone number and that the scammers knew the correct booking details of diners.

It remains unknown exactly how the scammers obtained the details, and the incident and possible data breach were reported by the Ritz to the Information Commissioner’s Office (ICO).

Tried To Spend At Argos

It has been reported that the scammers used the details stolen from a Ritz diner’s card to attempt to buy over £1,000 of goods at the catalogue retailer Argos. When the victim’s bank noticed the transaction, the scammers then phoned the victim, pretending to be the bank, asking for a security code that had been sent to her mobile phone that would enable the cancellation of the Argos transaction.  In fact, the code would have enabled the authorisation of the transaction and the subsequent theft.

The Ritz

The Ritz has reported that the scam took place on 12 August and has emphasised that its team would never contact diners with reservations by telephone to request credit card details to confirm a booking.

Protection

ID scams and social engineering attacks are becoming more common, but there are measures that can be taken to avoid being caught out.  To avoid being scammed in this way, assume that restaurants (and certainly banks) and other businesses will not call to confirm payment details or request authorisation codes. If such a call is received, don’t give any information; end the call and call the company back on the official number shown on any official bills/statements (or on the back of your payment card for the bank), or on the company’s main, official number that you have obtained yourself.  Report the call to the company, Action Fraud and the ICO.

What Does This Mean For Your Business?

In this case, the victims were influenced by the apparent legitimacy of the calls due to the correct details of their booking, the same/similar phone number, the convincing nature of the caller, and perhaps the fact that dining at the Ritz is not a regular occurrence and, therefore, booking processes are unfamiliar.  The scammers also had the benefit of the influence of the brand and the need of victims to avoid the discomfort of embarrassment after being told their card had been declined.

This story shows how scammers can quickly, ruthlessly and effectively exploit and leverage a data breach, and is a lesson to customers to always be suspicious of calls from companies about payment details, and to businesses to give data protection a high priority, even with fluid systems that are in regular daily use. This story illustrates how data breaches can damage brands through bad publicity and a potential loss of customer confidence.

Featured Article – Digital Addiction

Making software and platforms with engaging (but addictive) elements has become popular. However, is this really the route that businesses want to go down with their software?

Digital Addiction

‘Digital addiction’ is a controversial term used to broadly describe sets of compulsive behaviour relating to the use of digital technology, e.g. mobile phones, social media, and the Internet.

It is thought that a recent generation which has grown up with the Internet and smartphones (many of whom have also grown up playing online games and turning to social media to satisfy psychological needs related to ‘self’) may be particularly susceptible to the tactics employed by (mainly consumer) software developers to engage them and keep their attention.

Statistics suggest that British adults check their smartphones every 12 minutes (Ofcom 2019), while 44 per cent of 5 to 16-year-olds feel uncomfortable without a phone signal (Childwise). These figures help to illustrate our reliance on technology but don’t necessarily indicate addiction.

As many psychology commentators would agree, addiction is widely regarded as an illness with a set of behavioural symptoms. There is also some evidence to suggest that there are differences in the prefrontal area of the brain itself, the part that is associated with remembering details, attention, planning, and prioritizing tasks, that may be partly responsible for the type of behaviour that is exhibited by those with a digital addiction disorder e.g. everyday life tasks taking a back seat to the Internet or other digital areas.

Not As Bad?

Digital addiction appears to attract a different perception from other addictions because it provides social and connecting elements rather than isolating the person, it mainly involves legal activity (unlike purchasing illegal drugs), and it is regarded in the public perception as less harmful; ‘addicts’ therefore don’t attract the same prejudices as drug addicts, gamblers or alcoholics.

Types of Addiction

Examples of different types of digital addiction include:

Phone addiction. This is where smartphone users overuse their phones to the point where it has a negative impact on their daily lives and, as such, could be described as a dependence syndrome and a clinical addiction.

Social media addiction. Spending too much time on Facebook, Twitter, Reddit, Instagram or Snapchat (and more) is now the subject of research. The continual checking of Facebook could be regarded as a sign of addiction, and much has now been written about how a need for likes (approval from peers) and the pressure of wanting to appear to lead a life as ‘interesting’ as one’s peers are some of the drivers.

Researchers have focused on how, when a social media user receives rewards in the form of likes and messages, this activates dopamine-producing areas of the brain causing dopamine levels to rise and the person to feel pleasure.  In this sense, excessive social media users have something in common with users of addictive substances.  Social media, therefore, provides multiple immediate rewards (attention from others) for minimal effort, leading the brain to ‘rewire’ itself based on this positive reinforcement, and making people desire likes, retweets, and emoticon reactions.

Internet addiction. Also called Compulsive Internet Use (CIU), Problematic Internet Use (PIU), or iDisorder, Internet addiction has been described as an impulse control disorder and can lead to a perceived blurring of the line between the real and virtual worlds. It can also lead some people into overspending through activities such as online shopping.

Making Software More Addictive

Social media and other online platforms and businesses compete for our time and attention. With this in mind, particularly with consumer rather than business-focused services, methods can be used to increase attention and engagement and to appeal to our innate enjoyment of play. One key example is gamification:

Gamification

Gamification and leader-boards bring game design principles and mechanics to non-game environments and make technology inviting by encouraging users to carry out desired behaviours, showing the path to mastery, and by appealing to a natural need to compete.  The motivation of online rewards can take the form of points and badges.  Gamification is an example of what some consider to be an ‘addictive’ element of software.

One obvious way in which gamification feeds (an existing) addiction is with problem gamblers/gambling addicts who find that smartphones can exacerbate their gambling addiction by removing many of the barriers that would have limited their gambling offline. This, however, is business-to-consumer rather than business-focused software.

Business Software and Gamification

Although business-focused software can include gamification to give it a ‘fun’ element, and to encourage users to behave/change their behaviour in a certain way, it needs to be used intelligently and in a way that allows naturally less competitive people to stay engaged and opt-out of competitive elements where necessary.

Rather than simply using methods such as gamification to make users spend more time on certain software, many believe that business-focused software should be doing the opposite by enabling the user to be more productive.

Examples of how elements of gamification can work in a business software setting include:

– Ranking contact-centre staff by how many calls they complete (a simple sketch of this kind of points-based leaderboard follows this list).

– Staff having a software system to rate each other on how helpful they are e.g. being awarded points by colleagues.
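
To make the idea concrete, here is a minimal, illustrative sketch of such a points-based leaderboard (the point values and staff names are made-up assumptions, not a reference to any particular product):

```python
# Illustrative sketch of a simple gamified leaderboard for a contact centre.
# Point values and names are made up for illustration only.
from collections import defaultdict

POINTS_PER_COMPLETED_CALL = 10
POINTS_PER_PEER_KUDOS = 5   # colleagues awarding each other points for helpfulness

scores = defaultdict(int)

def record_completed_call(agent: str) -> None:
    scores[agent] += POINTS_PER_COMPLETED_CALL

def record_peer_kudos(agent: str) -> None:
    scores[agent] += POINTS_PER_PEER_KUDOS

def leaderboard():
    # Highest score first; staff who opt out of competition could simply be excluded here.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

record_completed_call("Asha")
record_completed_call("Asha")
record_completed_call("Ben")
record_peer_kudos("Ben")

for rank, (agent, points) in enumerate(leaderboard(), start=1):
    print(f"{rank}. {agent}: {points} points")
```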

Keeping The Focus

Ultimately, business software should be inclusive and enable greater productivity.  It can have a fun element that is in keeping with company culture but is unlikely to be successful or value-adding if it aims to be simply ‘addictive’.

Looking Forward

Good business software should involve creating innovative and engaging elements and can involve game elements that are positive motivators in the right direction according to the company’s/organisation’s aims, goals and culture. Other points to remember about how good (rather than just addictive) business software can be made, and how business software management can be improved, include:

– Setting organisational time limits e.g. French companies with 50+ people must negotiate with staff over the responsibility to check emails outside working hours.   

– Pausing the delivery of emails to staff from managers outside work hours can also be helpful in relieving stress and improving effectiveness and efficiency.

– Giving staff easy-to-use rather than complicated software can prevent them from using their own (perhaps unauthorised) alternatives, stop them clinging to old/earlier versions, and can improve productivity.

– Including helpful, gentle nudging and prompting elements in the software to gently encourage behaviour (e.g. deadline warnings).

Featured Article – Huawei: A Ban in the Balance

The US has already banned American companies from working with Huawei, so with that in mind, and with a decision by the UK about Huawei’s involvement in the country’s 5G infrastructure due very soon, we take a closer look at the issues involved.

5G

The UK has been awaiting the development of the 5th generation of mobile broadband infrastructure for a long time.  Most carriers currently use low-band spectrum or LTE, which provides a great coverage area and penetration, yet it is getting very crowded and peak data speeds only top out at around 100Mbps. 5G, on the other hand, offers three different spectrum bands.  More frequencies, faster speeds and less latency should mean big improvements in broadband (particularly commercial) and an end to the slowdowns during busy times of day that have been experienced due to the overcrowding of the current limited LTE.

Rumblings

The first rumblings about Huawei’s alleged security threat can be traced right back to 2001, although this was an allegation from India’s intelligence agencies that Huawei was helping the Taliban.

Following a Cisco lawsuit against Huawei in 2003 over the alleged copying of intellectual property (copying of software and violation of patents), concerns were raised in 2007 over whether a venture between Cisco rival 3Com and Huawei should be permitted due to a perceived lack of transparency in Huawei.

In 2010, more alarm bells started ringing and the Huawei Cyber Security Evaluation Centre (HCSEC) was opened in Banbury, where Huawei products and equipment were tested for security holes. The factory-style centre was set up as a partnership between Huawei and the UK authorities to make sure that the UK’s telecoms infrastructure is not compromised by the involvement of Huawei.

More Recently

The source of the more recent concerns goes back to 2012, when a US House of Representatives Intelligence Committee report flagged up the potential for Chinese state influence through both Huawei and ZTE.

Fast-forward several years, and several further allegations, including those arising from WikiLeaks, and the arrival of President Trump have put Huawei in the spotlight.  In summer 2018, the ‘Five-Eyes’ espionage chiefs from Australia, Canada, New Zealand, the U.K. and the U.S. agreed at a meeting to contain the global growth of Chinese telecoms company Huawei (the world’s biggest producer of telecoms equipment) because of the threat that it could be using its phone network equipment to spy for China. 

From here, bans on Huawei Technologies Ltd. as a supplier for fifth-generation networks equipment followed in the US, Australia, and New Zealand, and Meng Wanzhou, the chief financial officer of Huawei, was detained in Vancouver at the request of U.S. authorities, for allegedly violating US sanctions on Iran. 

In 2019, the US Department of Justice (DOJ) charged Huawei with bank fraud and stealing trade secrets. 

The UK

As one of the ‘Five-Eyes’ countries, the UK was therefore always likely to apply further scrutiny to Huawei, and objections to its products being included in the UK’s 5G infrastructure were on the cards.

In the UK in January 2020, however, the government said that it would allow Huawei equipment to be used in the country’s 5G network, but not in core network functions or critical national infrastructure, and not in nuclear and military sites.  The UK also decided that Huawei’s equipment would only be allowed to make up 35 per cent of the network’s periphery, including radio masts.  It was also understood at the time (following the publication of a document by the National Cyber Security Centre, NCSC) that the UK’s networks would have three years to comply with caps on the use of Huawei’s equipment.

This led to White House chief of staff Mick Mulvaney visiting the UK to help dissuade it from using Huawei’s products in phone networks.

Also, Robert Strayer, the US deputy assistant secretary for cyber and communications while on a tour of Europe, warned that allowing Huawei to provide key aspects of the 5G network infrastructure could allow China to undermine it and to have access to “sensitive data”.  Mr Strayer piled more pressure on the UK by warning that if the UK adopts Huawei as a 5G technology vendor it could threaten aspects of intelligence sharing between the US and UK.

New Sanctions From The US

The US has kept up the pressure on Huawei this year by announcing new sanctions that will stop Huawei and third-party companies that make its chips from using any US technology and software to design and manufacture products. Also, the US government has reiterated its concerns that Huawei has Chinese military backing and, as such, is a threat to national security.

New Report Could Mean A Change

Now, following the UK government recently receiving a report from GCHQ’s National Cyber Security Centre (NCSC), and in the light of the new US sanctions, some commentators are predicting that the UK is likely to change its mind again.  This further possible move away from Huawei could be especially likely since Prime Minister Boris Johnson has acknowledged that he would not want the UK to be “vulnerable to a high-risk state vendor”.

Looking Forward

Although the UK government now has the NCSC report, and a further move away from Huawei looks likely, a final public decision may not be announced for another two weeks, during which time Huawei has indicated that it is open to discussion.

If GCHQ’s National Cyber Security Centre (NCSC) has found legitimate reasons why Huawei’s products pose a security (and diplomatic) risk as part of the 5G network’s periphery it is unlikely that the specific details will be revealed, and the UK will have to find alternative suppliers.  Tensions are already high between the UK and China over Hong Kong and bad news about Huawei certainly will not improve matters.   

Some critics have said that it appears that UK policy is being dictated by the Trump administration, but it is clear that in order for the UK to deliver on its broadband 2025 target, keep costs down, and avoid suffering the collateral damage of an argument that’s primarily between the US and China, some clever manoeuvring may be necessary. 

Competing Against Huawei

President Trump’s administration is reported to have met with major US communications networking companies in a bid to address the need for improved competition with Huawei globally.

Huawei Issues

Many of the issues and incidents that have led to this point, where the Chinese communications company Huawei appears to be a focus for much criticism by the Trump administration, include:

– The belief that Huawei has close ties to the Chinese state.  For example, back in July 2018,  espionage chiefs from Australia, Canada, New Zealand, the U.K. and the U.S. (the so-called ‘Five-Eyes’), agreed at a meeting to contain the global growth of Chinese telecoms company Huawei (the world’s biggest producer of telecoms equipment) because of the threat that it could be using its phone network equipment to spy for China.  This led to the US, Australia, and New Zealand barring Huawei Technologies Ltd. (with Japan more or less joining the ban) as a supplier for fifth-generation networks.

– The detention of Meng Wanzhou, the chief financial officer of Huawei, in Vancouver at the request of U.S. authorities in 2018 for allegedly violating US sanctions on Iran.

– An apparent ongoing US trade war and war of words with China, which has been exacerbated by President Trump’s assertions that COVID-19, which he described as “Kung flu” at a recent Tulsa rally, originated in China.

– Back in January 2019, Apple’s CEO, Tim Cook, issued a revenue warning for this quarter to investors, pointing to challenges in China as being one of the main downward driving forces. The challenges included stiff competition from Huawei, Xiaomi, and Oppo in China.

– The banning by the Trump administration since May 2019 of US companies working with Huawei.

Meeting

The reported recent meeting between the Trump administration and networking company Cisco was allegedly to discuss the possible acquisition of Ericsson and Nokia, and any possible matters relating to tax breaks and financing for those companies.

This meeting is reported to have taken place following the cancellation, due to the COVID crisis, of a meeting about 5G that was due to have taken place in April and which may have included the likes of Nokia, Ericsson, Dell, Intel, Microsoft and Samsung.

What Does This Mean For Your Business?

Meetings with technology companies are not exceptional but it is clear that Huawei, its alleged links with the Chinese state, wider issues with China in general, and how the US government can help US tech companies compete and maintain national security are still big issues on the agenda, despite the ravages of COVID-19.

In the UK, the government and security commentators have also voiced concerns about the prospect of Huawei being involved in the 5G network and a decision on the matter is due to be announced within the next fortnight.  Huawei has said the US sanctions are “not about security, but about market position” and China’s ambassador to London has said that banning Huawei from the UK’s 5G infrastructure would send a “very bad message” to Chinese companies. The UK is currently involved in another very public argument with China over a possible 3 million Hong Kong residents being offered a path to UK citizenship.

For UK businesses, however, it’s more of a case of wondering how soon the UK will be able to offer reliable 5G at the right price across most of the country so that UK businesses are not at a competitive disadvantage with overseas businesses.