New Windows File Recovery Tool Resurrects Deleted Files

Microsoft’s own Windows File Recovery tool is a command-line app that allows users to bring back a variety of file types that have been mistakenly deleted, lost when a drive was formatted, or become corrupted.

File Recovery

Microsoft’s File Recovery app, which is free from the Microsoft Store and requires Windows 10 build 19041 or later, is able to recover lost files that have been deleted from a local storage device (including internal drives, external drives and USB devices) and can’t be restored from the Recycle Bin.

Situations

The File Recovery app is useful in a variety of situations beyond the accidental deletion of a file, including where a user has wiped a hard drive clean or needs to recover corrupted data files.

Where recovery of valuable personal files/personal data is concerned, for example, the app can help recover photos, documents, videos and more. This could include recovering files from a camera or SD card using the app’s ‘Signature’ mode or recovering files from a USB drive.
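As an indication of how the command-line app works, the examples below are a hedged sketch based on the switches documented for the tool’s initial release (exact syntax may vary by version, and the drive letters, username and file names are placeholders). Note that the recovery destination must be on a different drive from the source:

    winfr C: E: /n \Users\<username>\Documents\QuarterlyStatement.docx

    winfr C: E: /x /y:JPEG,PNG

The first command uses the default mode to recover a named document from the C: drive to a recovery folder on the E: drive; the second uses ‘Signature’ mode to recover JPEG and PNG photos, e.g. from a camera or SD card.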

File Systems

The tool is designed primarily for the default Windows file system, NTFS (‘New Technology File System’), as used on computers’ internal drives (HDD, SSD), external hard drives, and flash or USB drives larger than 4GB. The tool also supports ReFS (Windows Server and Windows Pro for Workstations), and FAT and exFAT (SD cards and flash or USB drives smaller than 4GB).

File Types

The types of files that the tool can recover include ASF (wma, wmv, asf), JPEG (jpg, jpeg, jpe, jif, jfif, jfi), MP3, MPEG (mpeg, mp4, mpg, m4a, m4v, m4b, m4r, mov, 3gp, qt), PDF, PNG and ZIP.

When All Else Fails

The File Recovery app, therefore, provides something to turn to when a simple trip to the Recycle Bin is not possible and the situation is more challenging.

Previous Versions

It should be noted that Microsoft’s ‘Previous Versions’ feature in Windows 10 also allows for the recovery of deleted documents, but it relies on File History, which is disabled by default and must be enabled first.

What Does This Mean For Your Business?

Business and personal data files are valuable and should be protected and securely backed up as part of security and privacy procedures. However, accidental deletion or corruption of important files can still occur, and this app provides a reassuring way to ensure that valuable files are not lost forever. The fact that the app supports a wide variety of popular file types used by businesses, and that it’s free, could make it a handy little lifesaver.

Featured Article – Facial Recognition Backlash

The recent killing of George Floyd by U.S. police appears to have been the catalyst for a backlash by tech companies such as Amazon and Microsoft, which are banning police use of their facial recognition software until more regulations are in place.

Problems

Whilst facial recognition technology has potential benefits in quickly identifying the perpetrators of crimes and as a source of evidence, privacy organisations argue that facial recognition technology (FRT) systems infringe privacy rights. Also, the deployment of the technology is thought by many to be too far ahead of the introduction of regulations to control its use, and there is evidence that systems still contain flaws and bias that could lead to wrongful arrest.

For example, in the UK:

– In December 2018, Elizabeth Denham, the UK’s Information Commissioner, launched a formal investigation into how police forces used FRT after high failure rates, misidentifications and worries about legality, bias, and privacy. This stemmed from the trial of ‘real-time’ facial recognition technology by South Wales and Gwent Police forces on the day of the Champions League final in Cardiff in June 2017, which was criticised for costing £177,000 yet resulting in only one arrest, of a local man, which was unconnected to the event.

– Trials of FRT at the 2016 and 2017 Notting Hill Carnivals led to the Police facing criticism that FRT was ineffective, racially discriminatory, and confused men with women.

– In September 2018 a letter, written by Big Brother Watch (a privacy campaign group) and signed by more than 18 politicians, 25 campaign groups, and numerous academics and barristers, highlighted concerns that facial recognition is being adopted in the UK before it has been properly scrutinised.

– In September 2019 it was revealed that the owners of King’s Cross Estate had been using FRT without telling the public, and with London’s Metropolitan Police Service supplying the images for a database.

– A recently published letter from London Assembly members Caroline Pidgeon MBE AM and Sian Berry AM to Metropolitan Police commissioner Cressida Dick asked whether FRT could be withdrawn during the COVID-19 pandemic on the grounds that it has been shown to be generally inaccurate and still raises questions about civil liberties. The letter cited the first two deployments of live facial recognition (LFR) this year, in which more than 13,000 faces were scanned, only six individuals were stopped, and five of those six were misidentified and incorrectly stopped by the police; of the eight people who triggered a ‘system alert’, seven were incorrectly identified. Concerns have also been raised about how the already questionable accuracy of FRT could be challenged further by people wearing face masks to curb the spread of COVID-19.

In the EU:

Back in January, the European Commission considered a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use are put in place.

In the U.S.

In 2018, a report by the American Civil Liberties Union (ACLU) found that Amazon’s Rekognition software was racially biased after a trial in which it misidentified 28 black members of Congress.

In December 2019, a US report on tests by the National Institute of Standards and Technology (NIST) of 189 algorithms from 99 developers found that facial recognition technology was less accurate at identifying African-American and Asian faces, and was particularly prone to misidentifying African-American females.

Historic Worries by Tech Companies

Big tech companies such as Amazon (with ‘Rekognition’), Microsoft and IBM supply facial recognition software, but some have declined to sell it to police departments pending regulation, and most have voiced their own concerns for some years. For example, back in 2018, Microsoft said on its blog that “Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses”.

Temporary Bans and Distancing

The concerns about issues such as racial bias, mistaken identification, and how police may use FRT in an environment that may not be sufficiently regulated have been brought to a head with the killing of George Floyd and the protests and media coverage that followed.

With big tech companies keen to maintain an ethical and socially responsible public profile, to follow up on their previously stated concerns about flaws in FRT systems and the lack of regulation, and to distance themselves from police racism/racial profiling or any connection to it (e.g. by supplying FRT software), four big tech companies have announced the following:

– Amazon has announced that it is implementing a one-year moratorium on police use of its FRT in order to give Congress enough time to implement appropriate rules. The company stressed that it has advocated stronger government regulations to govern the ethical use of facial recognition, and that even though it is banning police use of its FRT, it is still happy for organisations such as Thorn, the International Centre for Missing and Exploited Children, and Marinus Analytics to use ‘Rekognition’ to help rescue human trafficking victims and reunite missing children with their families.

– After praising progress being made in the recent passing of “landmark facial recognition legislation” by Washington Governor Jay Inslee, Microsoft has now announced that it will not sell its FRT to police departments until there is a federal law (grounded in human rights) to regulate its use. Microsoft has also publicly given its backing to legislation in California stopping police body cameras from incorporating FRT.

– IBM’s CEO, Arvind Krishna, has sent a letter to the U.S. Congress with policy proposals to advance racial equality, stating that IBM will no longer offer its general-purpose facial recognition or analysis software. The letter said that IBM opposes and will not condone the use of facial recognition technology, including that offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with its “values and Principles of Trust and Transparency”. The company says that now is the “time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

– Google has also distanced itself from FRT with Timnit Gebru, leader of Google’s ethical artificial intelligence team, commenting in the media about why she thinks that facial recognition is too dangerous to be used for law enforcement purposes at the current time.

Looking Forward

Clearly, big tech companies that have been at the forefront of new technologies still in the early stages of trials and deployment face a difficult public balancing act when the benefits of those technologies are overshadowed by their flaws, or by how the agencies that purchase and use them behave. Tech companies such as Google, Amazon, Microsoft, IBM and others must protect their brands and their public values, and need to reflect the views of right-thinking people. The moves by these companies may push forward the introduction of regulations, which is likely to be beneficial, and the hope among users of these companies’ services is that, as the companies assure us, genuine ethical and social justice beliefs are the key drivers behind these announcements.

NatWest’s Extra Layer of Behavioural Biometrics Security

In partnership with Visa, NatWest has added an invisible layer of behavioural biometrics as part of an authentication process that will enable compliance with a new EU regulation.

Which Regulation?

The Strong Customer Authentication (SCA) regulation, which is part of the EU’s Payment Services Directive 2 (PSD2), comes into force in 2021. The SCA regulation is intended to improve the security of payments and limit fraud by making sure that whoever requests access to a person’s account, or tries to make a payment, is the account holder or someone to whom the account holder has given consent.

The new rules under PSD2 mean that online payments of more than €50 will need two methods of authentication from the person making the payment, e.g. a password, a fingerprint (biometric) or a phone number. This means that online customers will no longer be able to check out using just a credit or debit card but will also need an additional form of identification.

For normal ‘card present’ situations (not online) contactless will still be OK for ‘low value’ transactions of less than €50 at point-of-sale and Chip and PIN will still be suitable for values above €50.
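To make the rule concrete, the fragment below is a deliberately simplified, hypothetical Python sketch of the threshold logic described above; it is not any bank’s or card scheme’s actual implementation, and the category names simply reflect PSD2’s three types of authentication element (knowledge, possession and inherence):

    # Simplified, hypothetical sketch of the SCA rule described above.
    # PSD2 recognises three categories of authentication element:
    # knowledge (e.g. a password), possession (e.g. a registered phone),
    # and inherence (e.g. a fingerprint).
    VALID_CATEGORIES = {"knowledge", "possession", "inherence"}

    def sca_satisfied(amount_eur: float, online: bool, elements: set) -> bool:
        if not online:
            return True  # in-store: contactless under €50, Chip and PIN above
        if amount_eur <= 50:
            return True  # low-value online payments, as described above
        # Online payments over €50 need two independent element categories
        return len(elements & VALID_CATEGORIES) >= 2

    print(sca_satisfied(120.0, True, {"knowledge", "inherence"}))  # True
    print(sca_satisfied(120.0, True, {"knowledge"}))               # False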

Behavioural Biometrics

Since biometrics can be accepted as one of the methods of authentication under the new rules, NatWest has been working with Visa on behavioural biometrics technology. This technology uses uniquely identifying, measurable patterns in human activity as a means of authentication; in this case, it will involve monitoring how an individual interacts with a computing device when buying online.

The kinds of patterns that the technology can monitor and measure as a means of verification include keystroke dynamics, voice ID, mouse use characteristics, signature analysis and more.  For example, behavioural biometric technology could be used to recognise the way a person types e.g. the weight or length of key presses.
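As a purely illustrative sketch of how keystroke dynamics might work (this is not NatWest’s or Visa’s actual system; the function names and threshold are invented), typing rhythm can be reduced to ‘dwell times’, i.e. how long each key is held down, and compared against a stored profile:

    import statistics

    # Keystrokes as (key_down_ms, key_up_ms) pairs; dwell time is how
    # long each key was held down.
    def dwell_times(keystrokes):
        return [up - down for down, up in keystrokes]

    # Compare the session's average dwell time with the user's stored
    # profile; a large deviation suggests a different typist.
    def matches_profile(keystrokes, profile_mean_ms, tolerance_ms=25):
        observed = statistics.mean(dwell_times(keystrokes))
        return abs(observed - profile_mean_ms) <= tolerance_ms

    session = [(0, 90), (200, 298), (400, 505)]  # dwell: 90, 98, 105
    print(matches_profile(session, profile_mean_ms=95))  # True

A production system would, of course, combine many more signals (flight times between keys, mouse movement, device orientation) and score them statistically rather than using a single average.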

NatWest and Visa

NatWest is already reported to have completed a successful trial of behavioural biometrics. Visa is reported to have already been using behavioural biometrics for fraud prevention. The work between both organisations will see the technology being used as a second layer of security that is compliant with PSD2 and SCA.

What Does This Mean For Your Business?

Businesses and banks would both like to find a way for customers to pay that is as frictionless as possible, and yet highly secure.  Behavioural biometrics can achieve this because it works in the background and does not ask a user to do anything, thereby reducing end-user friction and making it easier and faster for businesses at the checkout point.

Due to COVID-19, however, in the UK, the FCA has announced that to help merchants who have been severely affected by the crisis, the enforcement of SCA has been delayed until 14th September 2021. Many businesses are currently struggling to make sure they survive, and although it’s good news that an extra form of compliant, frictionless authentication looks likely to be available in time via NatWest (maybe others to follow), the focus, for the time being, is likely to be keeping the lights on.

Facial Recognition, Photo Identity and Privacy Protection

With phone cameras and facial recognition-equipped surveillance cameras seemingly everywhere, and the world entering a new phase of social change, many people are looking at simple steps they can take to retain and protect their privacy rights.

Faces

As enshrined in data protection laws such as GDPR, and with biometrics now being used widely, our faces are part of the personal data that we need to protect. Concerns such as those expressed by the Information Commissioner, Elizabeth Denham, that police facial recognition systems have issues including accuracy, are a reason why many are looking at ways to protect themselves where necessary.

Public trust in facial recognition systems also still has some way to go as the technology progresses from a relatively early stage. For example, a recent survey released by Monash University in Australia showed that half of Australians believe that their privacy is being invaded by the presence of facial recognition technology in public spaces. In the U.S., government researchers at the National Institute of Standards and Technology (NIST) said in May 2020 that not enough is being done to engender trust in decisions made by facial recognition and biometrics systems, and in Europe in January, the European Commission considered a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use could be put in place.

Protest Example

In a democracy such as the UK, protests are allowed to take place over any number of issues, and the recent protests over the killing of George Floyd and in support of Black Lives Matter have brought into focus how to protect personal data and identity while exercising democratic rights.

For example, those wishing to obscure faces in protest photos that they share often use software to paint over faces or apply a mosaic blur, because these techniques cannot easily be reversed, rather than a simple blur effect, which authorities may be able to de-blur using new neural networks.

This process of blocking out faces in photos can be carried out using the built-in photo editor on a smartphone.  For example:

– On iOS, open Photos, tap on the photo, select Edit (top right), tap the three dots to access Mark-up and use solid circles or squares to block out faces.

– On Android (using the native Mark-up tool), in the Photos app, select the photo, tap on Edit (bottom, second left), select Mark-up (bottom, second right), and block out faces e.g. using the Pen tool.

Removing Metadata

Removing a photo’s metadata (data stored in phone photos, e.g. the type of device and camera, date, time and location) can be achieved by taking screenshots of the photos and making sure that there are no other identifying features in the screenshot.
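For those comfortable with a little code, metadata can also be stripped programmatically. The sketch below shows one possible approach, assuming the Python Pillow imaging library and a hypothetical file name; it copies only the pixel data into a fresh image, so the saved copy carries none of the original EXIF metadata:

    from PIL import Image  # Pillow imaging library

    # Copy only the pixel data into a new image; EXIF metadata (device,
    # camera, date, time, GPS location) is not carried across.
    original = Image.open("protest_photo.jpg")
    clean = Image.new(original.mode, original.size)
    clean.putdata(list(original.getdata()))
    clean.save("protest_photo_clean.jpg")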

Masks and Facial Recognition

Tech and news commentators have noted recently how mask-wearing during the COVID-19 pandemic has proven to be a challenge for facial recognition systems, although it has also been suggested that AI facial recognition systems have now had the chance to receive more ‘training’ in correctly identifying mask-wearing people.

What Does This Mean For Your Business?

Facial recognition (if used responsibly as intended) can help to fight crime in towns and city centres, thereby helping the mainly retail businesses that operate there, although there are still questions about its accuracy and its impact on our privacy and civil liberties.

Where sharing photos and privacy worries are concerned, smartphones have built-in apps that allow faces to be blocked out. On Facebook, for example, further steps you can take to retain your privacy and security include not using a close-up/clear photo of your face as a public profile picture, not revealing too much about where photos were taken, and not geotagging or posting photos that reveal your address or show valuable items at your home/where you keep valuables. Photos taken in the workplace, particularly those posted on websites and social media, should also be vetted to ensure that there are no implications for physical security and that the staff featured are happy for the photo to be shared.

Featured Article – A Look At Cookies

Cookies perform functions and provide information that helps website users, businesses, publishers, and advertisers. This article looks at what cookies are, what they do, and the legislation that affects how they are used.

What Are Cookies?

Cookies are text files sent by the website you are visiting and stored by your browser as a record of your activity on the site. Although most websites use cookies, cookies do not harm devices, and they do not tell websites who a user is or gather personal details about website visitors.

Current EU legislation states that all websites must let people know when cookies are in use. Website visitors should also be given the option to accept cookies or not and should be allowed to browse a website and experience the functionality even if they choose not to accept the cookies.

What Are Cookies For?

Cookies are supposed to help users access a website more quickly and easily by telling the website that a visitor has been there before. For example, cookies can store information that allows a repeat visitor to access a website without logging in, or to fill in a form (autofill) without having to type all the details in. Cookies can also provide information to help with website shops and analytics, and can help advertisers.

Types of Cookies

There are several different types of website cookies. These include:

– First-Party Cookies. These are set by the website itself and are used for gathering analytics data, e.g. the number of visitors, page views, pages visited, and sessions. These cookies provide data to publishers and advertisers for ad targeting.

– Third-Party Cookies. These cookies are used when other, third-party elements, e.g. chatbots or social plugins, have been added to a website. These cookies, set by third-party domains, can track users and save data that can be used in ad targeting and behavioural advertising.

– Session cookies, as the name suggests, are temporary and short-lived, expiring when, or shortly after, the user closes the web browser. They are commonly used by e-commerce websites to remember the items that have been placed in the shopping cart, to keep users logged in, and to record user sessions to help with analytics.

– Persistent Cookies. These cookies have a built-in expiration date but can stay on a user’s browser for years (or until the user manually deletes them), allowing a site to track a user and their interaction with the website over time.

– Secure Cookies. Websites served over HTTPS can set secure cookies, which are only ever transmitted over encrypted connections and are used on the payment/checkout pages of e-commerce websites and online banking websites (the header differences between cookie types are illustrated below).
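To illustrate the differences between these types, the hedged sketch below uses standard HTTP Set-Cookie response headers with hypothetical cookie names; a session cookie simply omits an expiry, a persistent cookie declares one, and a secure cookie adds the Secure flag:

    Set-Cookie: sessionid=abc123; Path=/; HttpOnly

    Set-Cookie: preferences=dark_mode; Path=/; Max-Age=31536000

    Set-Cookie: checkout_token=xyz789; Secure; HttpOnly; SameSite=Strict

The first cookie disappears when the browser session ends; the second persists for a year (31,536,000 seconds); the third is only ever sent over encrypted HTTPS connections.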

What Is The ‘Cookie Law’?

The so-called ‘cookie law’, which began life as an EU Directive, is privacy legislation that requires websites to ask visitors for consent to store or retrieve information on a computer, smartphone, or tablet.

The Cookie Law was widely adopted in 2011, became an update to the UK’s Privacy and Electronic Communications Regulations, and was designed to make people aware of how the information about them is collected online and to give them the opportunity to say yes or no to it. 

The introduction of the General Data Protection Regulation (GDPR) in May 2018, with its focus on ensuring that businesses are transparent and protect individual privacy rights, means that businesses must be able to prove clear and affirmative consent to process personal data, and that people must be able to opt in rather than opt out. These requirements have clear implications for cookies.

GDPR Cookie Consent

GDPR requires consent to be gathered from data subjects, and the Court of Justice of the European Union has ruled that this consent must be explicit. This means that a website’s users must be presented with an explicit consent banner that cannot have pre-checked boxes giving consent to categories of cookies, except for those deemed strictly necessary. Websites using cookies other than those strictly necessary for basic function must present a method for obtaining users’ cookie consent prior to any collection or processing.

Website visitors must also be able to withdraw consent that they have previously given, in an accessible way, if they choose to. Also, the data controller must delete an individual’s personal data if that data is no longer necessary for the original stated purpose.

GDPR Cookie Compliance

One of the key ways in which a business can remain GDPR compliant is to obtain prior consent wherever it provides services to, or collects personal data about, persons in the EU. This means being very clear and explicit in describing the extent and purpose of the data processing, in language that is easy for the user to understand, before gathering any personal data from that user. Website users must be able to find out at any time what type of personal data is being collected about them on a website, and it should be easy for users to withdraw consent that has previously been given.

For this to happen, businesses and organisations need to know what kinds of cookies are used by their website and why. This information can be addressed in a cookie policy.

CCPA

Businesses and organisations worldwide that handle the personal information of California residents will also need to ensure that their data processing (including cookie use) is compliant with the new California Consumer Privacy Act (CCPA).

A Cookie Policy

Companies and organisations are legally required under GDPR (and CCPA) to make a cookie policy available on their website to users. This cookie policy, which can be included as part of a website’s privacy policy, should be a declaration to users about what cookies are active on the website, what user data is being tracked by those cookies, for what purpose, and where in the world this data is sent.  This cookie policy must also give information about how users can opt-out of the cookies or change their settings regarding the cookies on the website.

Awareness and Challenges

Strengthening of data protection laws in recent years has, therefore, forced businesses to become very familiar with how they manage data in order to be legally compliant. This has led to a much greater awareness of cookies and their use; indeed, for first-time visitors to a website, a cookie consent request is often the first thing they encounter.

Also, changes that have led many browsers to block third-party cookies have presented marketing and revenue challenges to publishers and advertisers.

Beware Fake Contact Tracer Messages

Just as you thought that cybercriminals had exploited every aspect of the pandemic with phishing, vishing, smishing and more, there are now warnings to beware of fake contact tracer messages.

Contact Tracing in the UK

Here in the UK, NHS contact tracers are now contacting people who are believed to have been in close contact with those who have tested positive for COVID-19. The system works by those who test positive filling in a form (while they are well enough to do so) detailing where they have been, when, and who they have been in contact with. From there, an NHS tracer contacts (via phone or text) those believed to have been in close contact and asks them to self-isolate for 14 days, the period within which symptoms of an infected person should appear. Close contact is defined as face-to-face contact or close proximity for more than 15 minutes.

This contact tracing service has been put in place ahead of the contact-tracing app, which is designed to automate the same process but has not yet been released.

Scam Messages

One type of scam message that has already been observed by many people was highlighted by Stuart Fuller, Chairman of Lewes Football Club. On his Twitter page, Mr Fuller shared a screenshot of a text message from the fraudsters and warned that such messages are not genuine and that clicking on the link in the message would lead to a phishing page.

The screenshot showed a text message which had a recommendation for the recipient to self-isolate because they had been in contact with someone who had tested positive for or showed symptoms of COVID-19.  The message included a link to follow for the recipient to get more information.

How?

Ethical hacker Jake Davis has highlighted how the problem with the UK government using SMS during COVID-19 is that people are more vulnerable than ever to fake information, and SMS messages can easily be made to look as though they come from the government. In a blog post, Mr Davis says that making an SMS message appear to come from the government is as simple as setting the sender ID to “UK_Gov” instead of a phone number.

What Does This Mean For Your Business?

This and other similar types of smishing and phishing attacks are predicted to increase this year. Their success and prevalence is a sign of how vulnerable the COVID-19 outbreak makes people feel, and of how people’s searches for, and emotional reactions to, information about health and financial matters are playing into the hands of criminals who are happy to exploit anyone. Companies and organisations need to educate their staff about the threat, while businesses and individuals need to be vigilant and cautious about any unusual SMS messages or unsolicited phone calls, particularly those that offer rewards, create panic, warn of unpleasant consequences, or apply pressure to act. Bear in mind that it is relatively easy to fake the source of a text message and, although receiving such a message may at first be a shock, it is worth checking that a supposed government/NHS SMS is genuine before thinking about clicking on any links.

Featured Article – ‘Vishing’ and How to Guard Against It

‘Vishing’, or ‘phishing over the phone’, is on the rise. In this article, we look at vishing techniques and examples, and how to guard against them.

Vishing

The word ‘vishing’ is a combination of ‘voice’ and ‘phishing’ and describes the criminal process of using internet telephone service (VoIP) calls to deceive victims into divulging personal and payment data.

Vishing scams targeting homes often use recorded voice messages, e.g. claiming to be from banks and government agencies, to get victims to respond in the first instance.

The technology used by scammers is now such that voice simulation may even be used in more sophisticated attacks on big businesses. 

Vishing Vs Phishing

Phishing attacks can take different forms and can employ different combinations of channels, such as emails, bogus websites, and phone calls. Vishing focuses on using VoIP to complete the scam, and this can include ‘spoofing’ the phone number of a real business or company to add the appearance of authenticity.

Smishing

Smishing uses SMS text messages rather than phone calls to deceive victims into responding.

Selection

Victims are selected using large call lists where little or nothing is known about the target (‘shotgun’ attacks), or where some information is already known from sources such as website data breaches or data gathered through phishing and other social engineering attacks. Vishing attacks where some important data is already known by the attacker are referred to as ‘spear vishing’ attacks.

Motivation

The motivation for attackers is, of course, easy money, or data that leads to the acquisition of more money and perhaps to further attacks on other sites that hold a person’s financial and personal data. In the U.S., for example, if attackers already have the first few digits of a Social Security Number, obtaining the remaining digits can give them access to many other sources of funds and data.

The lures presented by attackers to make targets part with their data include the promise of bogus rewards, e.g. prizes and amazing limited-time offers, the need to avoid a negative outcome, and the desire to be helpful or contribute positively to society, e.g. scams whereby a victim is asked to help police/fraud investigations.

In most cases, fraudsters use emotional manipulation, deception techniques and the illusion of limited time (act now) as ways to gain access to personal data. The internet telephone service (VoIP) calls also provide them with anonymity and flexibility that they need to target their attacks.

The Scale of the Problem

The scale of the vishing threat is now huge.  For example:

– First Orion’s 2018 Scam Call Trends and Projections Report showed that nearly 30% of incoming mobile calls were spam calls.

– The “Quarterly Threat Intelligence Report: Risk and Resilience Insights” from Mimecast researchers warned that in 2020 “voicemail will feature more prominently” in attacks, and predicted that vishing was likely to become a daily occurrence during the year.

– Proofpoint’s 2020 State of the Phish report (a worldwide survey) found that only 25% of workers could correctly define the term ‘vishing’.

Examples of Vishing

Popular examples of vishing calls include:

– Calls from banks or credit card companies with messages asking the victim to call a certain number to reset their password.

– Unsolicited offers for credit and loans.

– Exaggerated (almost too good to be true) investment opportunities.

– Bogus charitable requests for urgent causes and recent disasters.

– Calls about extended car warranties.

– Calls claiming to be from fraud officers to (ironically) help people who have recently fallen victim to scams and attacks, asking people for their help in operations to catch fraudsters e.g. by transferring funds to a specified account.

– Calls claiming to be from government agencies e.g. tax office calls offering rebates or warning of an investigation.

– Tech support calls to fix bogus problems with computers.  This method can also use popup windows on a victim’s computer, often planted by malware, to issue a bogus warning from the OS about a technical problem.

– Travel and holiday company calls relating to (bogus) holiday bookings and cancellations.

– Calls relating to insurance e.g. for weddings, holidays, and flight cancellations.

– ‘One ring and cut’ (‘wangiri’ in Japanese) calls, where criminals trick victims into calling premium-rate numbers. For example, the fraudster’s system calls a large number of random phone numbers, each ringing once. If someone calls back (replying to the missed call), they are directed to a premium-rate number.

Real Examples

– In May 2018, in the North-East, vishing calls over a three-week period resulted in the theft of £1 million by fraudsters pretending to be from their victims’ banks, saying that they were investigating fraudulent activity by staff within the organisation and asking victims to move large sums of money into foreign accounts for safe-keeping. This was coupled with a request that the victim did not report the call for fear of jeopardising the investigation.

– In September 2019, AI voice simulation software was used to impersonate the voice of a UK-based energy company’s CEO, thereby tricking the company into transferring £200,000 into the fraudsters’ account.

– In October 2019, Police in Derbyshire warned that scammers had called victims claiming to be “tech support representatives” from Microsoft, telling people there was something wrong with their computer and offering to fix the problem by remote access.

Government Fights Back

Earlier this month (May 2020), Her Majesty’s Revenue and Customs (HMRC) asked UK Internet Service Providers (ISPs) to take down 292 websites that had been exploiting the coronavirus outbreak since the national lockdown began on March 23.

How To Guard Against Vishing

Ways that you and your business can guard against vishing attacks include:

– Don’t trust caller ID to be 100 per cent accurate; numbers can be faked.

– Don’t answer phone calls from unknown numbers, block the numbers of spam callers, register your phone number with the Telephone Preference Service (TPS), and report any suspicious spam calls to the Information Commissioner’s Office (ICO).

– Beware of unsolicited calls claiming to be from banks, credit card companies or government agencies, particularly those asking you to call certain numbers and/or change password details. The real organisations and agencies would not make calls of this kind.

– Include phishing, vishing, smishing and other variants in your security awareness training for employees.

– Avoid paying by gift card or wire/direct money transfer, and make sure that there is a policy and process in place for money transfers, which all employees must adhere to even if the request appears to come from someone within the company.

– Don’t give in to pressure; remember that you can ditch any call at any time, and give yourself the option of looking up the number of the company/agency/organisation that claims to be calling you and calling them back yourself to check.

Looking Ahead

The predictions from security researchers and commentators are that vishing, along with phishing and smishing, is set to increase this year, and its success could be helped by the COVID-19 outbreak as people wait and search for information about financial and health matters, details of government payments and help, and details of cancellations, e.g. holidays and flights. Companies and organisations need to educate their staff about the threat, while businesses and individuals need to be vigilant and cautious about any unsolicited phone calls, particularly those that offer rewards, create panic or warn of dire consequences, and those that apply pressure.

eBay Port Scanning Causes Alarm

Reports that eBay has been running port scans against the computers of visitors to the platform have caused alarm over potential security issues.

Port Scans

Port scanning is something that many people associate with cyber-attacks and penetration (‘pen’) testing. Port scanning scripts determine which ports a system may be listening on by sending packets to a user’s machine while varying the destination port. This can help an attacker to determine which services may be running on the system and, therefore, get an idea of which operating system the target user has.

Port scanning can also be used to counter the activities of cybercriminals by scanning for remote-control access ports to detect any criminals that may be logged into a user’s computer in order to impersonate them on various platforms/sites e.g. to make fraudulent purchases.
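For illustration only, the Python sketch below shows the basic mechanic of a port scan of the kind described. It is a minimal, hypothetical example (not eBay’s actual script, which reportedly ran inside the browser), probing a few ports commonly associated with remote-access tools:

    import socket

    # Ports commonly associated with remote-access/remote-support tools
    REMOTE_ACCESS_PORTS = {3389: "Windows Remote Desktop", 5900: "VNC", 5938: "TeamViewer"}

    def scan(host="127.0.0.1", timeout=0.5):
        open_ports = []
        for port, service in REMOTE_ACCESS_PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                    open_ports.append((port, service))
        return open_ports

    print(scan())  # e.g. [(5900, 'VNC')] if a VNC server is listening locally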

eBay

According to US-based security researcher Charlie Belmer, who recorded his observations on his nullsweep.com blog, eBay appeared to be looking for VNC services running on the visitor’s machine (the same behaviour that has been reported on bank websites). The ports scanned by eBay are generally used for remote access and remote support tools, e.g. Windows Remote Desktop, VNC, TeamViewer and others.

Mr Belmer has listed the 14 different ports he observed as being scanned by eBay and has concluded that the port scanning he observed being run from eBay was “clearly malicious behaviour and may fall on the wrong side of the law”.

Advice

On his blog, Mr Belmer urges anyone else who observes this port scanning behaviour to “complain to the institution performing the scans, and install extensions that attempt to block this kind of phenomenon in your browser, generally by preventing these types of scripts from loading in the first place”.

Maybe Just Fighting Fraud

Bearing in mind that there were reports four years ago of cybercriminals taking over users’ computers via TeamViewer to make fraudulent purchases on eBay, it seems likely that the port scanning observed is simply part of eBay’s efforts to fight fraud by trying to detect whether a compromised computer is being used to make fraudulent purchases on its platform.

What Does This Mean For Your Business?

Being an auction site, eBay clearly must take measures to ensure that fraudulent purchases cannot be made and to guard against problems similar to those experienced with TeamViewer four years ago. It is understandable, however, that a practice often associated with criminal activity and penetration testing may cause alarm among those familiar with the more technical aspects of internet security. Although the matter has been reported by Mr Belmer on his blog, it is not yet clear what action or statements, if any, are likely to come from eBay.

Featured Article – Does Your Phone Have A Virus?

Phones are essentially powerful mobile computers that contain vast amounts of valuable personal information. This article looks at how to tell if your phone has a virus, what to do if you think it has, and how to protect your phone.

Virus or Malware

In security terms, malware is the general term for malicious software, while a virus is a specific type of malware that copies itself onto your device.

Types of Mobile Malware

There are many different types of malware that can infect mobile phones, including:

– Banking malware, much of which consists of Trojans designed to infiltrate devices and collect bank logins and passwords.

– Spyware, used to steal a variety of personal data.

– Ransomware, which locks the phone until the user pays a ransom.

– Mobile Adware, whereby “malvertising” code can infect a device, forcing it to download specific adware types which can then allow attackers to steal personal data.

– Crypto-mining apps, which use the victim’s device to mine crypto-currency. For example, in February 2019, security researchers at Symantec claimed to have discovered 8 crypto-mining apps in the Microsoft Store.

– MMS Malware, whereby attackers can send a text message embedded with malware to any mobile number.

– SMS Trojans, which can send SMS messages to premium-rate numbers across the world thereby landing the user with an exceptionally large phone bill.

Android Vulnerable To Malware From Malicious Apps

Android phones are known to be vulnerable to malicious software that usually arrives via a malicious app that the user has downloaded, sometimes via the Google Play Store, or via an app from a third-party app store. A recent Nokia Threat Intelligence report showed that Android devices are nearly fifty times more likely to be infected by malware than Apple devices.

For example, back in September 2019, security researcher Aleksejs Kuprins of the CSIS cybersecurity services company discovered 24 apps that had been available for download in the Google Play Store containing the ‘Joker’ spyware and premium-subscription bot malware. Also, in January 2019, security researchers discovered 36 fake and malicious Android apps, masquerading as security tools in the trusted Google Play Store, that could harvest data and track a victim’s location.

Android phones are also vulnerable to malware and viruses if users download message attachments from an email or SMS, download files to the phone from the internet, or connect the phone to another device.

Why?

Reasons why Google’s open-source Android is vulnerable to malware include:

– The complicated processes involved in issuing security updates mean that important software security updates often get delayed. This is because, unlike Apple iPhones, there are thousands of different Android devices made by hundreds of different manufacturers, each with a range of hardware quality and capabilities.

– The open-source nature of Android, which is also one of its strengths in terms of scope and flexibility, can also lead to more human error and potential security holes.

Apple iOS

Apple iPhones are generally thought to be at much less risk from viruses and malware because they have protection systems built in, which include:

– The need to go through the Apple App Store to download an app. Apple reviews each app for malicious code before it makes it into the store, thereby stopping an obvious method of infection.

– iOS “sandboxing” stops apps from touching data belonging to other apps or from touching the operating system, thereby protecting a user’s contacts and other personal data.

– The majority of iOS apps do not run as an administrator, thereby limiting their ability to do damage.

– Apple issues frequent updates to patch any known vulnerabilities, which everyone with a compatible device receives at the same time.

Still Targeted

Although the vast majority (97 per cent) of virus/malware attacks on phones affect Google’s Android OS, and viruses are rare on Apple iPhones due to the built-in security measures, iPhones are still targeted by cybercriminals, and vulnerabilities in the iOS platform do exist.

For example:

– Phishing attacks e.g. bogus pop-up ads are used to trick iPhone users into downloading malicious software.

– Back in August 2019, a Google Project Zero contributor reported discovering a set of hacked websites (found in February 2019) that were being used to infect iPhones with iOS malware, and had most likely been doing so over a two-year period.

Signs That Your Phone May Have a Virus

Some of the main signs that your phone may already have a virus/be infected by malicious software are:

– Unusual and/or unexpected charges on your phone bill e.g. additional texting charges.

– Your phone contacts reporting that they have received strange messages from you.

– The phone crashes regularly. 

– New/unexpected apps are present.

– Apps crash more often than usual.

– An increase in the number of invasive adverts on your phone (a sign of adware).

– Slowing down of the phone and poor performance.

– Large amounts of data being used, without an obvious cause.

– The battery life is noticeably reduced.

What Next?

If your phone is infected with a virus, take the following steps:

– Switch the phone to airplane mode to stop malicious apps from receiving and sending data.

– Check the most recently installed apps against the listed number of downloads (in the App Store and Google Play).  Low download numbers, low ratings and bad reviews may indicate the need to delete the app.

– Install anti-virus software and carry out a scan of your handset.

– You can also contact your phone’s service provider, or visit a high street store, if you think you have downloaded a malicious/suspect app.

iPhones

If you suspect that your iPhone may be infected:

– Check your apps and delete any unwanted ones.

– Clear the phone’s history and data, and restart.

– Consider installing mobile anti-virus software.

Prevention

Prevention is the best form of cure, and the steps you can take to ensure that your phone is both secure and not infected with a virus include:

– Using mobile security and antivirus scan apps.

– Only using trusted apps / trusted app sources.

– Checking the publisher of an app (and which other apps they have created), checking the number of installations and positive reviews before installing an app, and checking which permissions the app requests when you install it.

– Uninstalling old apps and turning off connections when not using them.

– Locking phones when they are not in use.

– Not ‘jailbreaking’ or ‘rooting’ a phone.

– Using 2-factor authentication.

– Using secure Wi-Fi and a VPN rather than free public Wi-Fi when out and about.

– Being careful with email security and hygiene e.g. monitor for phishing emails and not clicking on unknown/suspicious attachments and links.

– Being careful with security around texts, social media messages and ads.

App Developers

With apps being the source of many phone infections, there is an argument that mobile app developers, and those commissioning mobile apps, have a responsibility to ensure that security is built in from the ground up. This should mean making sure that all source code is secure and free of known bugs, that all data exchanged by the app is encrypted, that caution is exercised when using third-party libraries for code, and that only authorised APIs are used.

Also, developers should build in high levels of authentication, use tamper-detection technologies, use tokens instead of device identifiers to identify a session (a simple sketch of this follows), follow best cryptography practices, e.g. storing keys in secure containers, and conduct regular, thorough testing.
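On the specific point about session tokens, the snippet below is a minimal sketch using Python’s standard secrets module (the function name is illustrative) of generating an unpredictable, single-purpose token rather than reusing a permanent device identifier such as an IMEI or advertising ID:

    import secrets

    # An unpredictable, disposable session token; unlike a device
    # identifier, it reveals nothing about the handset and can be
    # revoked and reissued at any time.
    def new_session_token() -> str:
        return secrets.token_urlsafe(32)  # 32 random bytes, URL-safe base64

    print(new_session_token())  # different on every call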

Going Forward

If you train yourself to regard your phone as another mobile computer (one that probably holds even more personal data) that can be targeted by cybercriminals and needs protection, and you remain cautious with apps, emails, texts and adverts, then you are less likely to end up with a damaging virus/malware program on your phone.

Are Masks A Challenge To Facial Recognition Technology?

In addition to questions about the continued use of potentially unreliable and unregulated live facial recognition (LFR) technology, masks to protect against the spread of coronavirus may be presenting a further challenge to the technology.

Questions From London Assembly Members

A recently published letter from London Assembly members Caroline Pidgeon MBE AM and Sian Berry AM to Metropolitan Police commissioner Cressida Dick has asked whether LFR technology could be withdrawn during the COVID-19 pandemic on the grounds that it has been shown to be generally inaccurate and still raises questions about civil liberties.

Also, concerns are now being raised about how the already questionable accuracy of LFR could be challenged further by people wearing face masks to curb the spread of COVID-19.

Civil Liberties of Londoners

The two London Assembly members argue in the letter that a lack of laws, national guidelines, regulations and debate about LFR’s use could mean that stopping Londoners or visitors to London “incorrectly, without democratic public consent and without clear justification erodes our civil liberties”. The pair also said that this could further erode trust in the police, which has in any case been declining in recent years.

Inaccurate

The letter highlights concerns about the general inaccuracy of LFR. This is illustrated by the first two deployments of LFR this year, in which more than 13,000 faces were scanned, only six individuals were stopped, and five of those six were misidentified and incorrectly stopped by the police. Also, of the eight people who triggered a ‘system alert’, seven were incorrectly identified.

Other Concerns

Other concerns outlined by the pair in the letter about the continued deployment of LFR include worries about possible mission creep, the lack of transparency about which watchlists are being used, worries that LFR will be used operationally at protests, demonstrations or public events in future, e.g. the Notting Hill Carnival, and fears that the technology will continue to be used without clarity, accountability or full democratic consent.

Masks Are A Further Challenge

Many commentators from both sides of the facial recognition debate have raised concerns about how the wearing of face masks could affect the accuracy of facial recognition technology.

China and Russia

It has been reported that Chinese electronics manufacturer Hanwang has produced facial recognition technology that is 95% accurate in identifying the faces of people who are wearing masks.

Also, in Moscow, where the city’s many existing cameras have been deployed to help enforce the lockdown and identify those who don’t comply, systems have been able to identify people wearing masks.

France

In France, after the easing of lockdown restrictions, it has been reported that surveillance cameras will be used to monitor compliance with social distancing and mask-wearing. A recent trial in Cannes of French firm Datakalab’s surveillance software, which includes an automatic alert to city authorities and police for breaches of mask-wearing and social distancing rules, looks set to be rolled out to other French cities.

What Does This Mean For Your Business?

Facial recognition is another tool which, under normal circumstances (if used responsibly, as intended), could help to fight crime in towns and city centres, thereby helping the mainly retail businesses that operate there. The worry is that there are still general questions about the accuracy of LFR and its impact on our privacy and civil liberties, and that the COVID-19 pandemic could be used as an excuse to use it more, in a way that leads to mission creep. It does appear that in China and Russia, for example, even individuals wearing face masks can be identified by facial recognition camera systems, although many in the West regard these as states that exercise a great deal of control over the privacy and civil liberties of their populations, and may be alarmed at such systems being used in the UK. The pandemic, however, appears to be making states less worried about infringing civil liberties for the time being as they battle to control a virus that has devastated lives and economies, and technology will inevitably be one of the tools used in the fight against COVID-19.