AI COVID Cough Detector

MIT researchers have trained an AI model on cough recordings that can detect the tell-tale cough of early-stage COVID-19.

Identifies Cough Pattern

Those who catch COVID-19 but remain largely or completely asymptomatic (and therefore do not get a test) are proving to be a challenge in stopping the spread of the virus. With this in mind, and drawing on the knowledge that cough sounds can help identify multiple conditions, researchers have developed an AI model that can identify the distinctive pattern of a COVID-19 cough and thereby aid diagnosis.

Which Conditions?

The nature and sound of a cough have long been used to give information about many different conditions. Recently, research published in The Lancet, which named one of the AI COVID cough researchers (Mar Santamaria) as a contributor, used automated (AI) linguistic analysis to predict the future onset of Alzheimer’s.

How?

The research to identify the pattern of a COVID-19 cough used forced-cough cell phone recordings from more than 4,000 subjects to create a large cough dataset. This dataset was then used to train a machine-learning model built on an adapted AI speech-processing framework that could extract acoustic biomarkers, enabling it to tell different types of cough apart.
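The general recipe described above (turn each recording into a fixed-length vector of acoustic features, then train a binary classifier on those vectors) can be sketched as follows. This is a minimal illustration on synthetic signals, not the MIT model: the crude band-energy feature extractor, the frequencies and the choice of logistic regression are all assumptions for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def acoustic_features(waveform, n_bands=20):
    """Crude stand-in for acoustic biomarker extraction (real systems use
    richer features such as MFCCs): mean log-energy in n_bands frequency bands."""
    spectrum = np.abs(np.fft.rfft(waveform)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log1p([b.mean() for b in bands])

def fake_cough(positive):
    """Synthetic 'recording': positives carry slightly higher-frequency energy."""
    t = np.linspace(0, 1, 4000)
    freq = 900 if positive else 700   # illustrative assumption, not real biomarkers
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(t.size)

# Build a labelled dataset of feature vectors and train a binary classifier.
X = np.array([acoustic_features(fake_cough(i % 2 == 1)) for i in range(400)])
y = np.array([i % 2 for i in range(400)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

On these artificially separable signals the classifier scores near-perfectly; the point is only the shape of the pipeline, not the numbers.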

Results

When the AI model’s results were validated against official COVID-19 tests, the model achieved a COVID-19 sensitivity of 98.5 per cent with a specificity of 94.2 per cent; for asymptomatic subjects, sensitivity was 100 per cent with a specificity of 83.2 per cent. This research appears to show that it is possible for AI to pick out a COVID-19 cough from other types of cough and, as the researchers concluded, that “AI techniques can produce a free, non-invasive, real-time, any-time, instantly distributable, large-scale COVID-19 asymptomatic screening tool to augment current approaches in containing the spread of COVID-19”.
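Sensitivity and specificity are simple ratios over a confusion matrix: sensitivity is the proportion of true positives the model catches, and specificity is the proportion of true negatives it correctly clears. A quick sketch (the counts below are hypothetical, chosen only to reproduce the headline percentages, and are not taken from the paper):

```python
def sensitivity(tp, fn):
    """True positive rate: share of infected subjects the model flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: share of healthy subjects the model clears."""
    return tn / (tn + fp)

# Hypothetical counts: 197 of 200 positives caught, 942 of 1000 negatives cleared.
print(f"{sensitivity(197, 3):.1%}")   # 98.5%
print(f"{specificity(942, 58):.1%}")  # 94.2%
```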

Next

It is understood that the research team is now working with hospitals to create an even more diverse cough dataset, with plans to develop an app-based diagnostic tool.

What Does This Mean For Your Business?

This shows how AI technology can be used to create unique diagnostic tools that could have a huge positive impact on difficult world challenges such as the COVID-19 pandemic, where accurate and easy testing and screening are needed. The researchers see this tool as having value in the daily screening of students, workers, and the public as a way (after lockdowns) of helping businesses and transport systems to open and operate: a quick, non-invasive, real-time means of spotting spreaders that could help economies and countries get back in control.

Image Captioning AI More Accurate Than Humans

Microsoft has announced that in tests, its new, AI-based, automatic image captioning technology is better than humans at describing photos and images.

Part of Azure AI

The new automatic image captioning model is available via Microsoft’s Azure Cognitive Services Computer Vision offering, which is part of Azure AI. Azure Cognitive Services provides developers with AI services and cognitive APIs to enable them to build intelligent apps without the need for machine-learning expertise.

Test

The test of the new automatic image captioning software, led by Lijuan Wang, a principal research manager in Microsoft’s research lab in Redmond, involved pre-training a large AI model with a rich dataset of images paired with word tags, with each tag mapped to a specific object in an image. This ‘visual vocabulary’ approach is similar to helping children to read, e.g. using a picture book that associates single words with images, such as a picture of an apple with the word “apple” beneath it. Using this visual vocabulary system, the machine learning model learned how to compose a sentence and was then able to leverage and fine-tune that ability when presented with novel objects in images.

The Result

The research paper based on this test, published online via Cornell University’s arXiv preprint server, concluded that the model could generate fluent image captions that describe novel objects and identify the locations of those objects. The report also concluded that the machine learning model “achieved new state-of-the-art results on nocaps and surpassed the human CIDEr score.”  This means that the model matched and beat human parity on the novel object captioning at scale (nocaps) benchmark, i.e. how well the model generates captions for objects in images that were not in the dataset used to train it.

Twice As Good As Existing System

Microsoft’s Lijuan Wang has also concluded that the new AI-powered automatic image captioning system is two times better than the image captioning model that has been used in Microsoft products and services since 2015.

Five Major Human Parities

Lijuan Wang highlights how this latest AI breakthrough in automatic captioning adds to Microsoft’s existing theme of creating “human parity achievement across cognitive AI systems”.  According to her, in the last five years, Microsoft has “achieved five major human parities: in speech recognition, in machine translation, in conversational question answering, in machine reading comprehension, and in 2020, in spite of COVID-19, we got the image captioning human parity.”

What Does This Mean For Your Business?

Microsoft sees this as a ‘breakthrough’ that is essentially an extra technology tool to be added to its Azure platform so that developers can use it to serve a broad set of customers.  As highlighted by Lijuan Wang, it also sends a message to other big tech companies that are expanding their use of AI/machine learning at the moment, e.g. Google and Amazon, that Microsoft is also making major strides in the kinds of technologies that can have multiple business and other applications, as well as making existing digital search and tools more effective. Microsoft’s own Chromium-based browser, Edge, will, no doubt, be a beneficiary of this technology. This development also shows that we are now entering a stage where AI/machine learning can create tools that are at least on a par with human ability for some tasks.

Featured Article – Facial Recognition, Facial Authentication and the Future

Facial recognition and facial authentication sound similar, but there are distinct differences, and this article takes a broad look at how both are set to play more of a role in our lives going forward. So firstly, what’s the difference?

Facial Recognition

This refers to the biometric technology system that maps facial features from a photograph or video taken of a person e.g. while walking in the street or at an event and then compares that with the information stored in a database of faces to find a match. The key element here is that the cameras are separate from the database which is stored on a server.  The technology must, therefore, connect to the server and trawl through the database to find the face.  Facial recognition is often involuntary i.e. it is being used somewhere that a person happens to go – it has not been sought or requested.

Facial recognition is generally used for purposes such as (police) surveillance and monitoring, crime prevention, law enforcement and border control.

Facial Authentication

Facial Authentication, on the other hand, is a “match on device” way of a person proving that they are who they claim to be.  Unlike facial recognition, which requires details of faces to be stored on a server somewhere, a facial authentication scan compares the current face with the one that is already stored (encrypted) on the device.  Typically, facial authentication is used by a person to gain access to their own device, account, or system.  Apple’s Face ID is an example of a facial authentication system. Unlike facial recognition, it is not something that involuntarily happens to a person but is something that a person actively uses to gain entry/access.
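A “match on device” check of this kind can be sketched as comparing a stored face representation (an embedding vector captured at enrolment) with a freshly captured one, with no server round-trip. This is an illustrative sketch only: real systems such as Face ID use proprietary neural embeddings and secure hardware, and the random 128-dimensional vectors and 0.9 threshold here are assumptions for demonstration.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity of two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.9  # assumed decision threshold for this sketch

def authenticate(enrolled, scan):
    """Accept the scan if its embedding is close enough to the enrolled one."""
    return cosine_similarity(enrolled, scan) >= THRESHOLD

rng = np.random.default_rng(1)
enrolled = rng.standard_normal(128)                    # stored on device at enrolment
same_person = enrolled + 0.1 * rng.standard_normal(128)  # fresh scan, small variation
stranger = rng.standard_normal(128)                    # unrelated face

print(authenticate(enrolled, same_person))  # True
print(authenticate(enrolled, stranger))     # False
```

The key design point is that `enrolled` never leaves the device, so there is no central database of faces to breach.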

Facial recognition and facial authentication both use advanced technologies such as AI.

Facial Recognition – Advantages

The advantages of facial recognition technology include:

– Saving on Human Resources. AI-powered facial recognition systems can scan large areas, large moving crowds and can pick out individuals of interest, therefore, saving on human resources.  They can also work 24/7, all year round.

– Flexibility. Cameras that link to facial recognition systems can be set up almost anywhere, fixed in place or as part of mobile units.

– Speed. The match with a face on the database happens very quickly (in real-time), thereby enabling those on the ground to quickly apprehend or stop an individual.

– Accuracy. Systems are very accurate on the whole, although police deployments in the UK have resulted in some mistaken arrests.

Facial Recognition Challenges

Some of the main challenges to the use of facial recognition in recent times have been a lack of public trust in how and why the systems are deployed, how accurate they are (leading to possible wrongful arrest), how they affect privacy, and the lack of clear regulations to effectively control their use.

For example, in the UK:

– In December 2018, Elizabeth Denham, the UK’s Information Commissioner, launched a formal investigation into how police forces used FRT after high failure rates, misidentifications and worries about legality, bias, and privacy. This stemmed from the trial of ‘real-time’ facial recognition technology by the South Wales and Gwent Police forces on the day of the Champions League final in Cardiff in June 2017, which was criticised for costing £177,000 yet resulting in only one arrest, of a local man, which was unconnected to the deployment.

– Trials of FRT at the 2016 and 2017 Notting Hill Carnivals led to the Police facing criticism that FRT was ineffective, racially discriminatory, and confused men with women.

– In September 2018 a letter, written by Big Brother Watch (a privacy campaign group) and signed by more than 18 politicians, 25 campaign groups, and numerous academics and barristers, highlighted concerns that facial recognition is being adopted in the UK before it has been properly scrutinised.

– In September 2019 it was revealed that the owners of King’s Cross Estate had been using FRT without telling the public, and with London’s Metropolitan Police Service supplying the images for a database.

– A recently published letter by London Assembly members Caroline Pidgeon MBE AM and Sian Berry AM to Metropolitan Police commissioner Cressida Dick asked whether FRT could be withdrawn during the COVID-19 pandemic on the grounds that it has been shown to be generally inaccurate and still raises questions about civil liberties. The letter also highlighted concerns about the general inaccuracy of FRT, citing the first two deployments of LFR this year, in which more than 13,000 faces were scanned, only six individuals were stopped, and five of those six were misidentified and incorrectly stopped by the police. Also, of the eight people who triggered a ‘system alert’, seven were incorrectly identified. Concerns have also been raised about how the already questionable accuracy of FRT could be challenged further by people wearing face masks to curb the spread of COVID-19.

In the EU:

Back in January, the European Commission considered a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use are put in place.

In the U.S.

In 2018, a report by the American Civil Liberties Union (ACLU) found that Amazon’s Rekognition software showed racial bias after a test in which it incorrectly matched 28 members of Congress with mugshot photos, with people of colour disproportionately misidentified.

In December 2019, a US report showed that, after tests by the National Institute of Standards and Technology (NIST) of 189 algorithms from 99 developers, much facial recognition technology was found to be less accurate at identifying African-American and Asian faces, and was particularly prone to misidentifying African-American females.

Backlash and Tech Company Worries

The killing of George Floyd and other black people in the U.S. by police led to a backlash against facial recognition technology (FRT) and strengthened fears among big tech companies that they may, in some way, be linked with its negative aspects.

Big tech companies such as Amazon (with Rekognition), Microsoft and IBM supply facial recognition software, but some have declined to sell it to police departments pending regulation, and most have had their own concerns for some years.  For example, back in 2018, Microsoft said on its blog that “Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses”.

With big tech companies keen to maintain an ethical and socially responsible public profile, to follow up on their previously stated concerns about FRT systems and the lack of regulation, and to distance themselves from the behaviour of police as regards racism/racial profiling, or any connection to it through supplying FRT software, four big tech companies recently announced the following:

– Amazon has announced that it is implementing a one-year moratorium on police use of its FRT in order to give Congress enough time to implement appropriate rules. 

– After praising progress being made in the recent passing of “landmark facial recognition legislation” by Washington Governor Jay Inslee, Microsoft has announced that it will not sell its FRT to police departments until there is a federal law (grounded in human rights) to regulate its use.

– IBM’s CEO, Arvind Krishna, has sent a letter to the U.S. Congress with policy proposals to advance racial equality, and stating that IBM will no longer offer its general-purpose facial recognition or analysis software.

– Google has also distanced itself from FRT with Timnit Gebru, leader of Google’s ethical artificial intelligence team, commenting in the media about why she thinks that facial recognition is too dangerous to be used for law enforcement purposes at the current time.

Masks

The need to wear masks during the pandemic has proven to be a real challenge to facial recognition technology.  For example, recent research results from the US National Institute of Standards and Technology showed that even the most advanced facial recognition algorithms may correctly identify as few as 2 to 50 per cent of faces when masks are worn.

Call For Clear Masks

There have been some calls for the development of clear or transparent masks, or masks with a kind of ‘window’, as highlighted by the National Deaf Children’s Society, to help the 12 million people in the UK who are deaf or have some degree of hearing loss, e.g. by aiding lip-reading, visual cues and facial expressions.  Some companies are now producing these masks, e.g. the FDA-approved ‘Leaf’ transparent mask by Redcliffe Medical Devices in Michigan.  It remains to be seen how good facial recognition technology is at identifying people wearing a clear/transparent mask as opposed to a normal mask.

Authentication Challenges

The security, accuracy, and speed challenges of more traditional methods of authentication and verification have made facial authentication look like a more effective and attractive option.  For example, passwords can be stolen or cracked, and 2-factor authentication can be less convenient and challenging if, for example, more than one user needs access to an account.

Facial Authentication Advantages

Some of the big advantages of facial authentication include:

– Greater Accuracy Assurance. To set up facial authentication on a device, a person typically needs to photograph their government-issued ID, e.g. a passport photo page, and match it with a selfie, combined with embedded 3D liveness detection, which means the system is likely to be accurate in identifying them.

– Ease of Use. Apple’s Face ID is easy to use and is likely to become a preferred way for users to authenticate.

– Better Fraud Detection. Companies and individuals using facial authentication may have a better chance of detecting attempted fraud (as it happens) than other current systems. Companies that use facial authentication with systems may, therefore, be able to more confidently assess risk, minimise fraud losses, and provide better protection for company and customer data.

– Faster For Users. Face ID, for example, saves time compared to other methods.

– Cross-Platform Portability. 3D face maps can be created on many devices with a camera, and users can enrol using a laptop webcam and authenticate from a smartphone or tablet. Facial authentication can, therefore, be used for many different purposes.

The Future – Biometric Authentication

The small and unique physical differences that we all have (and which would be very difficult to copy) make biometric authentication something that’s likely to become more widely used going forward.  For example, retinas, irises, voices, facial characteristics, and fingerprints are all ways to clearly tell one person from another. Biometric authentication works by comparing a set of biometric data that is preset by the owner of the device with a second set of biometric data that belongs to a device visitor. Many of today’s smartphones already have facial or fingerprint recognition.

The challenge may be, however, if biometric data is required for entry systems access that is not “on device” i.e. a comparison will have to be made with data stored on a server, thereby adding a possible security risk step.

Human Micro-Chipping

There may be times where we do not have access to our devices, where fast identification is necessary, or where we may need to carry data and information that can’t be easily learned or remembered, e.g. our medical files.  For these situations, some people have argued for human micro-chipping, where microchip implants (cylindrical ‘barcodes’) can be scanned through a layer of skin to transmit a unique signal.

Neuralink Implant

Elon Musk’s Neuralink idea to create an implantable device that can act as an interface between the human brain and a computer could conceivably be used in future for advanced authentication methods.

Looking Forward

The benefits of facial recognition e.g. for crime prevention and law enforcement are obvious but for the technology to move forwards, regulatory hurdles, matters of public trust and privacy, and technical challenges e.g. bias, and accuracy need to be overcome.

Facial authentication provides a fast, accurate and easy way for people to prove that they are who they say they are.  This benefits both businesses and their customers in terms of security, speed and convenience as well as improving efficiency.

More services where sensitive data is concerned e.g. financial and medical services, and government agency interactions are likely to require facial authentication in the near future.

AI-Faked Photos and Videos Concerns

Social media analytics company Graphika has reported identifying images of faces for social media profiles that appear to have been faked using machine learning for the purpose of China-based anti-U.S. government campaigns.

Graphika Detects

Graphika, which advertises a “Disinformation and Cyber Security” service (whereby it detects strategic influence campaigns), has reported detecting AI-generated fake profile pictures and videos that were used in June to attack American policy and the administration of U.S. President Donald Trump, at a time when rhetoric between the United States and China had escalated.

The Graphika website has posted a 34-page file online detailing the findings of what it is calling the “Spamouflage” campaign.  See: https://public-assets.graphika.com/reports/graphika_report_spamouflage_goes_to_america.pdf

Spamouflage Dragon

The China-based network that, according to Graphika, has been making and spreading the anti-U.S. propaganda material via social media has been dubbed “Spamouflage Dragon”.  Graphika says that Spamouflage Dragon’s political disinformation campaigns started in 2019, focusing on attacking the Hong Kong protesters and exiled Chinese billionaire Guo Wengui (a critic of the Chinese Communist Party), and more recently focused on the U.S and the Trump administration.

Two Differences This Time

The two big differences in Spamouflage Dragon’s anti-U.S. campaign compared to its anti-Hong Kong protester campaign appear to be:

1. The use of English-language content videos, many of which appear to have been made in less than 36 hours.

2. The use of AI-generated profile pictures that appear to have been made using Generative Adversarial Networks (GANs), a class of machine-learning frameworks that allows computers to generate synthetic photographs of people.

Faked Profile Photos and Videos

Graphika reports that Spamouflage Dragon’s U.S. propaganda attacks have taken the form of:

– AI-generated photos used to create fake followers on Twitter and YouTube.  The photos, generated by GANs trained on profile photos stolen from various social media networks, were recognisable as fake because they all had the same blurred-out background, asymmetries where there should be symmetry, and subjects’ eyes looking straight ahead.

– Videos made in English, and targeting the United States, especially its foreign policy, its handling of the coronavirus outbreak, its racial inequalities, and its moves against TikTok.  The videos were easily identified as fake due to being clumsily made with language errors and automated voice-overs.

What Does This Mean For Your Business?

With a presidential election just around the corner in the U.S. and with escalating tensions between the super-power nations, the fact that these videos, AI-generated photos and their fake accounts can be so quickly and easily produced is a cause for concern in terms of their potential for political influence and interference.

For businesses, the use of this kind of technology could be a cause for concern if used as part of a fraud or social engineering attack. Criminals using AI-generated fake voices, photos and videos to gain authorisation or to obtain sensitive data is a growing threat, particularly for larger enterprises.

AI, Data Protection & The ICO

The Information Commissioner’s Office (ICO) has published guidelines to help clarify how data protection principles apply to AI projects.

The Document

The guidance document (now a pdf available online on the ICO website) was produced by an associate professor in the Department of Computer Science at the University of Oxford and is aimed at those with a compliance focus e.g. data protection officers (DPOs), risk managers and ICO auditors, and at the many different technology specialists involved in AI.  The guidance document is designed to act as a framework for auditing AI, focusing on best practices for data protection compliance and as “an aide-memoire to those running AI projects”.   The ICO guidance document can be found here: https://ico.org.uk/media/for-organisations/guide-to-data-protection/key-data-protection-themes/guidance-on-ai-and-data-protection-0-0.pdf

Why?

The ICO document notes how there is a range of risks involved in using technologies that shift the processing of personal data to complex computer systems with often opaque approaches and algorithms. These risks could include the loss or misuse of the kinds of personal data that is required (in large quantities) to train AI systems or software vulnerabilities that are the result of adding AI-related code and infrastructure.

With this in mind, the ICO has produced a set of guidelines that could help organisations involved in AI projects to mitigate those risks by being able to see how data protection principles apply to their AI project without detracting from the benefits the AI project could deliver.

What?

The guidance document clarifies the distinction between a “controller” and a “processor” in an AI project and covers the kinds of bias in data sets that lead AIs to make biased decisions. It also provides guidance on areas related to the general legal principle of accountability (for data), along with support and methodologies on how best to approach AI work, and covers aspects of the law that require greater thought, such as data minimisation, transparency of processing and ensuring individual rights around potentially automated decision-making.

Existing Guidance

The ICO points out that some aspects of this new guidance document are complemented by an existing ICO guidance document ‘Explaining decisions made with AI guidance’, published with the Alan Turing Institute in May 2020.

What Does This Mean For Your Business?

With more businesses now getting involved in AI projects, and with AI requiring, for example, large amounts of personal data to ‘train’ AI systems, and with the algorithms involved being so complicated, expert guidance of how to mitigate the data protection risks will, no doubt, be welcomed.  Having an AI auditing framework to hand could help businesses to avoid potentially costly data protection law breaches and could help them to approach and manage AI projects in a way that promotes best practice.

Voice and Contactless Technologies For a Safer Workplace

With businesses looking to ensure COVID-safe working conditions, the use of voice and contactless interfaces could help provide safer ways of carrying out daily work tasks.

Report

A recent report by 451 Research states that technology generally will play a crucial role in business continuity post-lockdown and that in the past two years (in the U.S.) there has been increased interest in the use of voice interfaces in the workplace, with voice-activated interfaces and digital assistants among the top disruptive technologies that organisations were looking to adopt.  The report also highlights how, in the past year, there has been a growing number of speech-enabled devices designed specifically for the workplace, such as desk phones, meeting room equipment and hearable devices.  Also, voice-enabled intelligent assistants have been integrated with meeting room equipment and team collaboration workflows.

Conclusions

Now, in the post-COVID-19 business environment, the report concludes that:

– These technologies and integrations could become particularly valuable in making the workplace safer and more contactless and help to retain the necessary physical distancing for those who are required to return.

– The need to provide a safe workplace will see organisations accelerating digital transformation initiatives, driving adoption of voice interfaces, biometrics, and real-time communications.

– Voice user interfaces, real-time communications and location management services will be used to help support frontline workers as well as helping organisations to further automate their operations.

Examples

Some of the examples cited in the report of those who currently use voice technology and who could benefit from using more of it include frontline workers such as nurses and doctors, first responders, factory workers, grocery store employees, drivers, and food and grocery and delivery gig workers.

It should be remembered that the report is U.S.-based, where integrations of voice-enabled intelligent assistants with meeting room equipment and team collaboration workflows began around three years ago, and where intelligent assistants, e.g. Amazon Alexa and Apple Siri, are ubiquitous for the average consumer. Also, in the U.S., Amazon has been active in expanding its Alexa for Business, which uses voice commands for e.g. managing meetings and controlling conference room devices.

Challenges

Some of the challenges that businesses in the UK face, in addition to the market conditions and making the office/workplace physically safe on a daily basis, are how to offer good service levels with many staff still not back at the office, and how to quickly and affordably take advantage of the benefits of technology to keep things working and remain competitive.

Change

With social distancing looking as though it will need to be in place for many months to come, yet with many returning to workplaces in the UK after becoming used to working remotely (facilitated by technology), there is now an expectation that (and a necessity for) workplaces to change in order to maximise safety for the users of all work buildings.

Ways that businesses in the UK could operate safely, embrace technology, and move forward could include:

– Making more use of Alexa for Business, Microsoft’s Cortana, biometrics, and AI-powered chatbots with speech recognition.

– Workers continuing to make use of Microsoft Teams, Slack, or Zoom, together with simple chatbots and other speech-based technologies, e.g. voice-to-text transcription.

– Making even greater use of the Cloud.

What Does This Mean For Your Business?

Just as governments have to balance public health with the economy, most businesses will need key people to return to their premises and will already be at least in the process of physically creating an environment, working routines and policies that ensure maximum safety within available guidelines.

The realisation by many managers and their employees that technology can successfully be quickly mastered and used to keep critical parts of the business going during lockdown looks likely to contribute to serious consideration being given to the use of more technologies such as voice and contactless technologies going forward.

Are Masks A Challenge To Facial Recognition Technology?

In addition to questions about the continued use of potentially unreliable and unregulated live facial recognition (LFR) technology, masks to protect against the spread of coronavirus may be presenting a further challenge to the technology.

Questions From London Assembly Members

A recently published letter by London Assembly members Caroline Pidgeon MBE AM and Sian Berry AM to Metropolitan Police commissioner Cressida Dick has asked whether LFR technology could be withdrawn during the COVID-19 pandemic on the grounds that it has been shown to be generally inaccurate and still raises questions about civil liberties.

Also, concerns are now being raised about how the already questionable accuracy of LFR could be challenged further by people wearing face masks to curb the spread of COVID-19.

Civil Liberties of Londoners

The two London Assembly members argue in the letter that a lack of laws, national guidelines,  regulations and debate about LFR’s use could mean that stopping Londoners or visitors to London “incorrectly, without democratic public consent and without clear justification erodes our civil liberties”.  The pair also said that this could continue to erode trust in the police, which has been declining anyway in recent years.

Inaccurate

The letter highlights concerns about the general inaccuracy of LFR, illustrated by the first two deployments of LFR this year, in which more than 13,000 faces were scanned, only six individuals were stopped, and five of those six were misidentified and incorrectly stopped by the police. Also, of the eight people who triggered a ‘system alert’, seven were incorrectly identified.

Other Concerns

Other concerns outlined by the pair in the letter about the continued deployment of LFR include worries about the possibility of mission creep, the lack of transparency about which watchlists are being used, worries that LFR will be used operationally at protests, demonstrations, or public events in future, e.g. the Notting Hill Carnival, and fears that the technology will continue to be used without clarity, accountability or full democratic consent.

Masks Are A Further Challenge

Many commentators from both sides of the facial recognition debate have raised concerns about how the wearing of face masks could affect the accuracy of facial recognition technology.

China and Russia

It has been reported that Chinese electronics manufacturer Hanwang has produced facial recognition technology that is 95% accurate in identifying the faces of people who are wearing masks.

Also, in Moscow, where the many existing cameras have been deployed to help enforce the city’s lockdown and to identify those who don’t comply, systems have been able to identify those wearing masks.

France

In France, after the easing of lockdown restrictions, it has been reported that surveillance cameras will be used to monitor compliance with social distancing and the wearing of masks. A recent trial in Cannes using French firm Datakalab’s surveillance software, which includes an automatic alert to city authorities and police for breaches of mask-wearing and social-distancing rules, looks set to be rolled out to other French cities.

What Does This Mean For Your Business?

Facial recognition is another tool which, under normal circumstances (if used responsibly, as intended), could help to fight crime in towns and city centres, thereby helping the mainly retail businesses that operate there. The worry is that there are still general questions about the accuracy of LFR and its impact on our privacy and civil liberties, and that the COVID-19 pandemic could be used as an excuse to use it more widely, in a way that leads to mission creep. It does appear that in China and Russia, for example, even individuals wearing face masks can be identified by facial recognition camera systems, although many in the West regard these as states that exercise a great deal of control over the privacy and civil liberties of their populations, and may be alarmed at such systems being used in the UK. The pandemic, however, appears to be making states less worried about infringing civil liberties for the time being as they battle to control a virus that has devastated lives and economies, and technology must be one of the tools used in the fight against COVID-19.

Featured Article – Facial Recognition and Super Computers Help in COVID-19 Fight

Technology is playing an important role in fighting the COVID-19 pandemic with adapted facial recognition cameras and super-computers now joining the battle to help beat the virus.

Adapted Facial Recognition

Facial recognition camera systems have been trialled and deployed in many different locations in the UK, famously including the 2016 and 2017 Notting Hill Carnivals, the Champions League final in Cardiff in June 2017, the Kings Cross Estate in 2019, and a deliberately “overt” trial of live facial recognition technology by the Metropolitan Police in the centre of Romford, London, in January 2019. Although it would be hard to deny that facial recognition technology (FRT) could prove a very valuable tool in the fight against crime, issues around its accuracy, bias and privacy have led to criticism in the UK from the Information Commissioner about some of the ways it has been used, while (in January) the European Commission was considering a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use were put in place.

However, one way that some facial recognition systems have been adapted to help in the fight against COVID-19 is the incorporation of temperature screening.

Thermographic Temperature-Screening

Early news reports of the initial spread of COVID-19 in China focused on how thermographic, temperature-screening cameras backed by AI could be used to pick out people in crowds displaying a key symptom: a raised temperature.

These systems are also likely to play a role in our post-lockdown, pre-vaccine world as one of many tools, systems, and procedures to improve safety as countries try to re-start their economies on the long road back.

In the UK – Facial Recognition Combined With ‘Fever Detection System’

In the UK, an AI-powered facial recognition system at Bristol Airport is reported to have been adapted to incorporate a ‘fever detection system’ developed by British technology company SCC. This means that the existing FRT system has been augmented with thermographic cameras that can quickly spot people, even in large moving groups (as is normal in airports), who have the kind of raised temperature associated with COVID-19.
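Stripped of the camera hardware, the fever-flagging step such a system needs is simple: compare each person’s estimated skin temperature against a cut-off. A minimal sketch follows; the 37.8°C threshold and the `flag_fevers` helper are illustrative assumptions, not SCC’s published design.

```python
# Minimal sketch of a fever-flagging step: given per-person temperature
# estimates from a thermal camera, flag anyone at or above a threshold.
# The 37.8 C cut-off is an illustrative assumption.
FEVER_THRESHOLD_C = 37.8

def flag_fevers(readings: dict[str, float]) -> list[str]:
    """Return IDs of people whose temperature estimate meets the threshold."""
    return [pid for pid, temp in readings.items() if temp >= FEVER_THRESHOLD_C]

readings = {"p1": 36.6, "p2": 38.1, "p3": 37.9}
print(flag_fevers(readings))  # ['p2', 'p3']
```

In a real deployment the hard part is upstream of this comparison: estimating a reliable core-temperature proxy from thermal imagery of moving faces.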

In Russia – Facial Recognition Combined With Digital Passes on Phones

It has also been reported that, as far back as March, officials in Moscow have been using the city’s network of tens of thousands of security cameras, which can offer instant, real-time facial recognition of citizens, in combination with digital passes on mobile phones. The sheer number of cameras in Moscow, which can also be used to measure social distancing and detect crowds, coupled with the sophisticated FRT at the back-end, is reported to be enough to ensure that those who are supposed to be in isolation can be detected even if they step outside their front door for a few seconds. Moscow’s facial recognition system is also reported to be able to identify a person correctly even if they are wearing a face mask.

Supercomputers

One of the great advantages of supercomputers is that they can carry out staggering numbers of calculations per second, enabling them to solve complicated problems in a fraction of the time that other computers would take. Supercomputers are, therefore, now being used in the fight against coronavirus. For example:

– Scientists at the University of Texas at Austin’s Texas Advanced Computing Centre (TACC) in the U.S. are using a Frontera supercomputer and a huge computer model of the coronavirus to help researchers design new drugs and vaccines.

– University College London (UCL) researchers, as part of a consortium of over a hundred researchers from across the US and Europe, are using some of the world’s most powerful supercomputers (including the biggest in Europe and the most powerful in the world) to study the COVID-19 virus and thereby help develop effective treatments and, hopefully, a vaccine. The researchers have been using the Summit supercomputer at Oak Ridge National Lab, USA (1st) and SuperMUC-NG at GCS@LRZ, Germany (9th) to quickly search existing libraries for compounds that could attach themselves to the surface of the novel coronavirus.

– In the U.S., the COVID-19 High-Performance Computing (HPC) Consortium, a combined effort by private and public organisations, the White House Office of Science and Technology Policy, U.S. government departments and IBM, is bringing together federal government, industry and academics who are offering free computing time and resources on their supercomputers to help understand and beat the coronavirus.

Looking Ahead

Facial recognition cameras used by police and government agencies have been the focus of some bad press and questions over a variety of issues, but the arrival of the pandemic has turned many things on their heads. The fact is that there are existing facial recognition camera systems which, when combined with other technologies, could help to stop the spread of a potentially deadly disease.

With vaccines normally taking years to develop, and with the pandemic being a serious, shared global threat, it makes sense that the world’s most powerful computing resources should be (and are being) deployed to speed up the process of understanding the virus and of quickly sorting through existing data and knowledge that could help.

Google’s Drone-Deliveries Boosted By Pandemic

The value of drone delivery services appears to have been realised now that the world’s population centres are in lockdown, with Alphabet’s (Google’s) drone deliveries doubling in test areas in the U.S. and Australia.

What Drone Delivery Service?

Alphabet Inc.’s Wing service offers parcel delivery by special drone aircraft. In the U.S., the service was approved by the federal government last October but is being operated in a limited test area around Christiansburg, Virginia. It operates through partnerships with FedEx Corp., the Walgreens store chain (for medicine, toilet roll and similar deliveries), and with a local bakery and a coffee shop. Wing is also working as part of an approved program with Virginia Tech.

Alphabet’s Wing also has a drone delivery service in the Vuosaari district of Helsinki in Finland and in Canberra, Australia where it delivers goods from a variety of vendors including Mitchell Supermarket, Krofne Donuts and even Drummond Golf (golf balls, tees and gloves).

It is the operations in the Christiansburg, Virginia area of the U.S. and in Canberra, Australia that are reported to have doubled their deliveries in response to demand from customers who are staying at home.

Other Drone Delivery Services

Wing is, of course, not the only drone delivery service. Amazon’s Prime Air delivery service, which made test deliveries as far back as 2016 and 2017, still exists but is described by Amazon as “a future delivery system” with “great potential”, and seems to have gone somewhat quiet since the much-publicised tests.

In The UK

Drone services are already in operation in the UK, offering a variety of services and performing a number of duties. In addition to drones used in the promotions and film industries, UK agencies also use drones. For example, back in 2017, Suffolk Fire and Rescue Service and multi-agency partners (Fire and Rescue, Constabulary, County Council and others) launched a shared drone service to provide a range of aerial surveillance options in support of emergency services and voluntary organisations.

Drones In The Pandemic and Beyond

Reports of other uses of drones in the pandemic and beyond include:

– Reports from Jerusalem that Israeli police have been using drones outside apartment buildings to check whether people who have been ordered to self-isolate are doing so.

– Spanish and French police using drones with speakers around public places to warn people to go home.

– The University of South Australia (UniSA) and Canada-based drone technology specialist ‘Draganfly’ teaming up to create a drone that can use sensors and computer vision to spot people with infectious respiratory diseases.

What Does This Mean For Your Business?

Clearly, drone delivery options are still a long way off for most of us, but the pandemic has highlighted more of their value: they are being used in the test areas for local shop deliveries during the pandemic, and could be applied to disease control in the post-pandemic world that we now find ourselves entering. Drones have also been used for medical purposes (live organ delivery) and could prove valuable again in future for moving medical and other help into closed-off areas where there is disease.

For now, and in the near future, we are still waiting for the tech giants in conjunction with business partners to expand the scale and scope of drone delivery so that it can begin to add value and provide a competitive edge for all kinds of businesses and organisations.

Viruses Killed By Robots

Robots armed with UV-C ultraviolet light beams that can effectively disinfect surfaces in a hospital room in 10-20 minutes are helping in the fight against COVID-19.

UVD Robots, Denmark

The robots, which are reported to have been shipped in considerable numbers to Wuhan in China, as well as to other parts of Asia and to Europe, are manufactured in Denmark’s third-largest city, Odense, by the UVD Robots company. The manufacturers say that, if used as part of a regular cleaning cycle, they could prevent and reduce the spread of infectious diseases, viruses, bacteria and other harmful microorganisms.

Breaks Down DNA

These smart robots, which look a little like a printer on wheels with several light-sabres arranged vertically in a circle on top, can autonomously clean traces of viruses from a room by ‘burning’ them from surfaces with ultraviolet light at a wavelength of 254 nm (UV-C), which breaks down the DNA structure of the virus.

Research and Testing

The UVD robots are the product of six years of research, design, development and testing by leading robotics company Blue Ocean Robotics and the Danish Healthcare Authority, supported by leading microbiologists and hygiene specialists from Odense University Hospital.

How?

The ultraviolet germicidal irradiation (UVGI) method of disinfection, which has been in accepted use since the mid-20th century, involves using short-wavelength ultraviolet light (UV-C) to disrupt the DNA of microorganisms so that they can no longer carry out cellular functions.
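The key quantity in UVGI is the delivered dose, which is simply irradiance multiplied by exposure time. The sketch below captures that relationship; the irradiance and target-dose values are illustrative assumptions, not figures published by UVD Robots.

```python
# Minimal sketch of the UVGI dose relationship:
#   dose [mJ/cm^2] = irradiance [mW/cm^2] * time [s]
def exposure_time_s(target_dose_mj_per_cm2: float,
                    irradiance_mw_per_cm2: float) -> float:
    """Seconds needed for a surface to accumulate a target UV-C dose."""
    return target_dose_mj_per_cm2 / irradiance_mw_per_cm2

# Illustrative example: a surface receiving 0.2 mW/cm^2 needs 10 minutes
# to accumulate a 120 mJ/cm^2 dose.
print(exposure_time_s(120, 0.2))  # 600.0 (seconds)
```

This is why exposure times on the order of minutes per room, as quoted for these robots, are plausible: surfaces further from the lamps receive lower irradiance and so need proportionally longer exposure.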

Features

The features of UVD’s cleaning robots include 360-degree disinfection coverage, a 3-hour battery charge, and software- and sensor-based safety features. The operating time per charge for the UV module is 2-2.5 hours (equal to 9-10 rooms). It is claimed that these units can kill up to 99.99 per cent of bacteria.
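A quick back-of-envelope check shows those figures are self-consistent: 2-2.5 hours of UV operating time spread across 9-10 rooms lands within the 10-20 minutes per room quoted at the start of this article.

```python
# Sanity check on the quoted figures: minutes of UV time per room.
minutes_low = 2.0 * 60 / 10    # shortest case: 2 h spread across 10 rooms
minutes_high = 2.5 * 60 / 9    # longest case: 2.5 h spread across 9 rooms

print(f"{minutes_low:.0f}-{minutes_high:.0f} minutes per room")  # 12-17 minutes per room
```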

HAIs

The primary purpose of the robots is to help improve the quality of care in hospitals and healthcare facilities around the world by providing an effective, low-human-risk, 24-hour-available way to eradicate the kind of Hospital Acquired Infections (HAIs) that affect millions of patients (and kill several thousand) each year.

The COVID-19 outbreak, which has led to many healthcare environments being overwhelmed with large numbers of patients, has therefore made this kind of cleaning/disinfecting system seem very attractive.

What Does This Mean For Your Business?

Now, more than ever in living memory, a device that can simply, automatically, quickly and effectively get on with cleaning hospital rooms on demand, without risk of infection (as there would be for human cleaners) and without putting more human-resource demands on hospitals, must be invaluable, which would account for the increase in orders internationally. Devices like these show how technologies can be combined to create real value and tackle a problem in an effective way that could benefit all of us.