Viruses Killed By Robots

Robots armed with beams of ultraviolet (UV-C) light, which can effectively disinfect the surfaces of a hospital room in 10-20 minutes, are helping in the fight against COVID-19.

UVD Robots, Denmark

The robots, which are reported to have been shipped in considerable numbers to Wuhan in China, elsewhere in Asia, and to parts of Europe, are manufactured in Denmark’s third-largest city, Odense, by the UVD Robots company.  The manufacturer says that, used as part of a regular cleaning cycle, they can prevent and reduce the spread of infectious diseases by destroying viruses, bacteria and other harmful microorganisms.

Breaks Down DNA

These smart robots, which look a little like a printer on wheels with several light-sabres arranged vertically in a circle on top, can autonomously clean traces of viruses from a room by ‘burning’ them from surfaces with UV-C light at a wavelength of 254 nm, which breaks down the DNA structure of the virus.

Research and Testing

The UVD robots are the product of six years of research, design, development, and testing by Blue Ocean Robotics and the Danish Healthcare Authority, supported by leading microbiologists and hygiene specialists from Odense University Hospital.

How?

The ultraviolet germicidal irradiation (UVGI) method of disinfection, which has been in accepted use since the mid-20th century, involves using short-wavelength ultraviolet (UV-C) light to disrupt the DNA of microorganisms so that they can no longer carry out cellular functions.
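The germicidal effect of UVGI depends on the dose (fluence) delivered to a surface, which is simply irradiance multiplied by exposure time. A minimal sketch of that calculation, using illustrative figures rather than any manufacturer-supplied data:

```python
def uv_dose_mj_per_cm2(irradiance_mw_per_cm2: float, seconds: float) -> float:
    """UV-C dose (fluence) in mJ/cm^2: irradiance (mW/cm^2) x exposure time (s)."""
    return irradiance_mw_per_cm2 * seconds

# Illustrative figures only: an irradiance of 0.2 mW/cm^2 at the surface,
# sustained over a 10-minute cycle, delivers a dose of 120 mJ/cm^2.
dose = uv_dose_mj_per_cm2(0.2, 10 * 60)
print(dose)  # 120.0
```

The dose required varies by organism, which is why disinfection cycles are quoted in minutes per room rather than as a single fixed time.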

Features

The features of UVD’s cleaning robots include 360-degree disinfection coverage, a battery that charges in three hours, and software and sensor-based safety features.  The operating time per charge for the UV module is 2-2.5 hours (enough for 9-10 rooms).  It is claimed that these units can kill up to 99.99 per cent of bacteria.

HAIs

The primary purpose of the robots is to help improve the quality of care in hospitals and healthcare facilities around the world by providing an effective, low-risk, 24-hour-available way to eradicate the kind of Hospital Acquired Infections (HAIs) that affect millions of patients (and kill several thousand) each year.

The COVID-19 outbreak, which has left many healthcare environments overwhelmed with large numbers of patients, has therefore made this kind of cleaning/disinfecting system look very attractive.

What Does This Mean For Your Business?

Now, more than ever in living memory, a device that can simply, automatically, quickly and effectively get on with cleaning hospital rooms on demand, without risk of infection (as there would be for human cleaners) and without putting further demands on hospital staff, must be invaluable, which would account for the increase in orders internationally. Devices like these show how technologies can be combined to create real value and tackle a problem in an effective way that could benefit all of us.

AI Skills Course Available – Free of Charge

A free, basic AI skills course, funded by Finland’s Ministry of Economic Affairs and Employment (MEAE), is being made available to citizens across the EU’s 27 member states. 

Success in Finland

The decision by the Finnish government to make the course available online across the EU to an estimated five million Europeans (1% of the total population of EU states) in the 2020-2021 academic year was boosted by the popularity of a test run of the course in Finland back in 2018.

The Course

The six-chapter ‘Elements of AI’ course, which is still open to UK citizens, aims to de-mystify AI and offers a basic understanding of what AI is, how it can be used to boost business productivity, and how it will affect jobs and society in the future. The chapters can be studied in a structured way or at your own pace and cover: What is AI?, AI problem solving, real-world AI, machine learning, neural networks, and implications.

The course is available in six languages – English, German, Swedish, Estonian, Norwegian and Finnish.

Run by the University of Helsinki, the course represents a way in which a university can play a role in reaching a Europe-wide, cross-border audience and build important competencies for the future across that area.

Gift

The provision of the online course, which is funded by the MEAE at an estimated cost of €1.7m a year, is essentially a gift from Finland, not just to the leaders of fellow EU states but to the people of EU countries, to mark the end of Finland’s six-month rotating Presidency of the Council of the EU.  The hope, therefore, is that Finland’s gift will have real-world value in helping to develop digital literacy in the EU.

You can sign up for the course here: https://www.elementsofai.com/

170 Countries

It’s claimed that, to date, the free online AI course has been completed by students from over 170 countries and that around 40% of course participants are women, more than double the average for computer science courses.

What Does This Mean For Your Business?

With a tech skills shortage in the UK, and with AI becoming a component in an increasing number of products and services, this free online course could be of real value to businesses across Europe, particularly since you can very rarely expect to get something of value for nothing.  The fact that the course is delivered online with just a few details needed to enrol makes it accessible, and the fact that it can be tackled in a structured way or at your own pace makes it convenient.  It’s also refreshing to see a country giving a gift to millions of citizens rather than just to other EU leaders, and the fact that more women are taking the course must be good news for the tech and science sectors. Anything that can effectively, quickly and cheaply make a positive difference to digital literacy in the EU is likely to end up benefitting businesses across Europe.  Also, even though the UK is now out of the EU, it’s a good job that we’re still able to access the course.

Amazon Offering Custom ‘Brand Voice’ to Replace Default Alexa Voice

Amazon’s AWS is offering a new ‘Brand Voice’ capability that enables companies to create their own custom voice for Alexa, replacing the default voice with one that reflects their “persona”, such as the voice of Colonel Sanders for KFC.

Amazon Polly

The capability is being offered through ‘Amazon Polly’, the Amazon Web Services (AWS) cloud service that converts text into lifelike speech.  The name ‘Polly’ is a reference to parrots, which are well-known for being able to mimic human voices.

Amazon says that companies can work with the Amazon Polly team of AI research scientists and linguists to build an exclusive, high-quality, Neural Text-to-Speech (NTTS) voice that will represent the “persona” of a brand.
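Brand Voice itself is a bespoke engagement with the Polly team, but the underlying service is the standard Polly text-to-speech API, where the neural (NTTS) engine is selected per request. A hedged sketch of the parameters a developer would pass to Polly’s `synthesize_speech` call; the `build_polly_request` helper and the example voice are illustrative, not part of the AWS SDK:

```python
def build_polly_request(text: str, voice_id: str = "Joanna") -> dict:
    """Illustrative helper: assemble kwargs for Polly's synthesize_speech call.
    A custom Brand Voice would appear as an account-specific VoiceId."""
    return {
        "Engine": "neural",      # select the NTTS engine rather than "standard"
        "OutputFormat": "mp3",
        "Text": text,
        "VoiceId": voice_id,
    }

params = build_polly_request("Welcome back. How can I help?")
# In a real application (requires boto3 and AWS credentials):
#   import boto3
#   audio = boto3.client("polly").synthesize_speech(**params)
print(params["Engine"])  # neural
```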

Why?

According to Amazon, ‘Brand Voice’ will give companies another way to differentiate themselves by incorporating a unique vocal identity into their products and services. Hearing a company’s ‘Brand Voice’ also helps to create an experience that strengthens the brand, triggers the brand messages and attitudes that a customer has already assimilated through advertising, and provides another element of consistency across brand messages, communications and interactions.

How?

The capability uses deep learning technology that can learn the intonation patterns of natural speech data and reproduce a voice in a similar style or tone. For example, in September, Alexa users were given the option to use the voice of Samuel L. Jackson for their Alexa; to produce the voice, the NTTS models were ‘trained’ using hours of recorded dialogue rather than requiring the actor to read new dialogue for the system.

Who?

Amazon says on the Polly website that it has already been working with Kentucky Fried Chicken (KFC) Canada (on a Colonel Sanders-style brand voice) and with National Australia Bank (NAB), using “the same deep learning technology that powers the voice of Alexa”.

Uses

The ‘Brand Voice’ created for companies can, for example, be used for call centre systems (as with NAB).

What Does This Mean For Your Business?

The almost inevitable ‘Brand Voice’ move sees Amazon taking another step towards monetising Alexa and moving further into the business market, where there is huge potential for modified, targeted and customised versions of Alexa and digital assistants.  Back in April last year, for example, Amazon launched its Alexa for Business Blueprints, a platform that enables businesses to make their own Alexa-powered applications for their organisation and incorporate their own customised, private ‘skills’. The announcement of ‘Brand Voice’, therefore, is really an extension of this programme.  For businesses and organisations, Alexa for Business and ‘Brand Voice’ offer the opportunity to customise some powerful, flexible technology relatively easily, in a way that can closely meet their individual needs and provide a new marketing and communications tool that adds value in a unique way.

Police Images of Serious Offenders Reportedly Shared With Private Landlord For Facial Recognition Trial

There have been calls for government intervention after it was alleged that South Yorkshire Police shared its images of serious offenders with a private landlord (Meadowhall shopping centre in Sheffield) as part of a live facial recognition trial.

The Facial Recognition Trial

The alleged details of the image-sharing for the trial were brought to the attention of the public by the BBC radio programme File on 4, and by privacy group Big Brother Watch.

It has been reported that the Meadowhall shopping centre’s facial recognition trial ran for four weeks between January and March 2018 and that no signs warning visitors that facial recognition was in use were displayed. The owner of Meadowhall shopping centre is reported as saying (last August) that the data from the facial recognition trial was “deleted immediately” after the trial ended. It has also been reported that the police have confirmed that they supported the trial.

Questions

The disclosure has prompted some commentators to question the ethics and legality not only of holding public facial recognition trials without displaying signs, but also of the police allegedly sharing photos of criminals (presumably from their own records) with a private landlord.

The UK Home Office’s Surveillance Camera Code of Practice, however, does appear to support the use of facial recognition or other biometric characteristic recognition systems if their use is “clearly justified and proportionate.”

Other Shopping Centres

Other facial recognition trials in shopping centres and public shopping areas have been met with a negative response too.  For example, a trial at the Trafford Centre shopping mall in Manchester was halted in 2018, and the King’s Cross facial recognition trial (between May 2016 and March 2018) is still the subject of an ICO investigation.

Met Rolling Out Facial Recognition Anyway

Meanwhile, and despite a warning from Elizabeth Denham, the UK’s Information Commissioner, back in November, the Metropolitan Police has announced it will be going ahead with its plans to use live facial recognition cameras on an operational basis for the first time on London’s streets to find suspects wanted for serious or violent crime. Also, it has been reported that South Wales Police will be going ahead in the Spring with a trial of body-worn facial recognition cameras.

EU – No Ban

Even though many privacy campaigners had hoped that the European Commission would push for a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use are put in place, Reuters has reported that the European Union has now scrapped any possibility of a ban on facial recognition technology in public spaces.

Facebook Pays

Meanwhile, Facebook has just announced that it will pay £421m to a group of Facebook users in Illinois, who argued that its facial recognition tool violated the state’s privacy laws.

What Does This Mean For Your Business?

Most people would accept that facial recognition could be a helpful tool in fighting crime, saving costs, and catching known criminals more quickly, and that this would benefit businesses and individuals. The challenge, however, is that despite ICO investigations and calls for caution, and despite the technology’s known problems (e.g. inaccuracy, and a bias towards better identification of white and male faces), not to mention its impact on privacy, the police appear to be pushing ahead with its use anyway.  For privacy campaigners and others, this may give the impression that their real concerns (many of which are shared by the ICO) are being pushed aside in an apparent rush to get the technology rolled out. To many, the technology is being used before any of its major problems have been resolved, before there has been a proper debate, and before an up-to-date statutory law and code of practice for it have been introduced.

EU Considers Ban on Facial Recognition

It has been reported that the European Commission is considering a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use are put in place.

Document

The reports of a possible three to five-year ban come from an 18-page EC report, which has been seen by some major news distributors.

Why?

Facial recognition trials in the UK have raised the issues of how the technology can be intrusive, can infringe upon a person’s privacy and data rights, and is not always accurate.  For example:

– In December 2018, Elizabeth Denham, the UK’s Information Commissioner, launched a formal investigation into how police forces used facial recognition technology (FRT) after high failure rates, misidentifications and worries about legality, bias, and privacy. This stemmed from the trial of ‘real-time’ facial recognition technology by South Wales and Gwent Police forces on Champions League final day in Cardiff in June 2017, which was criticised for costing £177,000 yet resulting in only one arrest, of a local man, for a matter unconnected with the trial.

– Trials of FRT at the 2016 and 2017 Notting Hill Carnivals led to the Police facing criticism that FRT was ineffective, racially discriminatory, and confused men with women.

– In September 2018 a letter, written by Big Brother Watch (a privacy campaign group) and signed by more than 18 politicians, 25 campaign groups, and numerous academics and barristers highlighted concerns that facial recognition is being adopted in the UK before it has been properly scrutinised.

– In September 2019 it was revealed that the owners of King’s Cross Estate had been using FRT without telling the public, and with London’s Metropolitan Police Service supplying the images for a database.

– In December 2019, a US report showed that, after tests by the National Institute of Standards and Technology (NIST) of 189 algorithms from 99 developers, facial recognition algorithms were found to be less accurate at identifying African-American and Asian faces, and particularly prone to misidentifying African-American females.

Impact Assessment

The 18-page EC report is said to contain the recommendation that a three to five-year ban on the public use of facial recognition technology would allow time to develop a methodology for assessing the impacts of (and developing risk management measures for) the use of facial recognition technology.

Google Calls For AI To Be Regulated

The way in which artificial intelligence (AI) is being widely and quickly deployed before regulation of the technology has had a chance to catch up is the subject of recent comments by Sundar Pichai, the head of Google’s parent company, Alphabet.  Mr Pichai (in the Financial Times) called for a sensible approach to regulation and for a set of rules for areas of AI development such as self-driving cars and AI usage in health.

What Does This Mean For Your Business?

It seems that there is some discomfort in the UK, Europe and beyond that relatively new technologies, which have known flaws and are of concern to government representatives, interest groups and the public, are being rolled out before the necessary regulations and risk management measures have been properly considered and developed.  It is true that facial recognition could bring real benefits (e.g. fighting crime) to many businesses, and that AI offers businesses a vast range of opportunities to save money and time and to innovate in products, services and processes.  However, the flaws in these technologies, and their potential to be used improperly, covertly, and in ways that could infringe the rights of the public, cannot be ignored. It is likely to be a good thing in the long term that time is being taken now to address stakeholders’ concerns and develop regulations and measures that could prevent bigger problems involving these technologies further down the line.

Glimpse of the Future of Tech at CES Expo Show

This week, at the giant CES expo in Las Vegas, the latest technology from around the world is on display, and here are just a few of the glimpses into the future of business tech being demonstrated there.

Cyberlink FaceMe®

Leading facial recognition company CyberLink will be demonstrating the power of its highly accurate FaceMe® AI engine. The FaceMe® system, which CyberLink claims has an accuracy (true acceptance rate, TAR) of 99.5% at a false acceptance rate (FAR) of 10⁻⁴, is so advanced that it can recognise the age, gender and even the emotional state of passers-by and can use this information to display appropriate adverts.
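The quoted figure pairs two standard biometric metrics: the true acceptance rate (the share of genuine matches accepted) measured at a score threshold chosen so that the false acceptance rate (the share of impostor attempts accepted) stays below a target. A minimal sketch of that calculation with made-up similarity scores, not CyberLink’s data:

```python
def tar_at_far(genuine, impostor, target_far):
    """True acceptance rate at the best threshold whose FAR <= target.
    Scores are similarity values; higher means 'more likely a match'."""
    best_tar = 0.0
    for t in sorted(set(genuine) | set(impostor)):  # candidate thresholds
        far = sum(s >= t for s in impostor) / len(impostor)
        if far <= target_far:
            tar = sum(s >= t for s in genuine) / len(genuine)
            best_tar = max(best_tar, tar)
    return best_tar

# Toy data: genuine pairs score high, impostor pairs score low.
genuine = [0.91, 0.88, 0.95, 0.97, 0.90]
impostor = [0.10, 0.22, 0.15, 0.30, 0.05]
print(tar_at_far(genuine, impostor, 0.0001))  # 1.0 on this toy data
```

In real evaluations the impostor set is large enough that a 10⁻⁴ FAR is measurable; the toy data above only illustrates the mechanics.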

D-ID

In a world where facial recognition technology is becoming more prevalent, D-ID recognises the need to protect the sensitive biometric data that makes up our faces. On display at the CES expo is D-ID’s anti-facial-recognition solution, which uses an algorithm, advanced image processing and deep learning techniques to re-synthesise any given photo into a protected version, so that photos are unrecognisable to facial recognition algorithms while humans will not notice any difference.

Hour One

Another interesting contribution to the Las Vegas CES expo is Hour One’s AI-powered system for creating premium-quality synthetic characters based on real-life people. The idea is that these very realistic characters can be used to promote products without companies having to hire expensive stars and actors, and that companies using Hour One can save time and money and get a close match to their brief thanks to the capabilities, scale/scope and fast turnaround that Hour One offers.

Mirriad

Also adding to the intriguing and engaging tech innovations at the expo, albeit at private meetings there, is Mirriad’s AI-powered solution for analysing videos, TV programmes and movies for brand/product insertion opportunities and enabling retrospective brand placements in the visual content. For example, different adverts can be inserted in roadside billboards and bus stop advertising boards that are shown in pre-shot videos and films.

What Does This Mean For Your Business?

AI is clearly emerging as an engine that’s driving change and creating a wide range of opportunities for business marketing as well as for security purposes. The realism, accuracy, flexibility, scope, scale, and potential cost savings that AI offers could provide many beneficial business opportunities. The flipside for us as individuals and consumers is that, while biometric systems (such as facial recognition) offer us some convenience and protection from cyber-crime, they can also threaten our privacy and security. It is ironic, and probably inevitable, that we may therefore need and value AI-powered protection solutions such as D-ID to protect us.

AI Better at Breast Cancer Detection Than Doctors

Researchers at Google Health have created an AI program which, in tests, has proven to be more accurate at detecting and diagnosing breast cancer than expert human radiologists.

Trained

The AI software, which was developed by Google Health researchers in conjunction with DeepMind, Cancer Research UK Imperial Centre, Northwestern University and Royal Surrey County Hospital, was ‘trained’ to detect the presence of breast cancer using X-ray images (from mammograms) from nearly 29,000 women.

Results

In the UK tests, compared to one radiologist, the AI program delivered a reduction of 1.2% in false positives (where a mammogram is incorrectly diagnosed as abnormal) and a reduction of 2.7% in false negatives (where a cancer is missed). These positive results were even greater in the US tests.

In a separate test, which used the program trained only on UK data and then tested it against US data (to determine its wider effectiveness), there was a very respectable 3.5% reduction in false positives and an 8.1% reduction in false negatives.

In short, these results appear to show that the AI program, which outperformed six radiologists in the reading of mammograms despite only having mammograms to go on (human radiologists also have access to medical history), is better at spotting cancer than a single doctor, and as good at spotting cancer as the current double-reading system of two doctors.
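The reductions quoted above are changes, in percentage points, to the false-positive and false-negative rates, which are computed from a confusion matrix of reader decisions. A minimal sketch using made-up counts, not the study’s data:

```python
def error_rates(tp, fp, tn, fn):
    """False positive rate (healthy cases flagged as abnormal) and
    false negative rate (cancers missed), from confusion-matrix counts."""
    fpr = fp / (fp + tn)   # share of negative cases incorrectly flagged
    fnr = fn / (fn + tp)   # share of positive cases missed
    return fpr, fnr

# Illustrative counts only, not from the study.
reader_fpr, reader_fnr = error_rates(tp=180, fp=60, tn=940, fn=20)
ai_fpr, ai_fnr = error_rates(tp=185, fp=48, tn=952, fn=15)

# Reduction in percentage points:
print(round((reader_fpr - ai_fpr) * 100, 1))  # 1.2
print(round((reader_fnr - ai_fnr) * 100, 1))  # 2.5
```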

Promising

Even though these initial test results have received a lot of publicity and appear to be very positive, bearing in mind the seriousness of the condition, AI-based systems of this kind still have some way to go before further research, clinical studies and regulatory approval bring them into the mainstream of healthcare services.

What Does This Mean For Your Business?

This faster and more accurate way of spotting and diagnosing breast cancer by harnessing the power of AI could bring many benefits. These include reducing stress for patients by shortening diagnosis times, easing the workload pressure on already-stretched radiologists, and going some way towards bridging the UK’s current shortage of radiologists, who normally need more than 10 years of training to read mammograms. All this could mean earlier diagnosis and higher survival rates.

For businesses, this serves as an example of how AI can be trained and used to study some of the most complex pieces of information and produce results that can be more accurate, faster, and cheaper than humans doing the same job, remembering that, of course, AI programs work 24/7 without a day off.

Google Announces New ‘Teachable Machine 2.0’ No-Code Machine Learning Model Generator

Two years on from its first incarnation, Google has announced the introduction of its ‘Teachable Machine 2.0’, a no-code custom machine learning model generating platform that can be used by anyone and requires no coding experience.

First Version

Back in 2017, Google introduced the first version of Teachable Machine, which enabled anyone to teach their computer to recognise images using a webcam. This first version gave many children and young people their first experience of training their own machine learning model, i.e. teaching their computer how to recognise patterns in data (images) and assign new data to categories.

Teachable Machine 2.0

Google’s new ‘Teachable Machine 2.0’ is a browser-based system that records from the user’s computer’s webcam and microphone, and with the click of a ‘train’ button (no coding required), it can be trained to recognise images, sounds or poses.  This enables the user to quickly and easily create their own custom machine learning models which they can download and use on their own device or upload and host online.
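Under the hood, ‘training’ here means fitting a classification model to labelled examples so that new inputs can be assigned to categories. Teachable Machine automates this in the browser without code; the underlying idea can be sketched, in drastically simplified form with made-up data, as a nearest-centroid classifier:

```python
from statistics import mean

def train(examples):
    """Fit a nearest-centroid model: one average feature vector per class.
    examples: {class_name: [feature_vector, ...]}"""
    return {label: [mean(col) for col in zip(*vectors)]
            for label, vectors in examples.items()}

def classify(model, x):
    """Assign x to the class with the closest centroid (squared distance)."""
    return min(model, key=lambda label: sum((a - b) ** 2
                                            for a, b in zip(model[label], x)))

# Made-up 2-D features standing in for image/sound/pose data.
model = train({"cat": [[1.0, 1.0], [1.2, 0.8]],
               "dog": [[5.0, 5.0], [4.8, 5.2]]})
print(classify(model, [1.1, 0.9]))  # cat
```

Real Teachable Machine models are neural networks rather than centroids, but the train-then-classify workflow the tool wraps behind its ‘train’ button is the same.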

Fear-Busting and Confidence

One of the key points that Google wants to emphasise is that the no-code, click-of-a-button aspect of this machine learning model generator can give young users the confidence that they can successfully and creatively use advanced computer technology without coding experience.  This, as Google mentions on its blog, has been identified as important by parents of girls, since girls face challenges in becoming interested in computer science and in finding jobs in the field.

What Can It Be Used For?

In addition to being used as a teaching aid, examples of how Teachable Machine 2.0 has been used include:

  • Improving communication for people with impaired speech. For example, recorded voice samples have been turned into spectrograms that can be used to “train” a computer system to better recognise less common types of speech.
  • Helping with game design.
  • Making physical sorting machines. For example, Google’s own project has used Teachable Machine to create a model that can classify and sort objects.

What Does This Mean For Your Business?

The UK has a tech skills shortage that has been putting pressure on UK businesses that are unable to find skilled people to drive innovation and tech product and service development forward.  A platform that enables young people to feel more confident and creative in using the latest technologies from a young age without being thwarted by the need for coding could lead to more young people choosing computer science in further and higher education and seeking careers in IT.  This, in turn, could help UK businesses.

No-coding solutions such as Teachable Machine 2.0 represent a way of democratising app and software development and utilising ideas and creativity that may have previously been suppressed by a lack of coding experience.  Businesses always need creativity and innovation in order to create new opportunities and competitive advantage and Teachable Machine 2.0 may be one small step in helping that to happen further down the line.

ICO Warns Police on Facial Recognition

In a recent blog post, Elizabeth Denham, the UK’s Information Commissioner, has said that the police need to slow down and justify their use of live facial recognition technology (LFR) in order to maintain the right balance between reducing our privacy and keeping us safe.

Serious Concerns Raised

The ICO cited how the results of an investigation into trials of live facial recognition (LFR) by the Metropolitan Police Service (MPS) and South Wales Police (SWP) led to the raising of serious concerns about the use of a technology that relies on a large amount of sensitive personal information.

Examples

In December last year, Elizabeth Denham launched a formal investigation into how police forces used FRT after high failure rates, misidentifications and worries about legality, bias, and privacy.  For example, the trial of ‘real-time’ facial recognition technology by South Wales and Gwent Police forces on Champions League final day in Cardiff in June 2017 was criticised for costing £177,000 yet resulting in only one arrest, of a local man, for a matter unconnected with the trial.

Also, after trials of FRT at the 2016 and 2017 Notting Hill Carnivals, the Police faced criticism that FRT was ineffective, racially discriminatory, and confused men with women.

MPs Also Called To Stop Police Facial Recognition

Back in July this year, following criticism of the Police usage of facial recognition technology in terms of privacy, accuracy, bias, and management of the image database, the House of Commons Science and Technology Committee called for a temporary halt in the use of the facial recognition system.

Stop and Take a Breath

In her blog post, Elizabeth Denham urged police not to move too quickly with FRT but to work within the model of policing by consent. She makes the point that “technology moves quickly” and that “it is right that our police forces should explore how new techniques can help keep us safe. But from a regulator’s perspective, I must ensure that everyone working in this developing area stops to take a breath and works to satisfy the full rigour of UK data protection law.”

Commissioner’s Opinion Document Published

The ICO’s investigations have now led the Commissioner to produce and publish an Opinion document on the subject, as allowed by the Data Protection Act 2018 (DPA 2018), s116(2) in conjunction with Schedule 13(2)(d).  The Opinion document has been prepared primarily for police forces and other law enforcement agencies that are using live facial recognition technology (LFR) in public spaces, and offers guidance on how to comply with the provisions of the DPA 2018.

The key conclusions of the Opinion document (which you can find here: https://ico.org.uk/media/about-the-ico/documents/2616184/live-frt-law-enforcement-opinion-20191031.pdf) are that the police need to recognise the strict necessity threshold for LFR use, that there needs to be more learning within the policing sector about the technology, that public debate about LFR needs to be encouraged, and that a statutory binding code of practice needs to be introduced by the government at the earliest opportunity.

What Does This Mean For Your Business?

Businesses, individuals and the government are all aware of the positive contribution that camera-based monitoring technologies can make in deterring criminal activity, locating and catching perpetrators (in what should be a faster and more cost-effective way with live FRT), and providing evidence for arrests and trials.  The UK’s Home Office has also noted that there is general public support for live FRT to (for example) identify potential terrorists and people wanted for serious violent crimes.  However, the ICO’s apparently reasonable point is that moving too quickly with FRT, without enough knowledge or a Code of Practice, and without respecting the strict necessity threshold that should apply to its use, could reduce public trust in the police and in FRT technology.  Greater public debate about the subject, which the ICO seeks to encourage, could also raise awareness of FRT, show how a balanced approach to its use can be achieved, and help clarify the extent to which FRT could impact upon our privacy and data protection rights.