
Don’t Ask Gemini About The Election

Google has outlined how it will restrict the kinds of election-related questions its Gemini AI chatbot will respond to.

Why? 

With 2024 being an election year for at least 64 countries (including the US, UK, India, and South Africa), the risk of AI being misused to spread misinformation has grown dramatically. The problem is compounded by a lack of trust in AI's reliability among various countries' governments (India's, for example). There are also worries about how AI could be abused by adversaries of the country holding the election, e.g. to influence the outcome.

Recently, for example, Google made the news when its text-to-image AI tool was criticised as overly 'woke' and had to be paused and corrected following "inaccuracies." When Gemini was asked to generate images of the Founding Fathers of the US, it returned images of a black George Washington. In another reported test, when asked to generate images of a 1943 German (Nazi) soldier, Gemini's image generator returned pictures of people of clearly diverse ethnicities (a black woman and an Asian woman) in Nazi uniforms.

Google also says that its restriction of election-related responses is being applied out of caution and as part of the company's commitment to supporting the election process by "surfacing high-quality information to voters, safeguarding our platforms from abuse, and helping people navigate AI-generated content."

What Happens If You Ask The ‘Wrong’ Question? 

It's been reported that Gemini is already refusing to answer questions about the US presidential election, in which President Joe Biden and Donald Trump are the two contenders. If, for example, users ask Gemini a question that falls into its election-related restricted category, they can reportedly expect a response along the lines of: "I'm still learning how to answer this question. In the meantime, try Google Search."

India 

With India being the world’s largest democracy (about to undertake the world’s biggest election involving 970 million voters, taking 44 days), it’s not surprising that Google has addressed India’s AI concerns specifically in a recent blog post. Google says: “With millions of eligible voters in India heading to the polls for the General Election in the coming months, Google is committed to supporting the election process by surfacing high-quality information to voters, safeguarding our platforms from abuse and helping people navigate AI-generated content.” 

With its election due to start in April, the Indian government has already expressed its concerns and doubts about AI and has asked tech companies to seek its approval first before launching “unreliable” or “under-tested” generative AI models or tools. It has also warned tech companies that their AI products shouldn’t generate responses that could “threaten the integrity of the electoral process.” 

OpenAI Meeting 

It's also been reported that representatives from ChatGPT's developer, OpenAI, met with officials from the Election Commission of India (ECI) last month to look at how OpenAI's ChatGPT tool could be used safely in the election.

OpenAI advisor and former India head at ‘X’/Twitter, Rishi Jaitly, is quoted from an email to the ECI (made public) as saying: “It goes without saying that we [OpenAI] want to ensure our platforms are not misused in the coming general elections”. 

Could Be Stifling 

However, critics in India have argued that clamping down too hard on AI in this way could stifle innovation and leave the industry suffocated by over-regulation.

Protection 

Google has highlighted a number of measures that it will be using to keep its products safe from abuse and thereby protect the integrity of elections. These include enforcing its policies and using AI models to fight abuse at scale, enforcing policies and restrictions around who can run election-related advertising on its platforms, and working with the wider ecosystem to counter misinformation. This will include working with Shakti, the India Election Fact-Checking Collective, a consortium of news publishers and fact-checkers in India.

What Does This Mean For Your Business? 

The combination of rapidly advancing and widely available generative AI tools, popular social media channels, and paid online advertising looks very likely to pose considerable challenges to the integrity of the large number of elections taking place around the world this year.

Most notably, with India about to host the world's largest election, the government there has been clear about its fears over the possible negative influence of AI, e.g. through convincing deepfakes designed to spread misinformation, AI simply proving inaccurate, or AI making it much easier for bad actors to exert influence.

The Indian government has even met with OpenAI to seek reassurance and help. AI companies such as Google (particularly since its embarrassment over the recent 'woke' inaccuracies, and perhaps having witnessed the accusations levelled at Facebook after the last US election and the UK Brexit vote) are very keen to protect their reputations and to show what measures they'll be taking to stop their AI and other products from being misused, with potentially serious consequences.

Although governments’ fears about AI deepfake interference may well be justified, some would say that following the recent ‘election’ in Russia, misusing AI is less worrying than more direct forms of influence. Also, although protection against AI misuse in elections is needed, a balance must be struck so that AI is not over-regulated to the point where innovation is stifled.