Bribes for bias: Can AI be corrupted?
The potential abuse of artificial intelligence for private gain has profound implications for our economic, political and social lives
Riding into a brave new world? Image generated by Stable Diffusion, a text-to-image diffusion model that generates photorealistic images given any text input.
Posted on: 27 February 2023
Recently your social media feed may have been flooded with headlines on the advances in Artificial Intelligence (AI) or even AI-generated images. Text-to-image algorithms such as DALL-E 2 and Stable Diffusion are becoming hugely popular. ChatGPT, a chatbot developed by OpenAI and powered by one of the world's most capable large language models, reached one million users in its first week – a rate of growth much faster than Twitter, Facebook or TikTok.
As AI demonstrates its ability to craft poetry, write code and even pollinate crops by imitating bees, the governance community is waking up to the impact of artificial intelligence on the knotty problem of corruption. Policy institutes and academics have pointed to the potential use of AI to detect fraud and corruption, with some commentators heralding these technologies as the "next frontier in anti-corruption."
Amid all the excitement, it can be easy to lose sight of the fact that AI can also produce undesirable outcomes due to biased input data, faulty algorithms or irresponsible implementation. To date, most of the negative repercussions from AI that have been documented are unintentional side-effects. However, new technologies present new opportunities to wilfully abuse power, and the effect that AI could have as an “enabler” of corruption has received much less attention.
A recent Transparency International working paper introduces the concept of "corrupt AI" – defined as the abuse of AI systems by public power holders for private gain – and documents how these tools can be designed, manipulated or applied in a way that constitutes corruption.
Politicians, for instance, could abuse their power by commissioning hyper-realistic deepfakes to discredit their political opponents and increase their chances of staying in office. The misuse of AI tools on social media to manipulate elections through the spread of disinformation has already been well documented.
Yet corrupt AI does not just occur when an AI system is designed with malicious intent. It can also take place when people exploit the vulnerabilities of otherwise beneficial AI systems. This becomes of greater concern with the significant push worldwide towards digitalising public administration. AlgorithmWatch, for instance, recently concluded that citizens in many countries already live in "automated societies" in which public bodies rely on lines of code to make important social, economic and even political decisions.
Digitalising government services has long been recognised as reducing officials' discretion when making decisions and thereby constraining opportunities for corruption. Yet, as our paper demonstrates, replacing humans with AI brings novel corruption risks. Here are four good reasons why the risk of "corrupt AI" should be taken seriously.
1. Deniability and dissonance
People are more likely to behave in a corrupt manner when they are less likely to get caught, such as when they can hide behind plausible deniability. The risk of individuals breaking ethical rules to reap illicit benefits is even higher in circumstances where they are not directly confronted by victims – in other words, when there is a large psychological distance to the people affected by their unethical behaviour.
According to research in behavioural science, the deployment of artificial intelligence systems could heighten both risk factors. The complexity and autonomy of machine-learning systems – which transform input data into outputs in ways that are often incomprehensible to humans – could make it easier for corrupt manipulation of this technology to escape detection. At the same time, introducing AI tools as intermediaries in decision-making processes can increase the psychological distance between perpetrator and victim.
The healthcare sector is one example of an area where these risk factors can undermine the potential benefits of artificial intelligence. Doctors and health sector workers are already being trained to use algorithms to help detect diseases and to assist in estimating healthcare costs. Yet there is some indication that these systems can be easily fooled. By changing just a few pixels or the orientation of an image, doctors can trick AI image recognition systems into producing faulty results, such as misidentifying a harmless mole as cancerous in order to prescribe expensive treatment. Healthcare workers can similarly reap benefits by manipulating AI systems to classify patients as high-risk and high-cost. These concerns are not hypothetical – an influential publication has already warned about this.
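To make this mechanism concrete, here is a minimal, purely illustrative sketch of the kind of gradient-based adversarial perturbation described above. The pretrained classifier `model`, the image tensor and the label are hypothetical placeholders, not components of any real diagnostic system.

```python
# Illustrative sketch of an FGSM-style adversarial perturbation (PyTorch).
# `model` is a hypothetical pretrained image classifier returning class logits.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` nudged so the classifier is more likely to
    misclassify it, while the change remains imperceptible to a human."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

A perturbation of this kind is visually indistinguishable from the original scan, which is precisely what makes its corrupt use so hard to spot.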
2. Scaling up to affect millions
The second reason to take the risk of corrupt AI seriously is its potential to increase the scale of damage caused by an act of corruption. If you bribe a person, you might influence 100 people; if you corrupt an algorithm, you can affect millions.
"Algorithmic capture" describes how AI systems can be manipulated to systematically favour a specific group. For example, tweaking the code of algorithms used in electronic procurement or fraud detection programmes can steer lucrative public contracts to cronies or conceal wrongdoing by certain well-connected entities. While bribing an individual is usually about breaking the rules of the game to get illicit special treatment, corrupting an algorithm by bribing its developer or manipulating its code changes the rules of the game entirely. If an AI system is distorted to allocate resources in a particular way – such as licenses, permits or tax breaks – a new corrupt “rule” can be embedded into the entire system.
3. Fewer people to blow the whistle
The third reason is that replacing humans with AI in public administration reduces reporting and whistleblowing potential. When decision-making authority shifts towards AI, there are fewer people involved who could report instances of corruption. Moreover, humans working in settings where algorithms do the policing and reporting might receive less training, and thereby lose the skills and knowledge needed to detect and report cases of corruption.
4. Secret code and concealed corruption
The final risk factor is opacity. When AI systems are implemented without involving citizens, and neither code nor training data is disclosed, the threat of corrupt abuse of these systems is higher. Investigative efforts have, for example, documented biases in face detection algorithms as well as in AI systems used for hiring decisions.
Suppose the people developing and implementing such systems want to intentionally encode biases that favour certain demographic groups on a systemic level. In that case, the secrecy of code and data makes such deliberate abuse of algorithms very difficult to detect reliably. As most AI tools are developed by the private sector rather than state entities, reluctance to disclose commercially sensitive information, such as training data and underlying code, is widespread and hinders the auditing of these algorithms.
In authoritarian regimes marked by a weak rule of law, even AI systems created to curb corruption can be abused for corrupt purposes. Take, for instance, the 'Zero Trust' project implemented by the Chinese government to identify corruption among its workforce of over 60 million public officials by letting AI algorithms cross-reference 150 databases, including public officials' bank statements, property transfers and private purchases. While nominally intended to raise red flags that could indicate corrupt behaviour, those who control this kind of digital surveillance infrastructure can easily abuse it to advance their narrow private interests or further their political agenda.
What can be done?
As ever broader swathes of our lives become regulated by AI, what safeguards can be put in place to ensure that we are not exposed to illicit – and often undetectable – abuses of power? Besides general suggestions like strengthening the rule of law, arguably the most promising countermeasure is facilitating checks and balances, ideally as an integral part of the development and deployment process.
One concrete challenge here lies in enforcement. How can private and public companies be forced to submit to oversight processes that may involve outsiders?
An important step would be to establish transparency regulations that require code and data to be shared responsibly. Privacy can be safeguarded by publishing data in a masked form: techniques such as differential privacy add carefully calibrated noise so that no individual can be singled out, while the data can still be meaningfully analysed. By making code and data accessible, such transparent digital infrastructure enables independent data scientists to audit the algorithms.
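As a minimal sketch of what such masking can look like, the snippet below applies the standard Laplace mechanism from differential privacy to a simple count query; the dataset and the choice of the privacy parameter epsilon are hypothetical.

```python
# Minimal sketch of the Laplace mechanism from differential privacy.
import numpy as np

def private_count(records, epsilon: float = 1.0) -> float:
    """Release the number of records with noise calibrated to sensitivity 1,
    so the presence or absence of any single record cannot be inferred."""
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return len(records) + noise

# Example: a smaller epsilon means more noise and stronger privacy.
print(private_count(range(1200), epsilon=0.5))
```

Releasing statistics in this form lets outside auditors work with the data without exposing the individuals behind it.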
And it’s crucial that everyone has access, not just state authorities. Involving civil society, academics and other citizens in the development, deployment and improvement of AI systems is key – because oversight in public administration is vital to ensure these tools serve the public interest.