Incidents raising questions on the ethics of AI are increasing
In previous articles, we have explored how artificial intelligence can help us to increase productivity, drive growth in industries (in telecoms and flex companies, for instance), and even accelerate the energy transition. OpenAI’s GPT-4 has even reportedly aced the bar exam. On the flip side, it has also shown that it can be deceitful: it once falsely claimed to be a visually impaired human in order to convince a worker to complete a CAPTCHA verification request.
Aside from incidents such as these, AI also raises questions about fairness, accountability and transparency. Because it is trained on masses of historical data, which may already be skewed towards or against particular groups, AI can deepen structural biases in society. Some AI models also act as black boxes, failing to provide clear explanations for the results they produce. And even where AI-driven decisions are encouraged, it remains unclear who is responsible for those decisions – and whether AI can be held accountable at all.
A McKinsey survey reports that the share of organisations using AI surged from 50% in 2022 to 78% by July 2024, a roughly 1.6-fold increase. It is plausible that greater usage brings a parallel rise in AI-related incidents, and indeed the OECD’s AI Incidents and Hazards Monitor (AIM) reports incidents and hazards doubling over the same period. The majority relate to threats to accountability, transparency and human well-being. While the latter poses a direct risk to the UN’s Sustainable Development Goals, a lack of accountability and transparency among AI actors also risks eroding trust in society and undermining the ability to make informed decisions.
Additionally, over 50% of the cases recorded in AIM occurred in government, security and defence, media and social platforms, and digital security – all high-reach, high-impact sectors. A report by RAND outlines the risks of AI in defence and security, including information manipulation that skews military decisions and threats to the rules-based order, among others. AI systems have also made it easier to undermine digital security and carry out cyber attacks, which could disrupt critical systems such as healthcare, finance and transportation.

User perception is split between opportunities and risks
A cross-country survey on people’s attitudes towards AI, carried out by KPMG and the University of Melbourne, shows that users also worry about the safety of AI. On average, 73% of respondents report having personally observed AI’s benefits, but an even larger 79% are concerned about the risks.
However, significant cross-country differences persist. On average, a larger proportion of respondents in emerging economies report using AI more, being better trained in it, and feeling more confident and optimistic about its use than those in advanced economies. Conversely, respondents from advanced economies appear more worried and less trusting, with a smaller proportion believing that the benefits outweigh the risks. Only a little over one-third of respondents in advanced economies believe that current laws and regulations are sufficient to make the use of AI safe; in emerging markets, more than half believe so.
While more widespread use in emerging economies offers more opportunities, the higher trust in AI and in the current regulations surrounding its safety also implies greater exposure to risk. These countries may need to be particularly careful to ensure that enthusiasm does not come at the cost of safety.

The importance of a strong ethical compass
While global initiatives such as UNESCO’s ‘Recommendation on the Ethics of Artificial Intelligence’, the ‘AI Safety Summit’ and the OECD’s AI Policy Observatory are pushing towards trustworthy artificial intelligence, AI legislation still largely remains within national jurisdictions.
This discussion becomes especially relevant at a time when US President Donald Trump’s latest attempt to target countries with ‘discriminatory’ digital taxes, legislation or regulations (such as the EU) includes threats of higher tariffs and restrictions on US tech exports. Trump has urged countries to “show respect to America and our amazing tech companies”, and we wonder whether it is now also time for a broader discussion about respecting and upholding human rights, safety and security – something that, in an era of rapid technological change, is growing ever more vital.