
Learning resource: 
What principles should guide AI when handling topics that involve both human rights and local cultural or legal differences, like LGBTQ rights and women’s rights? Should AI responses change based on the location or culture in which it is used?

Introduction 

Like the internet, artificial intelligence (AI) is designed to transcend the borders of the places where it is developed. With its tremendous potential to transform business operations, government functions and industries around the world, especially following recent progress in generative models, AI is increasingly being adopted by businesses and public services worldwide, influencing the way people work and live every day, everywhere.

As AI companies venture into new markets, they face challenges in regions where local norms and values may contrast starkly with their own. Topics that are controversial in certain societies, such as women's rights and LGBTQ+ rights, are handled by AI models primarily trained on English-language datasets and developed in Western states.

Overlooking or mishandling cultural nuances can hinder user experience, create legal risks, exacerbate contentious issues, and possibly even put people in danger of harm. How far AI should adapt to local cultures is itself a challenging question, however, as international understandings of and approaches to cultural issues vary drastically, and for many issues there are no universally accepted answers. "Localization" can cause AI to mirror and reinforce contextually entrenched biases, since models learn from the data they are trained on. It could also propagate practices accepted in one geographical or cultural context that are shunned by the international community, such as violations of human rights or the mistreatment of certain groups. The question remains: which context, if any, should or could be used as the guiding ‘correct’ context?

This consultation seeks to address this dilemma by drawing on collective wisdom from around the world about the principles that should guide AI's adoption across different cultures and contexts, bringing together a diverse range of voices with a mix of perspectives and experiences.

Current Practices

Popular AI products often lack cultural sensitivity and sophistication. Researchers have found that models like ChatGPT align more closely with American culture and struggle to adapt to other cultural nuances[1]. Some attribute this misalignment to an over-reliance on English-language datasets, which either flatten cultural differences or amplify Western biases. The race to secure first-mover advantage in the global market and to maximise profit can also deter companies from adopting a nuanced approach to AI development and deployment, which involves complex and potentially costly considerations[2].

Beyond technical and commercial considerations, there are moral reasons to be cautious about localization. AI tends to benefit those who control it, and in societies riddled with socio-economic disparities and issues like gender inequality it can worsen already harmful conditions for marginalised communities[3]. In some autocratic states, AI has been utilised to further illiberal political objectives and suppress citizens' rights.

Existing efforts and debates

These potential risks demand a more considered approach to AI development and deployment across different contexts. In recent years, corporations, states and regional bodies (e.g. the EU, UK and US), and international organisations (e.g. the UN and OECD) have formulated principles and guidelines for responsible AI use. These guidelines, however, often remain broad and overlook cultural nuances.

Currently, most discussions and debates occur in Western-centric environments, with participants often adopting the associated frameworks and methods. For one, the human rights-centric approach to AI governance is widely accepted and is referenced in major UN and EU AI documents. This approach seeks to provide universally applicable standards across all cultures[4]. However, it does not recognise the contested nature of human rights or the wide variation in how they are interpreted[5]. Some nations may want access to this technology and the benefits it can provide, but not be comfortable with the social norms or legal practices embedded in it[8]. Therefore, whose standards AI should reflect in the global marketplace remains a critical and pressing question.

For this reason, the practicality of these generalised principles is unclear. For instance, the "human in the loop" guideline, widely accepted in OECD countries, assumes that human judgement can enhance AI accuracy and fairness. Yet research suggests that marginalised communities, already sceptical of human decision-makers because of their experience of marginalisation, do not find this intervention trust-enhancing[6]. In nations like India, where data is historically unreliable due to socio-economic factors, emphasising "data equity" or algorithmic fairness might not yield the desired results, as the underlying data is already skewed[7].

Therefore, addressing AI adoption in diverse cultural contexts is a highly complex issue. It requires stronger and more in-depth cultural engagement by companies in the regions where they operate, with meaningful input from local stakeholders, especially the more marginalised groups among them. If there is any hope of a universally applicable set of norms or rules, then incredibly diverse and representative processes are probably required. Businesses and developers can struggle to devise a comprehensive AI strategy for addressing sensitive topics in a given cultural context whilst maintaining consistency and upholding their own values, and may therefore resort to a more 'reactive' stance, dealing with issues as they arise. Such a persistent crisis-management mode is, however, hardly ideal for handling these delicate situations, especially given the potential reputational and legal ramifications of failures.

While expecting a one-size-fits-all solution may be unrealistic, integrating cultural considerations into AI ethics and governance discussions is an essential starting point.

Some possible controversial topics to consider when taking part in this consultation: 

  1. Gender rights 

  2. LGBTQ+ rights

  3. Indigenous peoples’ rights 

  4. Persecuted minorities' rights

  5. Freedom of expression

  6. Undemocratic practices

  7. Geopolitics: competing narratives on historical issues

  8. Religious norms

  9. Cultural norms

  10. Economic practices

[1] Cao, Y. et al. (2023) Assessing Cross-Cultural Alignment between ChatGPT and Human Societies: An Empirical Study. [online]. Available from: http://arxiv.org/abs/2303.17466 (Accessed 18 August 2023).

[2] Yeung, K. et al. (2019) AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing. [online]. Available from: https://papers.ssrn.com/abstract=3435011 (Accessed 15 August 2023).

[3] Gupta, M. et al. (2022) Questioning Racial and Gender Bias in AI-based Recommendations: Do Espoused National Cultural Values Matter? Information Systems Frontiers. [Online] 24 (5), 1465–1481.

[4] Yeung, K. et al. (2019) AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing. [online]. Available from: https://papers.ssrn.com/abstract=3435011 (Accessed 15 August 2023).

[5] Wong, P.-H. (2020) Cultural Differences as Excuses? Human Rights and Cultural Values in Global Ethics and Governance of AI. Philosophy & Technology. [Online] 33 (4), 705–715.

[6] Lee, M. K. & Rich, K. (2021) ‘Who Is Included in Human Perceptions of AI?: Trust and Perceived Fairness around Healthcare AI and Cultural Mistrust’, in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. [Online]. 6 May 2021 Yokohama Japan: ACM. pp. 1–14. [online]. Available from: https://dl.acm.org/doi/10.1145/3411764.3445570 (Accessed 17 August 2023).

[7] Sambasivan, N. et al. (2021) Re-imagining Algorithmic Fairness in India and Beyond. [online]. Available from: http://arxiv.org/abs/2101.09995 (Accessed 17 August 2023).

[8] Comments from Marius George Linguraru, DPhil MA MSc, Professor of Radiology and Pediatrics at George Washington University

*Special thanks to Dr Marius George Linguraru and the other experts who advised on this learning resource.

Further reading

Agrawal, V. et al. (2023) From OECD to India: Exploring cross-cultural differences in perceived trust, responsibility and reliance of AI and human experts. [online]. Available from: http://arxiv.org/abs/2307.15452 (Accessed 15 August 2023).

Anacleto, J. et al. (2006) ‘Can Common Sense uncover cultural differences in computer applications?’, in Max Bramer (ed.) Artificial Intelligence in Theory and Practice. IFIP International Federation for Information Processing. [Online]. 2006 Boston, MA: Springer US. pp. 1–10. https://link.springer.com/chapter/10.1007/978-0-387-34747-9_1

Gonzalez-Jimenez, H. (2021) How cultural diversity and awareness can create a more ethical AI. LSE Business Review. [online]. Available from: https://blogs.lse.ac.uk/businessreview/2021/06/04/how-cultural-diversity-and-awareness-can-create-a-more-ethical-ai/ (Accessed 15 August 2023).

Ess, C. (2006) Ethical pluralism and global information ethics. Ethics and Information Technology. [Online] 8 (4), 215–226. https://link.springer.com/article/10.1007/s10676-006-9113-3

Gupta, M. et al. (2022) Questioning Racial and Gender Bias in AI-based Recommendations: Do Espoused National Cultural Values Matter? Information Systems Frontiers. [Online] 24 (5), 1465–1481. https://link.springer.com/article/10.1007/s10796-021-10156-2

Lee, K. & Joshi, K. (2020) Understanding the Role of Cultural Context and User Interaction in Artificial Intelligence Based Systems. Journal of Global Information Technology Management. [Online] 23 (3), 171–175. https://www.tandfonline.com/doi/full/10.1080/1097198X.2020.1794131

Lee, M. K. & Rich, K. (2021) ‘Who Is Included in Human Perceptions of AI?: Trust and Perceived Fairness around Healthcare AI and Cultural Mistrust’, in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. [Online]. 6 May 2021 Yokohama Japan: ACM. pp. 1–14. [online]. Available from: https://dl.acm.org/doi/10.1145/3411764.3445570 (Accessed 17 August 2023).

ÓhÉigeartaigh, S. S. et al. (2020) Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance. Philosophy & Technology. [Online] 33 (4), 571–593.

Prabhakaran, V. et al. (2022) Cultural Incongruencies in Artificial Intelligence. https://arxiv.org/abs/2211.13069

Robinson, S. C. (2020) Trust, transparency, and openness: How inclusion of cultural values shapes Nordic national public policy strategies for artificial intelligence (AI). Technology in Society. [Online] 63, 101421. https://www.sciencedirect.com/science/article/pii/S0160791X20303766

Sambasivan, N. et al. (2021) Re-imagining Algorithmic Fairness in India and Beyond. [online]. Available from: http://arxiv.org/abs/2101.09995 (Accessed 17 August 2023).

Wong, P.-H. (2020) Cultural Differences as Excuses? Human Rights and Cultural Values in Global Ethics and Governance of AI. Philosophy & Technology. [Online] 33 (4), 705–715. https://link.springer.com/article/10.1007/s13347-020-00413-8

Yeung, K. et al. (2019) AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing. [online]. Available from: https://papers.ssrn.com/abstract=3435011 (Accessed 15 August 2023).
