
Opinion
The human factor: Culture's role in AI evolution

"The relationship between culture and AI development is undeniably complex and vital. Understanding cultural impacts is imperative for developing ethical, practical, and widely accepted AI technologies," writes Arona Maskil

If artificial intelligence aspires to benefit all of humanity, how can it achieve this if it is created and shaped by only a specific segment of humanity? How can artificial intelligence be expected to offer optimal solutions for our most critical global challenges when it lacks an understanding of the diverse impacts of its solutions and is developed with only a partial perspective of humanity in mind?
Imagine an artificial intelligence built from data models assembled only from select cultures chosen by specific criteria. These cultures reflect particular perspectives, and the individuals contributing the data may not be fully aware of their own cultural biases, attitudes, and beliefs. How would the decisions made by this AI differ from those of an AI built from data gathered from a different culture, reflecting a different perspective, and contributed by individuals fully aware of their own cultural biases, attitudes, and beliefs?
Founder of TrainingCQ Arona Maskil (Photo: Alona Art Photography)
AI can transform industries, from healthcare and finance to transportation and entertainment. However, it is crucial not to get carried away by the excitement and instead to critically evaluate the potential risks and challenges associated with AI adoption. Much has been discussed lately about AI's ethical and societal implications, including concerns about job displacement, biases in algorithms, privacy issues, and the potential for misuse of AI technologies.
Bias in Algorithms
Many AI models are developed with a focus on technical performance metrics rather than cultural sensitivity or inclusivity. AI algorithms are trained on datasets that may not fully represent the diversity of human experiences and cultures, and if the training data predominantly reflects specific demographics or cultural perspectives, the system may not adequately understand or account for cultural differences. Even when developers attempt to mitigate biases, unintentional biases can still be encoded in the algorithms or the training data, leading to disparities in how the AI system interacts with different cultural groups, as the brief sketch below illustrates. Lastly, the teams responsible for developing AI systems may lack diversity in cultural backgrounds and perspectives, which can result in oversights or blind spots when considering the impact of AI on different cultures.
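To make the point concrete, here is a minimal, hypothetical sketch in Python, with invented groups, predictions, and labels, of one common way such disparities are surfaced: comparing a model's error rate across cultural or demographic groups.

```python
# A minimal sketch (hypothetical data and model outputs) of how encoded bias
# can be surfaced: compare error rates across cultural/demographic groups.
from collections import defaultdict

# Toy labelled examples: (group, model_prediction, true_label) -- illustrative only.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

errors = defaultdict(lambda: [0, 0])  # group -> [error_count, total_count]
for group, predicted, actual in records:
    errors[group][0] += int(predicted != actual)
    errors[group][1] += 1

for group, (wrong, total) in errors.items():
    print(f"{group}: error rate {wrong / total:.2f} over {total} examples")
# A large gap between groups suggests the training data or model under-serves one of them.
```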
Complexity of Cultural Factors
The complexity of cultural differences poses a significant challenge to integrating them effectively into AI systems. Nuances in language, social norms, and historical context are intricate and not easily quantified or programmed, making them difficult to capture and incorporate into AI systems.
According to Techopedia's 2024 ranking of the 10 countries leading in AI research and technology, the USA comes first. MacroPolo found that almost 60% of "top-tier" AI researchers work for American universities and companies, and Mirae Asset reports that the country has raised $249 billion in private funding.
The next most significant contributor to AI research is China, with 11% of top-tier AI researchers, 232 AI-related investments in 2023, and, according to Mirae Asset, $95 billion in private investment raised between 2022 and 2023.
The UK comes in third and has remained one of the leading contributors to the AI race. In fact, according to the International Trade Administration (ITA), the UK is the third largest AI market in the world after the USA and China, with a current valuation of $21 billion, which it estimates will reach $1 trillion by 2035.
Techopedia placed Israel fourth because its local tech scene has established itself at the forefront of AI development. Between 2013 and 2022, according to Mirae Asset, the country attracted $11 billion in private investment, the fourth-highest total in the world.
The global AI ecosystem is competing to establish standards and define the regulations that will govern AI locally and internationally. There is a legitimate concern about cultural bias in AI models developed in specific regions that do not represent global cultural diversity. All else being equal, AI developed in a particular location will naturally be best suited to serve the needs of that location. AI from MIT exudes an unmistakably "American" essence, while AI from the British ecosystem radiates a distinctly "properly English" aura. This highlights the cultural influences on AI development and raises questions about the extent to which it is truly a globally collaborative, shared enterprise.
To understand the impact of culture on AI development, we need to consider the following aspects:
  • Values and Ethical Standards - Different cultures have varying ethical norms and values that impact AI development and deployment. Western cultures prioritize individual privacy and data protection, leading to regulations like the European GDPR. In contrast, some Asian cultures prioritize collective benefits over personal privacy, influencing more relaxed data-sharing practices.
  • Regulatory Environment - Cultural attitudes toward regulation and government intervention can influence AI's legal framework. For instance, the United States may prefer a more market-driven approach with minimal government oversight, promoting rapid innovation but possibly lacking sufficient regulation. Conversely, countries with robust regulatory traditions, like Germany, may enforce more comprehensive guidelines to ensure ethical AI deployment.
  • User Interaction and Design Preferences - Cultural differences impact how people engage with technology. For example, AI systems may be programmed to offer clear and direct responses in cultures that prioritize direct communication. On the other hand, in cultures that value subtlety and indirect communication, AI interfaces might be designed to be more nuanced and context-sensitive.
  • Adoption and Trust - Acceptance and trust in AI technologies vary across cultures. AI adoption might be rapid and widespread in societies with high levels of trust in technology and government. In contrast, there may be greater resistance in cultures with skepticism towards new technologies, necessitating more robust efforts to build public trust.
  • Language and Communication - Language plays a critical role in AI, especially in natural language processing (NLP). Creating AI systems that can comprehend and analyze various languages and dialects requires cultural and linguistic understanding. Moreover, idiomatic expressions, humor, and context can differ significantly across cultures, posing a challenge for developing universally effective AI communication tools; a simple illustration of this coverage problem follows this list.
  • Collaborative Practices - Different cultural norms regarding collaboration and competition can influence the management of AI development projects. Some cultures prioritize collaborative efforts and knowledge sharing, which can speed up innovation. In contrast, others may emphasize competitive advantage, potentially resulting in isolated developments and slower overall progress.
  • Global Influence and Diversity - Cultural diversity within AI development teams can lead to the creation of more innovative and inclusive technologies. A diverse team brings a wide range of perspectives and experiences, which can help identify and mitigate biases, understand user needs across different cultures, and create AI systems that are more globally applicable.
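As an illustration of the language point above, here is a minimal, hypothetical sketch of a coverage check that compares the makeup of an NLP training corpus with the languages of the intended user base. The language shares are invented for illustration, not real statistics.

```python
# A minimal sketch (hypothetical corpus and user statistics) of checking whether
# an NLP training corpus covers the languages and dialects of the intended users.
corpus_share = {"en": 0.78, "zh": 0.10, "es": 0.06, "he": 0.01, "ar": 0.01}       # assumed corpus mix
target_user_share = {"en": 0.30, "zh": 0.25, "es": 0.20, "he": 0.10, "ar": 0.15}  # assumed user base

for lang, needed in sorted(target_user_share.items(), key=lambda kv: -kv[1]):
    have = corpus_share.get(lang, 0.0)
    flag = "UNDER-REPRESENTED" if have < needed / 2 else "ok"
    print(f"{lang}: corpus {have:.0%} vs users {needed:.0%} -> {flag}")
# Languages flagged here are likely to get weaker handling of idioms, humor, and context.
```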
To summarize, the relationship between culture and AI development is undeniably complex and vital. Understanding cultural impacts is imperative for developing ethical, practical, and widely accepted AI technologies. Integrating cultural considerations is essential for creating technically advanced, socially responsible, and culturally sensitive AI systems. This demands a concerted effort to promote diversity, enhance data collection practices, and effectively mitigate biases in AI systems. Collaborating across disciplines with experts in cultural studies, anthropology, and sociology is crucial for informing the design of culturally sensitive AI solutions.

Arona Maskil is a corporate cross-cultural business consultant with extensive experience in U.S., Israeli, and global business cultures. Founder of TrainingCQ, she specializes in cross-cultural and virtual communication consultancy and has over 25 years of experience in culturally related issues.