By Jonathan Rudich • August 15, 2025 • AI Ethics / AI Strategy

The UNESCO Recommendation: A Strategic Blueprint for Trustworthy AI Transformation

Technology alone won't ensure AI success. UNESCO's ethics framework provides a strategic blueprint for building trustworthy AI systems grounded in human rights and governance.


Introduction: The New Competitive Imperative—Moving from AI Adoption to AI Trust

Artificial intelligence (AI) represents a paradigm shift: a general-purpose technology reshaping how we work, interact, and live at a pace unseen since the invention of the printing press nearly six centuries ago. For organizations, the race to harness AI's potential for efficiency, innovation, and growth is well underway.

However, a narrow focus on technological capability alone is a perilous strategy. The next, more durable frontier of competitive advantage lies not in merely adopting AI, but in earning and maintaining stakeholder trust through its responsible and ethical application. Without robust ethical guardrails, AI systems risk becoming powerful vectors for amplifying real-world biases, fueling societal divisions, and threatening fundamental human rights and freedoms. These risks are not abstract; they manifest as profound legal, reputational, and operational liabilities for businesses.

In this high-stakes environment, the era of self-regulation, often prioritizing commercial and geopolitical objectives over people, is definitively ending. A new global consensus is emerging, one that demands the establishment of institutional and legal frameworks to govern AI for the public good. At the forefront of this global movement is the United Nations Educational, Scientific and Cultural Organization (UNESCO). Its Recommendation on the Ethics of Artificial Intelligence, adopted by all 193 Member States in November 2021, stands as the preeminent global framework for building this trust-based advantage. This document is far more than a set of high-minded ideals; it is a strategic blueprint for organizations navigating their AI transformation.


The unanimous adoption of the Recommendation is a powerful leading indicator of future national and regional legislation. The text explicitly recommends that Member States apply its provisions by taking all necessary steps, "including whatever legislative or other measures may be required", to give them effect. This signals an inevitable global tide of regulation.

Organizations that proactively align with this framework today are not simply adopting best practices; they are pre-emptively complying with the future regulatory landscape. This confers a significant first-mover advantage, reducing future compliance costs, minimizing the risk of expensive and disruptive retrofitting of non-compliant systems, and cementing the organization's reputation as a trusted, forward-thinking leader in the digital economy.

Ultimately, embracing the UNESCO framework requires a fundamental cultural shift within an organization. The traditional engineering and product development mindset, often focused on technical feasibility—"Can we build this?"—is no longer sufficient. The Recommendation, by centering its entire structure on human rights, dignity, and well-being, forces a transition to a more critical question: "Should we build this, and if so, how can we do so responsibly?"

This evolution from a purely technical to a socio-technical perspective is the most crucial and demanding aspect of a mature AI transformation. Organizations that successfully navigate this cultural change will not only innovate more responsibly but will also build more resilient products and avoid the kind of high-profile ethical failures that have inflicted lasting damage on other firms. This report provides a comprehensive analysis of the UNESCO Recommendation and a practical guide for leadership to architect this ethical advantage.

1. Decoding the Global Consensus: An Overview of the UNESCO Recommendation on the Ethics of AI


To leverage the UNESCO Recommendation as a strategic asset, leaders must first understand its structure and logic. It is an interconnected system designed to guide action from high-level values down to specific policy implementation.

1.1. A Landmark Achievement: The First Global Standard-Setting Instrument

The Recommendation represents a historic milestone: the first-ever global standard-setting instrument on the ethics of AI. Its adoption by 193 countries establishes a universal normative framework, providing authoritative guidance for policymakers, corporations, and all other stakeholders involved in the AI lifecycle. The document's explicit purpose is to lay the foundations for future regulatory instruments, ensuring that as AI technology evolves, its governance does as well.

Its scope is intentionally comprehensive, addressing the ethical implications of AI across UNESCO's mandate areas of education, science, culture, and communication. Critically, it provides ethical guidance to all "AI actors," a term that encompasses governments, private sector companies, researchers, and civil society, making it directly relevant to the enterprise. To ensure its longevity, the Recommendation adopts a "dynamic understanding of AI," defining it broadly as systems with the capacity to process data in a way that resembles intelligent behavior. This flexible definition prevents the framework from becoming obsolete as technology advances, making it a future-proof guide for long-term strategic planning.

1.2. The Ethical Compass: The Four Core Values

At the heart of the Recommendation are four core values. These serve as the ultimate ethical compass, the fundamental tests against which any AI system or application must be measured. They represent the "why" behind the framework, defining the overarching goals that ethical AI must serve.

  1. Respect, Protection, and Promotion of Human Rights, Fundamental Freedoms, and Human Dignity: This is the non-negotiable cornerstone of the entire framework. It asserts that AI must serve humanity, and that fundamental rights are inviolable.
  2. Environment and Ecosystem Flourishing: This value establishes a commitment to ecological responsibility. AI systems should not harm the environment and, where possible, should be leveraged to promote sustainability and address environmental challenges.
  3. Ensuring Diversity and Inclusiveness: The framework demands that AI actively promotes diversity and inclusion. Its benefits must be shared equitably across all populations, and systems must be designed to avoid perpetuating or exacerbating discriminatory biases against any group.
  4. Living in Peaceful, Just, and Interconnected Societies: This value underscores the role of AI in fostering social harmony, justice, and equity, ensuring that the technology contributes to a more stable and interconnected world.
Figure 1: The Four Core Values of the UNESCO AI Ethics Framework

1.3. The Foundational Pillars: The Ten Guiding Principles

If the values are the "why," the ten guiding principles are the "what." They translate the abstract values into actionable, human-rights-centered guidelines for AI governance and system design. These principles define the essential attributes of a trustworthy AI system.

  1. Proportionality and Do No Harm: The use of AI must be appropriate and necessary to achieve a legitimate aim. It must not venture into excessive or dangerous applications that go beyond the stated purpose. This requires rigorous risk assessment to prevent harm. For businesses, this principle guards against "technology for technology's sake," ensuring that AI solutions are fit-for-purpose and do not introduce disproportionate risk.
  2. Safety and Security: AI actors must proactively identify, address, and mitigate unwanted harms (safety risks) and vulnerabilities to attack (security risks) throughout the entire AI lifecycle. This is a core function of enterprise risk management.
  3. Fairness and Non-Discrimination: AI systems must be designed and deployed to actively promote social justice, fairness, and non-discrimination. This involves tackling digital divides and minimizing bias to ensure inclusive access to AI's benefits. This is critical for legal compliance, brand reputation, and market access.
  4. Sustainability: The full societal, cultural, economic, and environmental impacts of AI systems must be holistically assessed and managed for long-term sustainability, aligning with goals like the UN's Sustainable Development Goals. This directly connects AI governance to broader corporate ESG (Environmental, Social, and Governance) objectives.
  5. Right to Privacy and Data Protection: Privacy, a fundamental human right, must be protected and promoted throughout the AI lifecycle. This necessitates the establishment of adequate data protection frameworks and robust data governance mechanisms. This is a foundational requirement for legal compliance with regulations like the GDPR and for building customer trust.
  6. Human Oversight and Determination: Member States and organizations must ensure that AI systems do not displace ultimate human responsibility and accountability. Humans must always retain the ability to oversee, question, override, or shut down an AI system, especially in high-stakes situations. This principle is a crucial safeguard against catastrophic failure and loss of control.
  7. Transparency and Explainability (T&E): The ethical deployment of AI depends on stakeholders being able to understand how systems operate and why they produce certain outcomes. The level of T&E should be appropriate to the context, as there can be tensions between transparency and other principles like privacy or security. T&E is the bedrock of accountability and trust.
  8. Responsibility and Accountability: Clear accountability mechanisms must be established. AI actors are responsible and accountable for the outcomes of their systems. This requires auditable and traceable systems, along with mechanisms for oversight and due diligence.
  9. Awareness and Literacy: Public and employee understanding of AI technologies and data should be promoted through accessible education and training. An informed workforce and citizenry are the first line of defense against the misuse of AI.
  10. Multi-stakeholder and Adaptive Governance & Collaboration: AI governance must be inclusive, involving diverse stakeholders from government, the private sector, academia, and civil society. Furthermore, governance structures must be flexible and adaptive to keep pace with rapid technological development. This approach prevents organizational groupthink and ensures the continued relevance of ethical frameworks.

1.4. From Principles to Practice: The Eleven Policy Action Areas

What makes the UNESCO Recommendation exceptionally applicable and practical are its eleven extensive Policy Action Areas. These areas represent the "how" and "where" of implementation, providing concrete domains in which organizations and governments can translate the values and principles into tangible action. For businesses embarking on AI transformation, several of these areas (notably Ethical Impact Assessment, Data Policy, Economy and Labour, and Environment and Ecosystems) are of immediate strategic importance.

It is essential to view these three components—Values, Principles, and Policy Areas—not as a menu of disconnected options but as a deeply integrated and logical system:

The Values (the "Why") establish the ultimate ethical purpose.

The Principles (the "What") define the necessary characteristics of a system that can achieve that purpose.

The Policy Areas (the "How" and "Where") provide the concrete domains for implementing the principles and measuring their adherence against the values.

For instance, the Value of "Ensuring Diversity and Inclusiveness" is operationalized through the Principle of "Fairness and Non-Discrimination." This principle is then applied within the Policy Area of "Economy and Labor" (e.g., in an AI-powered hiring tool) and its potential risks are proactively evaluated using an "Ethical Impact Assessment."

Understanding this nested, hierarchical logic is the key to implementing the framework holistically and effectively, rather than through a piecemeal approach that is destined to fail.

2. The Strategic Value Proposition: Why the UNESCO Framework is a Non-Negotiable for Your Business


While the ethical case for responsible AI is clear, the business case is equally compelling. Adopting the UNESCO Recommendation is not an act of corporate altruism; it is a strategic imperative that de-risks investment, builds tangible assets of trust, and positions an organization for long-term, sustainable success in the AI era.

2.1. De-Risking Your AI Investment: Mitigating Legal, Reputational, and Operational Peril

The development and deployment of AI systems introduce a new spectrum of significant risks. The UNESCO framework provides a comprehensive and globally endorsed model for identifying, assessing, and systematically mitigating these risks throughout the AI lifecycle.

2.2. Building an Unshakeable Foundation of Trust with Customers, Employees, and Regulators

In the digital economy, trust is not a soft metric; it is a hard asset that drives customer loyalty, employee engagement, investor confidence, and regulatory goodwill. The UNESCO framework is a practical blueprint for architecting this trust at every level of the organization.

2.3. The UNESCO Business Council: A Platform for Collaboration and Co-creation

Crucially, organizations are not expected to undertake this journey in isolation. UNESCO has actively engaged the private sector by establishing the UNESCO Business Council for the Ethics of AI, a vital platform for collaboration, exchanging best practices, and collectively shaping the future of AI governance.

Co-chaired by industry leaders such as Microsoft and Telefonica, the Council provides a forum for companies to work directly with UNESCO on the practical implementation of the Recommendation. Its activities include co-designing and refining critical tools like the Ethical Impact Assessment (EIA), providing industry insights to inform the development of intelligent regional regulations, and fostering a competitive environment where ethical practices are a shared standard. Membership is open to any private sector entity willing to endorse the Recommendation and actively participate in its mission. This initiative provides an unparalleled opportunity for businesses to not only learn from their peers but also to contribute to the very standards by which they will be governed.

The existence of these practical implementation tools and collaborative bodies underscores a critical strategic point. It is no longer sufficient for a company to simply publish a set of internal ethics principles and declare its commitment to responsible AI. The new, higher standard of care is the ability to demonstrate adherence to a comprehensive, global framework. This means moving beyond claims to evidence. The organizations that will win the trust of stakeholders are those that can "show their work"—by conducting and documenting Ethical Impact Assessments, by participating in multi-stakeholder dialogues like the Business Council, and by being transparent about their governance processes and the steps they are taking to align with global norms. This demonstrable commitment to ethical practice is what builds deep, defensible, and lasting trust.

3. The Implementation Blueprint: Architecting an Enterprise AI Governance Framework


Translating the UNESCO Recommendation from principle to practice requires a deliberate and structured approach to governance. An effective AI governance framework is not a single policy document but a holistic system encompassing people, processes, and technology, designed to embed ethical considerations into the very fabric of the organization.

3.1. The People Pillar: Establishing Human-Centric Oversight


Effective AI governance is, at its core, a human endeavor. Technology alone cannot ensure ethical outcomes. Success depends on clear leadership, dedicated and empowered oversight bodies, and a well-educated workforce.

3.1.1. The Role of Leadership in Cultivating an Ethical AI Culture


The ultimate responsibility for an organization's ethical posture rests with its senior leadership. The CEO and C-suite must set a clear and unambiguous tone from the top, championing the adoption of the AI ethics framework as a strategic priority. This goes beyond mere endorsement. Leadership must actively invest in the necessary resources, including comprehensive training programs; establish and enforce clear internal policies; and, most importantly, foster a culture of psychological safety and open dialogue.

Employees at all levels must feel empowered to raise ethical questions and dilemmas without fear of reprisal. This commitment from leadership is the essential catalyst for embedding ethics into the corporate DNA and moving from a culture of compliance to one of conscience.

3.1.2. Chartering Your AI Ethics Committee (or Review Board)

The central operational hub for AI governance is a dedicated AI Ethics Committee or Review Board. This body is not a philosophical debating society; it is a critical risk management function with a clear mandate to provide strategic oversight, review high-impact AI projects, and ensure all AI initiatives align with the organization's ethical principles and policies.

3.1.3. Building Organizational Capacity: A Strategy for Enterprise-Wide AI Ethics Training

An empowered committee and a strong leadership tone are necessary but not sufficient. The entire organization must be equipped with the knowledge to use AI responsibly. A comprehensive training strategy is therefore a critical pillar of governance.

3.2. The Process Pillar: Embedding Ethics into the AI Lifecycle

The second pillar of governance involves creating formal processes and policies that embed ethical considerations into every stage of the AI lifecycle, from ideation and data collection to deployment, monitoring, and eventual retirement.

3.2.1. Proactive Risk Management with the Ethical Impact Assessment (EIA)

The Ethical Impact Assessment (EIA) is a core procedural tool mandated by the UNESCO Recommendation. It is a structured, proactive process designed to force a systematic consideration of a potential AI project's impact before it is built and deployed. The EIA's purpose is to identify, assess, and plan mitigation for potential negative impacts on human rights, fairness, privacy, the environment, and the other core values of the framework. Organizations should adopt or adapt UNESCO's formal EIA methodology and make its completion a mandatory gateway for any new AI project, particularly those classified as high-risk. The EIA should not be treated as a one-time, check-the-box exercise but as a living document that is revisited and updated as the system evolves and new impacts are understood.

3.2.2. Implementing Robust Data Governance for Privacy and Security

Since data is the lifeblood of AI, its ethical and secure management is a non-negotiable foundation of AI governance. A robust data governance program is essential for ensuring the quality, integrity, privacy, and security of the data used to train and operate AI systems.

3.2.3. Crafting and Enforcing an Internal AI Ethics Policy

While the UNESCO framework provides the "why" and "what," a formal internal AI Ethics Policy provides the "how" for employees on a day-to-day basis. This document translates the high-level principles into concrete, enforceable rules of conduct.

A common fear among business leaders is that this level of governance will stifle the speed and creativity essential for innovation. However, this perspective misreads the nature of modern technological development. A well-designed AI governance framework does not act as a brake but as a set of guardrails that enables and accelerates responsible innovation. By providing development teams with clear ethical boundaries, risk assessment processes, and data handling protocols, governance gives them the confidence and psychological safety to experiment and build safely. It prevents costly ethical missteps and technical debt that can derail projects late in their lifecycle, forcing expensive redesigns or outright cancellation. By building stakeholder trust from the outset, it also creates greater organizational buy-in for deploying new AI systems.

Therefore, leaders should reframe AI governance not as a bureaucratic cost center, but as a strategic investment that de-risks, accelerates, and improves the outcomes of the entire AI portfolio.

4. Deep Dive—Operationalizing Key Principles in High-Stakes Environments

While all ten UNESCO principles are important, some present greater practical challenges for organizations. This section provides a granular, operational deep dive into two of the most critical and complex principles: Transparency and Explainability, and Fairness and Non-Discrimination, using high-stakes business contexts as case studies.

4.1. Illuminating the Black Box: A Guide to Achieving Meaningful Transparency and Explainability (T&E)


The principles of Transparency and Explainability (T&E) are the bedrock of trust and accountability in AI systems. Without them, users, customers, and regulators are left in the dark, unable to understand or challenge algorithmic decisions. For businesses, mastering T&E is not merely a technical challenge; it is a fundamental communication and trust-building imperative.

Distinguishing the Concepts

While often used interchangeably, transparency and explainability are distinct but related concepts. Transparency concerns disclosure: stakeholders should be able to know that an AI system is in use and to access appropriate information about how it was developed, trained, and deployed. Explainability goes a step further: it requires that the reasoning behind a system's individual outputs can be understood, so that a specific decision can be traced to the factors that drove it.

The Black Box Problem

Achieving meaningful explainability is one of the most significant challenges in modern AI.

Many of the most powerful and accurate models, particularly those based on deep learning and neural networks, are inherently complex and opaque. Their internal decision-making processes can involve millions or billions of parameters interacting in non-linear ways, making it exceedingly difficult even for their creators to trace exactly why a specific input led to a specific output. This "black box" nature creates a fundamental tension between model performance and interpretability, posing a major ethical and operational hurdle.

Practical Implementation for Customer-Facing Models

Despite these challenges, organizations can and must take concrete steps to deliver meaningful T&E, especially in high-stakes, customer-facing scenarios.

  1. Proactive and Clear Notification: The first step is always transparency. Organizations must establish an unwavering policy to clearly and simply inform users whenever they are interacting with an AI system or when a decision that affects their rights or opportunities was significantly informed by one. This includes customer service chatbots, personalized recommendation engines, and automated application screening processes.
  2. Plain-Language Explanations for High-Stakes Decisions: For decisions with significant consequences—such as the denial of a loan application, an insurance claim, or a job interview—a simple notification is insufficient. The organization must be prepared to provide a plain-language explanation of the decision. This explanation should avoid technical jargon and focus on the primary factors that influenced the outcome. For example, the fintech company ZestFinance, which uses AI for creditworthiness assessment, provides applicants with detailed reasons for its lending decisions, a practice that both builds customer trust and empowers applicants to improve their financial standing.
  3. Building Trust Through Control and Recourse: Meaningful T&E is not just about one-way information delivery; it is about empowering the individual. The public should have the opportunity to seek clarification from a responsible human agent and understand the basis of any decision that affects them. Providing clear channels for appeal, review, and correction of AI-driven decisions is essential for building trust and ensuring accountability. This level of transparency and engagement reduces customer skepticism, enhances satisfaction, and ultimately drives wider and more confident adoption of AI-powered services.
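The plain-language explanation practice in step 2 can be sketched in a few lines of Python. Everything here is hypothetical: the feature names, the wording templates, and the contribution scores are invented for illustration. A production system would derive the contributions from an explainability method such as SHAP or LIME and have the templates vetted by legal and communications teams.

```python
# Hypothetical mapping from model features to jargon-free reason sentences.
REASON_TEMPLATES = {
    "debt_to_income": "Your monthly debt is high relative to your income.",
    "credit_history_length": "Your credit history is shorter than we typically require.",
    "recent_delinquencies": "Recent missed payments affected the decision.",
}

def explain_decision(contributions: dict[str, float], top_n: int = 2) -> list[str]:
    """Translate the factors that most pushed a decision toward denial
    (negative contributions) into plain-language sentences."""
    negative = [(f, c) for f, c in contributions.items() if c < 0]
    negative.sort(key=lambda fc: fc[1])  # most negative (most harmful) first
    return [REASON_TEMPLATES.get(f, f"The factor '{f}' affected the decision.")
            for f, _ in negative[:top_n]]

# Illustrative per-feature contributions for one denied loan application.
reasons = explain_decision({
    "debt_to_income": -0.42,
    "credit_history_length": -0.15,
    "income": +0.30,
})
for sentence in reasons:
    print(sentence)
```

Pairing each explanation with a channel for human review and appeal, as step 3 describes, is what turns this from one-way disclosure into genuine recourse.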

4.2. Upholding Fairness and Non-Discrimination: A Case Study in AI-Powered Recruitment

Hiring is one of the most high-stakes and ethically fraught domains for AI application. An individual's access to employment is a fundamental economic right, and the use of biased AI tools in recruitment can perpetuate systemic inequalities, damage brand reputation, and create significant legal liability under anti-discrimination laws. This makes AI-powered hiring a critical test case for an organization's commitment to the principle of Fairness and Non-Discrimination.

The Root Causes of Algorithmic Bias in Hiring

Understanding how bias enters hiring algorithms is the first step toward mitigating it. The causes are often subtle and systemic: historical training data that encodes past discriminatory hiring decisions, proxy variables (such as postal code or alma mater) that correlate with protected attributes, and evaluation data that under-represents certain candidate groups.

A Multi-Pronged Mitigation Strategy

Given the complexity of bias, there is no single silver-bullet solution. Organizations must adopt a multi-pronged strategy that combines technical interventions with robust human oversight.
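One widely used technical check that such a strategy can include is the disparate impact ratio, the basis of the US EEOC's "four-fifths rule." The sketch below is a minimal Python version; the group labels and screening counts are invented for illustration, and the 0.8 threshold is a screening heuristic that flags a system for human review, not a legal determination of discrimination.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
    """Ratio of the lowest group selection rate to the highest.
    The four-fifths rule flags ratios below 0.8 for review."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Illustrative screening-tool audit numbers (assumed, not real data).
audit = {"group_a": (50, 100), "group_b": (30, 100)}
ratio = disparate_impact_ratio(audit)
print(f"{ratio:.2f}")  # 0.60 -- below 0.8, so the tool warrants human review
```

A check like this belongs in the continuous monitoring loop described in the roadmap below, run on each model version and on live outcomes, with results documented in the project's Ethical Impact Assessment.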

5. Charting Your Course: A Phased Roadmap to Ethical AI Maturity


Implementing the UNESCO framework is a strategic journey, not a one-time project. It requires a phased, systematic approach to build foundational capabilities, develop and integrate governance structures, and foster a culture of continuous improvement. A phased roadmap allows an organization to manage this complex transformation in a deliberate and achievable manner, ensuring that each stage builds upon the last.

This approach allows for the systematic development of organizational capacity, the thoughtful integration of new processes, and the cultivation of an ethical AI culture over time. The following table and outline synthesize the guidance from the UNESCO Recommendation and best practices in corporate governance into a tangible, actionable plan. It is designed to be used by leadership to structure their implementation efforts, assign responsibilities, allocate resources, and track progress toward achieving ethical AI maturity.

Table 2: Efforts required across the AI Ethics implementation phases

Phase 1: Foundation & Assessment (Months 1–3)

Key Actions:
  1. Secure leadership buy-in; C-suite formally endorses UNESCO framework as strategic priority.
  2. Form a cross-functional steering committee with executive sponsorship.
  3. Conduct AI inventory and high-level risk assessment (focus on critical areas like hiring, customer data).
  4. Launch mandatory enterprise-wide AI literacy training covering fundamentals and UNESCO principles.

Key Stakeholders: C-Suite, Board of Directors, Chief Legal Officer, Chief Information/Technology Officer, Head of Risk Management.

Required Resources:
  1. Executive sponsorship and strategic communication.
  2. Initial budget for training platforms and content.
  3. Dedicated time from senior leaders for steering committee work.

Success Metrics:
  1. AI risk register created and prioritized.
  2. Steering committee charter approved.
  3. ≥80% of employees complete foundational AI literacy training.

Phase 2: Framework Development & Integration (Months 4–9)

Key Actions:
  1. Formally charter the AI Ethics Committee (clear authority, membership, procedures).
  2. Draft and ratify internal AI Ethics Policy with stakeholder feedback.
  3. Develop and pilot Ethical Impact Assessment (EIA) process on 1–2 medium/high-risk projects.
  4. Develop and deploy advanced, role-specific training for key groups (data scientists, HR, product managers).

Key Stakeholders: AI Ethics Committee, HR, Legal & Compliance, Data Science & Engineering, Product Management, Internal Communications.

Required Resources:
  1. Time allocation for committee duties.
  2. Legal/expert review of policy and EIA process.
  3. Investment in learning management systems for role-specific training.

Success Metrics:
  1. AI Ethics Committee operational with regular meetings.
  2. Internal AI Ethics Policy published and accessible.
  3. At least two EIAs completed and reviewed.
  4. Key high-risk department personnel complete role-specific training.

Phase 3: Scaling & Continuous Improvement (Months 10+)

Key Actions:
  1. Make EIAs mandatory for all new high-risk AI projects.
  2. Implement continuous monitoring/auditing for deployed AI (drift, bias, security).
  3. Establish feedback channels for reporting ethical concerns; update policy and EIA regularly.
  4. Engage with external AI ethics ecosystem (e.g., UNESCO Business Council).

Key Stakeholders: All Business Units, Internal Audit, External Relations, Procurement, Customer Support.

Required Resources:
  1. Budget for monitoring/auditing tools.
  2. Resources for managing feedback and policy updates.
  3. Resources for external engagement (fees, travel).

Success Metrics:
  1. 100% of new high-risk AI systems have documented/approved EIAs.
  2. Quarterly/semi-annual audit reports produced and reviewed by leadership.
  3. AI Ethics Policy reviewed/updated at least annually.
  4. Organization joins or actively participates in an external AI ethics body.

Conclusion: The Ethical Advantage in the Age of AI


The relentless advance of artificial intelligence has fundamentally altered the strategic landscape for every organization. It has become unequivocally clear that ethical governance is no longer an academic discussion or a peripheral corporate social responsibility initiative; it is an urgent and central strategic priority.

The risks associated with ungoverned AI—legal jeopardy, reputational ruin, and operational failure—are too significant to ignore. In this complex and rapidly evolving environment, the UNESCO Recommendation on the Ethics of Artificial Intelligence offers a comprehensive, practical, and globally endorsed blueprint for navigating the path forward.

Organizations that choose to proactively and holistically embrace this framework will achieve far more than mere risk mitigation. By embedding the principles of fairness, transparency, accountability, and human dignity into their technology and culture, they will build a profound and durable foundation of trust with their customers, employees, investors, and regulators. This trust is the ultimate competitive advantage in the digital age. It fosters responsible innovation, unlocks greater value from AI investments, and builds the organizational resilience needed to thrive amidst technological disruption.

The journey to trustworthy AI is not simple.

It is a sustained commitment to embedding human values into the technological core of the enterprise. It is a challenge that demands decisive leadership, a shift in corporate culture, and the deliberate construction of new governance processes. The path requires investment, diligence, and a willingness to prioritize long-term integrity over short-term expediency. Yet, the rewards for those who undertake this journey are immense. The resilience, loyalty, and sustainable growth that stem from a deep commitment to ethical AI will define the leading and most respected organizations of the next decade.

Sources:

  1. https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
  2. https://www.unesco.org/en/articles/unescos-recommendation-ethics-artificial-intelligence-key-facts
  3. https://www.caidp.org/events/unesco/
  4. https://www.unesco.org/en/artificial-intelligence
  5. https://www.dataguidance.com/opinion/international-unesco-recommendation-ethics
  6. https://unesdoc.unesco.org/ark:/48223/pf0000376713
  7. https://aiexponent.com/navigating-global-ai-ethics-a-practical-guide-to-the-unesco-recommendation/
  8. https://www.markovml.com/blog/ethical-ai
  9. https://www.ibm.com/think/topics/ai-governance
  10. https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence
  11. https://www.researchgate.net/publication/374234687_Ten_UNESCO_Recommendations_on_the_Ethics_of_Artificial_Intelligence
  12. https://tsaaro.com/blogs/ai-ethics-on-the-global-stage-a-deep-dive-into-unescos-recommendation-on-the-ethics-of-artificial-intelligence/
  13. https://osf.io/csyux/download
  14. https://unsceb.org/sites/default/files/2022-09/Principles%20for%20the%20Ethical%20Use%20of%20AI%20in%20the%20UN%20System_1.pdf