Responsible AI Governance: Enabling Innovation While Managing Risk
Summary Discussion on Global Trends in AI Governance: Evolving Country Approaches by Sharmista Appaya and Jeremy Ng (World Bank, 2025)
I recently read Global Trends in AI Governance: Evolving Country Approaches by Sharmista Appaya and Jeremy Ng (World Bank, 2025). It’s a dense, 105-page report - more of a mini-book - that maps how governments around the world are trying to govern AI.
The report is packed with takeaways, concrete examples, and practical recommendations. Below is a digestible, structured breakdown of the key insights presented in the mini-book. I found the content to be thought-provoking and I hope you feel the same way after reading. :)
“It took 75 years for fixed telephones to reach 100 million users globally. In contrast, mobile phones achieved this milestone in just 16 years, and the internet took only 7 years. The Apple store took 2 years, and strikingly, ChatGPT reached this number in a mere two months.”
(Appaya and Ng 2025, p. 11)
This quote shows how fast AI is spreading. It is still hard to believe that this tech arrived just a couple of years ago, and now we can hardly fathom any sphere of life without it. But from a public policy POV, this speed demands action. In other words, we need rules/regulations that can keep up - so that eventually AI helps more than it harms. (Ideally, it should not harm at all, but yeah, that’s the kind of world we live in. Sigh)
The Governance Imperative: Innovation with Guardrails
AI could add over $13 trillion to the global economy by 2030 (roughly 1.2 percentage points of additional global GDP growth per year!). It’s reshaping healthcare, education, transport, finance, public services, you name it.
But without oversight, it brings serious risks: bias, privacy breaches, false information, cyber threats, environmental strain, and social fallout.
We need clear governance that manages these risks—without choking off innovation.
Paths to AI Regulation
The report outlines four ways countries are regulating AI - or, since it is still very early days, are thinking about regulating it:
1. Industry Self-Governance
Companies create their own ethical codes and advisory boards (yeah, no thank you, I would not count on it).
Examples: Microsoft Aether Committee, Google AI Principles, IBM’s AI Ethics Board, Bosch Ethical Guidelines, Partnership on AI.
Benefits: Can shape internal practices. Requires little government involvement.
Risks: Often vague and non-binding. No real public oversight. Not enough for high-risk sectors like finance or health. Limited input from the public. Easy for companies to make promises without action (“ethics-washing”). Mostly used by larger companies.
2. Soft Law
Non-binding international guidelines (too lenient imo, but the sad reality is that civil servants often lack the technical depth to craft binding rules well, and clumsy binding regulation could accidentally stifle innovation altogether).
Examples: OECD/G20 AI Principles, UNESCO Recommendations, G7 Principles, UN General Assembly AI Resolution.
Benefits: Can shape national policy, especially when backed by funding or technical help. Helps align countries.
Risks: Not legally binding. High-level and vague. May cause legal confusion.
3. Regulatory Sandboxes
Safe spaces where AI systems can be tested under regulator supervision (I find “sandbox” to be a fancy word for pilot projects, so yeah).
Examples: Colombia’s sandbox on privacy-by-design, Brazil’s AI/data protection pilot, Singapore’s AI Verify toolkit.
Benefits: Allows safe experimentation. Makes use of regulator expertise. Works well in early-stage AI markets.
Risks: Useful only for issues that can be solved through testing. Costly and resource-heavy. Can skew markets or give unfair advantage.
4. Hard Law and Regulation
Formal, binding laws with enforcement and penalties (I have read enough public policy to know that minimizing market failures - adverse selection as well as moral hazard - requires strict enforcement of the law. But just as importantly, that law needs to be well thought out in its design. And we also need money for implementation, monitoring, evaluation, and tracking, which most governments sadly lack).
Examples: EU AI Act, Brazil AI Bill, Chile AI Bill, Council of Europe Framework Convention.
Benefits: Creates clear rules and enforcement. Protects users in high-risk cases. Can ban harmful AI outright.
Risks: Takes time and money to design. Can’t just copy laws from one country to another. Hard to balance long-term flexibility with current protections.
Key Risks That Must Be Managed
This was one of the best sections of the mini-book - do read it in detail if you have time. Below is my summary, though it may not do full justice to the concepts.
1. Bias and Discrimination
Models trained on biased or foreign data can give unfair results. Gaps in digital access make this worse.
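To make this concrete, here is a minimal sketch of the kind of check an auditor might run: comparing a model's positive-prediction rate across demographic groups (demographic parity). Everything in it - the toy predictions, the group labels, the "four-fifths" rule of thumb - is illustrative on my part, not something the report prescribes.

```python
# Minimal sketch: does a model's positive-prediction rate differ across
# demographic groups? All data below is made up for illustration.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Share of positive predictions (1s) per group."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        counts[group] += 1
        positives[group] += pred
    return {g: positives[g] / counts[g] for g in counts}

# Toy loan-approval predictions (1 = approve) for two hypothetical groups.
preds  = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
ratio = min(rates.values()) / max(rates.values())
print(rates)                            # {'A': 0.6, 'B': 0.4}
print(f"disparity ratio: {ratio:.2f}")  # 0.67 - below the common
                                        # "four-fifths" (0.8) rule of thumb
```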
2. Lack of Explainability and Accountability
AI decisions must be explainable. Developers must be held responsible. Standards and audits are essential to keep public trust—especially in high-stakes areas like finance or government.
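As an illustration of what an explainability audit can look like in practice, here is a minimal sketch using permutation importance on synthetic data - one technique among many, and my own example rather than anything the report prescribes.

```python
# Permutation importance: shuffle one input at a time and measure how much
# the model's accuracy drops, revealing which features it actually relies on.
# Synthetic data and a simple model, purely for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {imp:.3f}")
```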
3. Privacy and Data Protection
Training AI requires huge amounts of data. This can lead to mass data collection, surveillance, and misuse. Personal data must be protected by law.
4. Environmental Impact
Large models need enormous computing power, water, and energy. Training them can emit hundreds of tons of CO₂. Even running them (inference) uses a lot of energy—Google says it accounts for 60% of its AI energy use.
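For intuition on where a "hundreds of tons of CO₂" figure can come from, here is a back-of-the-envelope calculation. Every number in it (GPU count, power draw, run length, grid intensity) is an assumption I made up for illustration, not a figure from the report.

```python
# Back-of-the-envelope estimate of training emissions.
# Every input here is an illustrative assumption.
gpus           = 1_000      # accelerators used for training
power_kw       = 0.4        # average draw per accelerator, kW
hours          = 30 * 24    # a hypothetical 30-day training run
pue            = 1.2        # data-center overhead (cooling, etc.)
grid_kgco2_kwh = 0.4        # grid carbon intensity, kg CO2 per kWh

energy_kwh = gpus * power_kw * hours * pue
co2_tonnes = energy_kwh * grid_kgco2_kwh / 1_000

print(f"{energy_kwh:,.0f} kWh, ~{co2_tonnes:,.0f} t CO2")
# ~345,600 kWh and ~138 t CO2 under these assumptions -
# already in the "hundreds of tons" ballpark the report warns about.
```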
5. Labor Market Disruption
AI threatens jobs. The IMF says 40% of jobs in emerging markets, 26% in low-income countries, and 60% in advanced economies are exposed. This could deepen the digital divide.
6. Cybersecurity Threats
AI systems are complex and easy to attack. Open-source models are more prone to misuse, as safety checks can be removed.
7. AI Hallucinations
GenAI can produce outputs that look real but are completely false. This is a growing risk, especially in critical areas like health, law, or education.
8. Geopolitical and Socio-Cultural Risks
AI tools can be used to spread propaganda, misinformation, or even support conflict (e.g., autonomous weapons). They can also erase cultural diversity.
9. IP and Legal Risks
AI models often train on copyrighted or protected data, raising legal issues around fair use and IP law.
10. Psychological and Educational Impact
In schools, AI can lead to overreliance and poor retention. In society, human-AI interaction may affect mental health.
What AI Needs to Work Responsibly
Digital Infrastructure and Compute Power
AI needs strong compute and data systems. Training GPT-3, for example, used 570 GB of data and thousands of GPUs. Inference needs less power but still requires serious resources. However, given the abysmal state of even basic power supply, digital and cellular inclusion, internet speeds, and relevant local content, this remains a distant dream for most developing countries.
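To see why "thousands of GPUs" are needed, here is a rough compute estimate using the common rule of thumb of about 6 FLOPs per parameter per training token. The parameter and token counts are GPT-3's publicly reported figures; the sustained GPU throughput is my own assumption.

```python
# Rough training-compute estimate using the common 6 * params * tokens rule
# of thumb. Parameter/token counts are GPT-3's publicly reported figures;
# the sustained per-GPU throughput is an assumption for illustration.
params = 175e9           # GPT-3 parameters
tokens = 300e9           # training tokens reported for GPT-3
flops  = 6 * params * tokens               # ~3.15e23 FLOPs

gpu_flops_per_s = 30e12  # assumed sustained throughput per GPU (30 TFLOP/s)
gpu_seconds = flops / gpu_flops_per_s
gpu_years   = gpu_seconds / (3600 * 24 * 365)

print(f"{flops:.2e} FLOPs ~ {gpu_years:,.0f} GPU-years at 30 TFLOP/s")
# ~333 GPU-years: hence thousands of GPUs running for weeks or months.
```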
Skilled Talent
Students and workers need training in ML, data science, computer science, data handling, and model deployment. These skills must reach Tier 2 and Tier 3 cities too. This is, again, especially important for developing countries if they do not wish to be left behind. I think, at a personal level too, I need to upskill in these domains if I want to have a future-proof career.
Global Models and Regional Partnerships
70+ countries have AI strategies.
Regional cooperation helps countries scale compute, align on standards, and protect data sovereignty.
Principles for Smart AI Governance
1. Make It Agile
Tech moves fast. Rules must adapt. Early on, the challenge is lack of information. Later, it’s lack of control. This is also known as the Collingridge Dilemma.

2. Make It Collaborative
Governance must involve government, business, academia, and civil society. Trust depends on open participation.
3. Make It Context-Specific
Governance must treat narrow AI and generative AI differently. The former refers to task-specific machine learning, which has been around since well before GPTs and LLMs. The risks and use cases are not the same.
Designing a Strong Governance Process
This is, in effect, the conclusion of the mini-book. The authors lay the process out in 12 steps, but I think 10 are enough for the material they cover. Basically, for public policy to succeed in the realm of generative AI, it needs the following (again, my summary does not do justice to the nuances within each step - please do consider reading this section in full if you have time):
1. Define Objectives
Build trust
Include all users
Protect rights
Support local innovation
2. Set Priorities
Use public input, legal obligations, and strategic needs.
3. Assess Ecosystem Maturity
Look at digital infrastructure, players in the market, talent, and research.
4. Review Laws and Agencies
Understand what exists. See what needs building or upgrading.
5. Allocate Budget
For new agencies or to modernize old ones.
6. Identify Risks
From data collection to deployment—track the full lifecycle.
7. Pick the Right Tools
Use a mix of ethics codes, laws, standards, and sandboxes.
8. Consult People
Bring in citizens, CSOs, private players—especially those affected most.
9. Coordinate Globally
Work with other countries to avoid race-to-the-bottom rules.
10. Implement and Track
Put rules into action. Monitor, adjust, improve.
Final Remarks
Innovation without responsibility can be dangerous. Responsibility without ambition leads nowhere. We need both.
The goal is simple: keep AI working for people.
Also, international cooperation and knowledge sharing are key. AI harms don’t respect borders. A biased model trained in Europe can still do damage in Asia or Africa. Privacy violations from one country can spread worldwide. Without international rules, companies will move their riskiest work to countries with the weakest laws.
Hope you enjoyed reading the blog. That is all for today!
Best,
Ahmad Mobeen