
AI 4 who? Can we influence who controls AI technologies?

The Digital Power Shift #2 (Feb 2026)

AI is surfacing assumptions about power and control that digital development has long discussed but in reality has largely avoided confronting. This is not because AI introduces new power dynamics, but because it accelerates and concentrates existing ones to a point where they are more visible and harder to ignore.

Digitalisation and digital transformation have long been treated primarily as ‘a tool’ – something to deploy, adapt, and evaluate. The power shifts that always accompanied them are often treated as side effects or externalities to be managed, or worse, glossed over entirely. That framing was always wrong – the scale, speed, and potential impact of AI on power dynamics now make it untenable.


This article was posted on Feb 10 as my second LinkedIn newsletter (following issue #1 on Shadow AI). Before I start writing on more granular topics, I wanted to step back and frame the broader AI and power terrain I’ll be exploring over the coming months. If you aren’t already signed up, please subscribe here.


Digitalisation is never neutral

Every digital system embeds choices about who decides, what counts, whose priorities are encoded, and who becomes invisible. This happens through mundane technical work, such as which fields exist in a database or which options appear in a dropdown (e.g. offering “Male / Female” as the only gender options excludes sections of society – often unintentionally).
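To make that concrete, here is a minimal sketch in Python – the names are my own illustration, not from any real system – of how a dropdown’s option list is typically driven by a schema definition, so the exclusion is baked in at the schema level:

```python
from enum import Enum

# Hypothetical registration schema: the form's dropdown is driven
# by whatever values this enum happens to contain.
class Gender(Enum):
    MALE = "male"
    FEMALE = "female"
    # Anyone outside these two values simply cannot be recorded:
    # the exclusion lives in the schema, not in any policy document.

# A more inclusive variant is a trivial change technically, but only
# the people who control the schema get to make it:
class GenderInclusive(Enum):
    MALE = "male"
    FEMALE = "female"
    NON_BINARY = "non_binary"
    SELF_DESCRIBED = "self_described"
    PREFER_NOT_TO_SAY = "prefer_not_to_say"
```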

These decisions are fundamentally about power, even when they look like mundane technical choices made in spreadsheets rather than boardrooms.

With the speed of AI, and its even greater concentration of capability in a small number of corporations, things are different. Where change used to be slow enough to notice, and at least potentially correct, it now happens faster than governance structures can respond. The opacity of large language models makes it harder even to see where the shifts are occurring until they are already in widespread use.

“The development of AI technologies for but not by developing countries can only further amplify existing oppression and biases.” Data and Power: AI and Development in the Global South, Oxford Insights 2024

Localised systems are not locally led systems

The capital and compute required for frontier AI models mostly sit outside local reach; the companies building the most capable systems are concentrated in the US; and the infrastructure, talent, and investment are not distributed in ways that give local actors meaningful influence over the core development trajectories of the AI platforms their societies and economies will increasingly rely on.

Localisation (translation into local languages, adaptation of content to local context, local hosting and deployment) is not the same as local control, despite some in the sector presenting it as such. If most decisions are made elsewhere and there is no local authority over design, potential harms, or what happens when the system goes wrong (or simply goes away), it is not locally led.

This gap is increasingly the reality – local actors tasked with deployment, adaptation, and the management of downstream consequences, but locked out of the decisions that shape what they are deploying. They carry the accountability without holding the controls.

This is not a failure of localisation. Localisation is valuable. But it is not sufficient. And treating it as though it were sufficient risks legitimising unchanged power relations while claiming progress.

So where does that leave us?

The question is not whether AI shifts power. Of course it does. The question is whether development actors still have meaningful opportunities to influence how those shifts evolve and for whom.

“The private sector is setting the agenda for AI integration . . . with limited autonomy for other actors . . . governments and development agencies, including multilateral organisations find themselves confronting persistently limited entry points to influence both design and implementation processes.” Beyond digital rights: Why the development community must reclaim agency in AI (IDS, 2025)

Control, not ownership

Full local ownership of frontier AI is largely unrealistic: the capital barriers are too high, and the concentration of capability is too entrenched. Arguing for local ownership of frontier models or compute infrastructure is unlikely to be a winning strategy.

But ownership is often a proxy for the thing that actually matters: control. Who decides how systems are configured; what they optimise for; what data they ingest; when they are switched off; what recourse exists when they fail or cause unintended harms.

These questions remain open even when ownership is out of reach. Procurement choices still matter. System architecture still matters. Governance arrangements, data stewardship, contractual terms: these are not nothing.

The shift in framing is important. If the debate stays stuck on ownership, local actors are positioned as permanently excluded. If ownership is not realistic, the debate moves to control, and the real question becomes: what opportunities exist to influence who controls the technology?

An example: an NGO may not own the frontier AI model its services rely on, but it can still control how that model is deployed, whether its data is allowed to train the model, how outputs are audited, and under what conditions the system can be withdrawn. It can also ensure that the system architecture makes it relatively easy to swap one model out for another if needed.
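As a rough sketch of that last point – all class and function names here are my own illustration, not any vendor’s actual SDK – the idea is to keep every model call behind one internal interface, so replacing the model becomes a configuration change rather than a rewrite:

```python
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    """The one interface the rest of the system depends on; nothing
    outside this module knows which vendor's model is in use."""
    @abstractmethod
    def generate(self, prompt: str) -> str:
        ...

class VendorAProvider(ModelProvider):
    """Would wrap a commercial vendor's SDK call; stubbed for illustration."""
    def generate(self, prompt: str) -> str:
        return f"[vendor-a response to: {prompt}]"

class LocalOpenModelProvider(ModelProvider):
    """A locally hosted open-weights model can slot in identically."""
    def generate(self, prompt: str) -> str:
        return f"[local-model response to: {prompt}]"

def build_provider(name: str) -> ModelProvider:
    # Choosing the model becomes a configuration decision, which is
    # what keeps exit and migration options on the table.
    providers = {"vendor_a": VendorAProvider, "local": LocalOpenModelProvider}
    return providers[name]()

if __name__ == "__main__":
    provider = build_provider("vendor_a")  # one config value to change
    print(provider.generate("Summarise this field report"))
```

The same abstraction-over-vendors pattern is, at national scale, what the DPI-style ‘building block’ architectures discussed below try to formalise.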

Where opportunities for influence might still exist

I am not arguing against AI. Engagement with AI is unavoidable for many, and it brings enormous opportunities and potential benefits. Localisation and local adaptation do matter, of course they do – but they are insufficient on their own to make any real shifts in power or control.

Windows of opportunity are still open for us to make such shifts a reality. Some examples of levers others are already using:

  • Procurement decisions can favour interoperability and open solutions over vendor lock-in (the US changed federal AI contracts to include vendor lock-in protections and to prohibit the use of government data to train commercial models without consent) (Ropes & Gray, 2025)
  • System architectures can use DPI-style building-block approaches to preserve exit and migration options (India’s UPI stack was built as open rails; 350+ banks transact through it and no single vendor controls it) (Egon Zehnder, 2023)
  • Governance arrangements can require transparency, accountability, and audit rights (the EU AI Act requires model evaluations, adversarial testing, and mitigation strategies for high-risk systems before they can be deployed – though quite how to define ‘high risk’ is another minefield) (ISACA, 2024)
  • Data stewardship frameworks can limit what gets extracted and by whom (Kenya requires all public cloud services to comply with local data sovereignty laws) (The Cable, 2025)
  • Collective bargaining, such as the African Union’s 2024 AI Strategy, can be stronger than going it alone (New America, 2025)

With recent funding cuts, everyone is competing for shrinking resources and the space is more fragmented than ever. Fragmentation and decentralisation may seem similar, but where fragmentation dissipates power, decentralisation distributes it – and when decentralised actors choose to come together, pool resources, and collaborate, that distribution has the potential to magnify power instead.

But the qualifier matters: collectively. Coming together, we may well have more meaningful impact than we do in our silos.

What next – opportunities to collaborate or work collectively?

If you’re working on these questions too, there aren’t many opportunities to collaborate meaningfully yet, but here are some jumping-off points that may be helpful:

  • #ShiftThePower is a great community (though not well represented by digital/AI folk)
  • If you work in MEL, the MERLTech NLP-COP is worth checking out
  • If you’re in Africa, there is AI4D
  • The recently launched Alliance for Inclusive AI
  • Global Partnership on AI (though realistically out of reach for most of us)
  • Shameless self-promotion: I also host a community of practice on the intersection of digital, AI, and power. It’s still nascent, but please request to join if you’re interested – I’ll be convening some peer-learning conversations in 2026. I’m also starting a research project on AI’s impact on philanthropy and grant-making, so I hope to share findings from it in the coming months.

What this newsletter will continue to explore

In 2023, I wrote about how Digital Transformation could be a tool for shifting power or a way of entrenching it; in 2024 at ICT4D Ghana I convened a workshop exploring whether DX work could be done differently with a power lens. AI has only sharpened those questions.

This newsletter is where I’ll continue to work through what this might mean in practice: issues of power and control as they play out in AI adoption and digital transformation.

I don’t claim definitive answers, but I’ve spent long enough in this space to know the questions themselves matter.

Don’t panic – not every post will be about complexity and power dynamics. I’ll also share practical tips from my own AI learning journey and extracts from my research work, and invite guest writers to share lessons from the field. Don’t forget to subscribe!

How about you? What have you seen succeed or fail when it comes to local control of AI? And who else should we be following?
