ICT4D

Are we using AI as transitional scaffolding (why today’s productivity gains could backfire)?

Who will pay for a report when AI can just write it for them?

I used AI to produce a 20-page briefing deck last month as a favour for a friend. I typed the question into Perplexity, got a solid evidence scan, did some analysis in Claude, pasted the results into NotebookLM, and had a polished slide deck in under an hour. The content wasn’t bad. It was genuinely useful to the person who’d asked for it.

But here’s what nagged me afterwards: the only value I added was knowing which tools to use. I didn’t do original research. I didn’t interview anyone. I didn’t bring sector expertise that couldn’t be found in publicly available sources. I translated existing knowledge into a consumable format, faster than the client could have done it themselves.

For now.

AI as a glorified translator?

Reports haven’t disappeared yet. But the reason they persist is less reassuring than it sounds.

Organisations still commission landscape reviews, strategy decks, and policy briefs because that’s how decisions get justified. The workflows demand documents. Budgets are structured around deliverables. Governance processes expect written artefacts as proof that thinking has happened.

AI is being used inside these expectations, not to replace them. We’re generating the same outputs, in the same formats, for the same institutional reasons. Just faster and cheaper.

I’ve started calling this transitional scaffolding. AI is functioning as a compatibility layer between two worlds:

  • World A: information is scarce, synthesis is slow, access is uneven, and reports are the containers that move knowledge from people who have it to people who need it.
  • World B: retrieval is ubiquitous, synthesis costs almost nothing, and anyone can reformat information on demand.

Most organisations are structurally still in World A. So AI gets used to accelerate World A workflows. That feels productive. It is productive, for now. But it’s probably not the final destination.

What’s actually being accelerated is translation value: the work of turning widely available information into a form someone else can consume because they lack the access, time, tools, or confidence to do it themselves. That describes a huge amount of current knowledge work. AI doesn’t just let us speed this work up; it potentially removes the very conditions that made it necessary…

AI helps those who can help themselves

For now anyway.

There’s a pattern in AI adoption that most commentary misses. The first people to benefit from AI in knowledge work are the people who could already do the work without it. Consultants, analysts, strategists. People with existing skills, using AI to compress what they already knew how to do. Same deliverables, faster turnaround, more polished outputs.

This phase flatters expertise. It feels like augmentation. It feels durable.

It probably isn’t.

The next wave is likely to look completely different: AI used really effectively by decision-makers, programme leads, managers, and domain specialists who never had (and never needed) the full analytical toolkit. They don’t want a report (be honest, they never did). What they really need is information to guide decisions: “I need to decide X. What do I need to know, and what are my options?” Increasingly, they can get that knowledge directly, with AI’s help, no intermediary required.

The first wave empowers people who already know how to do the work, helping them do it faster.

The second wave will empower people who never needed to learn how to do the work at all, because they can jump straight from intent to outcome. That doesn’t just speed up report-writing; it removes the main reason anyone would ask for one in the first place.

What happens when nobody needs reports?

Imagine a programme director preparing for a funding decision. Instead of commissioning a consultant to spend three weeks producing a landscape report, she opens an AI interface and asks: “What do we know about cash transfer programmes in East Africa? What’s worked, what hasn’t, and what are the open questions?”

She gets a usable answer in minutes. She probes it. Challenges the assumptions. Asks for counter-evidence. Tests scenarios. Iterates until she has what she needs.

No one writes a report. No one is asked to. The 40-page PDF with an executive summary and a methodology annex simply doesn’t get created, because the question that would have generated it got answered in the conversation where the answer was needed.

This isn’t happening yet (much), mostly because of institutional lag, limited technical literacy and, yes, problems with the end result (primarily biases and hallucinations, though each model release improves on these incrementally). It will happen…

Consultancy as translation stops making sense

This is the part that matters, and it’s subtler than “the death of the report.”

What disappears is not reports as a category. It’s reports as a default. Routine explainer documents. “Overview of X” decks. Secondary landscape scans. Literature reviews with no original stance. Consultant-as-translator roles. Anything whose only value lies in informational asymmetry.

And let’s be honest, that applies to a lot of today’s outputs.

But the content doesn’t disappear. It evolves into machine-readable content (structured, modular, continuously updated, never read end-to-end by a human) that feeds retrieval systems, supports audit trails, tracks evidence.

Human-written content shrinks in volume but rises in stakes. It becomes interpretive, opinionated, situated, explicit about uncertainty. Written only when humans need to align, decide, argue, justify, or persuade.

The middle layer, the translation layer, collapses. That’s where a lot of us have been operating.

What role survives the transition?

Even in an AI-native world, three things stay irreducible.

  1. Judgement. What matters here, what to ignore, what trade-offs to surface, what uncertainty is acceptable. AI can summarise what is known. It cannot decide what matters in this institution, for this decision, at this political moment.
  2. Accountability. Someone still has to stand behind the claims. Be responsible for consequences. Justify decisions under scrutiny. AI generates text. It cannot be held accountable. That matters in policy, in funding, in regulation, in anything where being wrong has costs.
  3. Power and legitimacy. Many reports were never really epistemic objects. They were political instruments, signalling devices, alignment tools. AI doesn’t remove that dynamic. It makes it more visible, and more contested. When the informational justification for a report falls away, what’s left is the political one. That’s uncomfortable, but it’s honest.

Personally, I look forward to this. These may be complex, but they feel like the most valuable and interesting parts of a consultant’s role, and I’d rather rise to this challenge than spend my time on yet more market and landscape reviews.

What does this mean for consultants like me?

Our roles will almost certainly move away from information aggregation and secondary research synthesis, and the shift will finish off the sub-species of consultant who thrived in the past by “being the smart person in the room.” AI kills all those roles cleanly.

In the short term, being better at using AI than clients are is probably enough of an edge. But in a year or two? What survives then?

  • Framing decisions before AI is even asked. AI may be very good at answering questions, but it is still pretty bad at noticing that the question itself is wrong (for now at least!). Slowing down bad questions becomes even more important when it is so quick to get useful-seeming answers to them.
  • Narrowing options to what’s actually viable given politics, incentives, and organisational capacity (a lot of AI-generated suggestions are technically reasonable but dead on arrival for political or personal reasons).
  • Absorbing accountability (nobody is letting an algorithm carry the blame!)
  • Designing decision processes, not producing decision documents. This is the most exciting shift, and I am already seeing it in some of my work.

Consultants won’t survive by being faster analysts but by being responsible collaborators in decisions that still carry risk, in a world where the time to decide is shorter and the cost of being wrong is higher.

So what am I actually doing about this?

Right now, I’m still mostly in the first wave: using AI to do the work I’ve always done, faster and better. But I’m not pretending that advantage is permanent. Given the likely direction of travel, I’m also learning to build tools, design AI-assisted workflows, and help people with processes and products, focusing less on knowledge as an output in its own right.

If reports are temporary artefacts, custom tools might be the durable ones. Something that helps a specific team make a specific decision, built for their context. That used to be absurdly expensive. With vibe coding, you can prototype one in a day. Why wouldn’t you?
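To make that concrete, here’s a minimal sketch of what such a prototype could look like. It assumes the OpenAI Python SDK with an API key in the environment; the model name, context file, and prompt are my placeholder assumptions, not anything from a real engagement.

```python
# decision_brief.py — a minimal sketch of a team-specific decision-support tool.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set in
# the environment. The model name and context file are placeholders to adapt.

from pathlib import Path
from openai import OpenAI

# The team's standing context: strategy, constraints, past decisions.
# Kept in a plain text file the team can edit themselves.
CONTEXT = Path("team_context.txt").read_text(encoding="utf-8")

SYSTEM_PROMPT = (
    "You support one specific team's decisions. Use the context below. "
    "Always surface trade-offs, open questions, and missing evidence.\n\n"
    f"Team context:\n{CONTEXT}"
)

def brief(decision_question: str) -> str:
    """Return a short, decision-framed answer instead of a report."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": decision_question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Intent to outcome: the question is the interface, not a commissioning brief.
    print(brief("I need to decide X. What do I need to know, and what are my options?"))
```

The point isn’t the code; it’s that the team’s own context, not a report, becomes the durable, reusable artefact.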

Where’s everyone else heading?

I don’t have a neat conclusion for this. The diagnosis feels right to me, but I’m genuinely uncertain about what comes next, both for the sector and for my own work. If reports as translation are on the way out, what are you building instead? What does your work look like when the client can do the synthesis themselves?

I’d rather be asking that question now than discovering the answer too late.