What a Tech Startup Taught Me About Why International Development Needs to Change

Tech companies obsess over users. Development organizations often obsess over projects.

For nearly 25 years, I worked in international development. During the pandemic, I did something unexpected: I started a tech company in Uganda.

I went through Techstars, one of the world's most competitive startup accelerators, and suddenly found myself immersed in a very different operating culture: one where my peers were intensely focused on users, speed, data, and constant adaptation.

The experience was disorienting, but it was also clarifying. It made one thing impossible to ignore: We do not design international development programs the way we say we do. In reality, if we truly believed participants were at the center, our programs would look very different.

The Core Problem: We Don’t Design with Real Clients in Mind

Tech companies are relentlessly focused on product-market fit: the point at which a product solves a real problem so well that people continue using it, recommend it to others, and integrate it into their daily lives.

International development rarely holds itself to the same standard. Instead, we design projects that are fully specified before we even meet our participants and are optimized to satisfy donor expectations over participant needs. Once implementation begins, these designs are often locked in for three to five years, making adaptation difficult—even if early signals suggest changes are needed.

The truth is, we wouldn't even know if we had product-market fit. We don't collect the data that would tell us whether we have it, let alone how to achieve it. But we could.

Lesson 1: Start with a Minimum Viable Product, Not a Perfect Project

In tech, teams start with a Minimum Viable Product (MVP): the simplest version of a product that works. It’s not polished, and it doesn’t include every feature, but it meets a core user need and can be tested quickly in the real world.

Development programs typically take the opposite approach. We design comprehensive models from the outset, layering on components we believe are necessary: training modules, coaching sessions, asset transfers, group meetings, and a variety of complementary activities. But once implementation begins, we often do not know which of these components participants value most, which ones actually drive outcomes, and which ones participants could do without.

When budgets tighten, we scale back. Components are sometimes removed based on donor priorities or anecdotal feedback from field staff rather than clear data about what participants find most useful.

A Minimum Viable Product approach would reverse this logic: start simple, learn fast, add what works when it’s needed.

Lesson 2: Test in Context

As tech products evolve, companies constantly run A/B tests, which compare two versions of a feature to see which performs better based on real user behavior. It’s fast, data-driven, and users typically don’t even know it’s happening. The resulting data allows teams to respond quickly to what people actually want.

In development, we pride ourselves on being evidence-based, but that evidence often comes from different projects, different populations, and vastly different contexts.

A more practical approach would be to test different program elements in real time. Two training approaches could run in parallel. Two coaching models could be piloted simultaneously. Programs could then adjust based on what participants respond to most strongly.

Rather than locking in a design years in advance, we could allow learning to shape implementation as it unfolds.

Lesson 3: Learn in Real Time

At the heart of product testing is real-time data. Technology companies track user behavior continuously because feedback loops are how products and user experiences improve.

In development, data collection often follows a very different pattern. We gather baseline data before participants have meaningful experience with a program. Then we collect endline data after implementation concludes—when it is too late to improve the participant experience.

Yet programs generate enormous amounts of interaction data every day: coaching visits, training sessions, check-ins, and participant feedback. Properly used, this information could help programs identify problems early and adjust quickly.

We talk to participants constantly. We just do not learn from them fast enough.

Lesson 4: Drop-Off Is a Design Signal, Not a Participant Failure

In tech companies, user drop-off is treated as a design problem to be fixed. In development, when participants disengage or drop out, we often frame it as their failure: they weren't motivated, or they didn't understand the value of our program. But rarely do we ask: Where did our design lose them?

Drop-off is data. It signals that something isn't working, and we should respond to it immediately, not explain it away in final reports. Tracking when and why participants disengage can reveal which program elements hold their interest and which lose it, so we can adapt in real time.

Lesson 5: Design for Dignity—and Yes, for Joy

As tech products mature, teams invest heavily in User Experience (UX) and User Interface (UI) design to make products functional, intuitive, and easy to use, as well as visually pleasing. Positive experiences create loyalty to a particular app or platform. In development, by contrast, we rarely design for delight.

In international development, we need to give the same attention to user experience. Participants—often women whose days are already filled with work, caregiving, and household responsibilities—are asked to attend long trainings in uncomfortable settings using materials that feel generic or uninspiring. When engagement drops, we sometimes attribute it to shyness or culture.

But the explanation is often simpler. We built a boring product, or worse, something that feels like another burden.

When programs are designed well, participation becomes something women enjoy. The environment is welcoming. Their voices are heard. Their progress is acknowledged. They leave sessions with greater confidence than when they arrived. Designing for dignity means designing experiences that signal respect: respect for women’s time, intelligence, ambition, and potential.

Technology companies understand that people return to products that give them positive emotional responses. Development programs should aspire to the same.

From Theory of Change to Hypothesis-Driven Development

Traditional development relies on fixed Theories of Change: if we do X, Y, and Z, particular outcomes will follow. Tech takes a different approach with hypothesis-driven development, where features are treated as experiments and learning is continuous. By explicitly defining, testing, and measuring hypotheses, teams can pivot or persevere based on data-driven insights.

Instead of locking ourselves into a single approach and evaluating it years later, we should constantly evolve our approach based on what participants respond to in the moment.

“But Tech Isn’t International Development”—And That’s True

Many will say that a tech start-up isn't the same as running an anti-poverty program. Of course, they're right. But there are key similarities: both aim to reach new users, both must maximize limited resources, and both rely on learning to iterate and improve. Tech has transformed the lives of millions of low-income people not by seeing them as beneficiaries but by treating them as underserved clients.

I’ve worked in financial inclusion for three decades. The scale of inclusion achieved by fintech in the last ten years far exceeds what the development sector achieved in the many decades before it. Now, we’re seeing the same in education, where people access information on demand, and in agriculture, where smallholder farmers use agtech to reach new markets and improve farming practices.

Development has been trying to find cost-effective solutions to these challenges for decades. Tech succeeded by building good products that people actually want, designed around their needs.

The Real Shift We Need

International development needs to start treating participants as underserved clients and programs as products people choose to engage with. Our goal should not be to secure participation in order to meet project targets. It should be to design offerings that align with people’s aspirations and adapt as their needs evolve.

But the deeper shift is cultural.

It requires us to stop assuming that people should be grateful for the programs we deliver. We should ask a simple question: would someone choose this if they truly had a choice?

When programs are designed well, participation does not have to be pushed—it is pulled. People show up because the experience respects their time, strengthens their confidence, and helps them move toward the future they want.

If we begin designing programs the way great companies design products, grounded in user-centered design, testing, and adaptation, we could dramatically improve both the impact and efficiency of international development: not by abandoning our values, but by finally designing as if we believe them.

————–

Lauren Hendricks brings over 30 years of experience in the humanitarian and development sectors across Africa, Europe, Latin America, and Asia. With a focus on sectors such as financial inclusion, agriculture, SME development, gender inclusion, women’s empowerment, and technology, she is committed to ensuring marginalized communities have the information and resources they need to thrive. […]
