Research Methodology · Bitcoin UX Africa

How We Test & Research

Every number we publish comes from a real session, with a real African user, in front of a real Bitcoin wallet. Here is what we hold ourselves to.

340+ moderated sessions run
5 African countries
12+ Bitcoin wallets tested
100% open & published free

Research Built on Real Sessions

Bitcoin UX Africa conducts moderated usability testing — the gold standard of UX research. Unlike surveys or analytics, moderated sessions let us observe exactly where users hesitate, fail, and abandon tasks in real time. We see what they see. We hear what confuses them. We record what breaks.

Our research is independent. We are not funded by wallet teams, exchanges, or payment processors. No sponsor influences which findings we publish or how we frame them. This independence is what makes our data citable.

All findings are published openly and free to use. Wallet teams, designers, researchers, and developers can use our data without restriction.

What Every Session Must Deliver

We don't publish our session protocols — that's proprietary. What we do publish are the standards every session is held to, because those standards are what make the data trustworthy.

1
Participants Are Real, Local & Uncoached
Every participant is recruited in-country, meets strict screening criteria for Bitcoin experience level, and has no prior exposure to the tasks being tested. We do not use online panels, proxy participants, or compensated repeat testers. The data reflects genuine first encounters.
2
Observation Without Interference
Our sessions are designed so that what we observe is what would happen without us. Moderator influence on participant behaviour is controlled for and documented. A session where the moderator guided a participant to success is not a completed session — it's a failed one.
3
Failure Is the Data
We measure abandonment, confusion, and error — not just completion. A session where everything went right is less valuable than one where it didn't. Our entire analysis framework is built around failure states: where they occur, what triggers them, and whether they are recoverable.
4
Quantitative & Qualitative Together
Numbers without context mislead. Every completion rate, failure rate, and time-on-task benchmark we publish is paired with qualitative evidence — the words, hesitations, and expressions that explain the number. Neither layer is published alone.
5
Cross-Market Validation Before Publication
A finding from one country is a hypothesis. A finding that holds across three or more independent country datasets is a result. We do not publish single-market findings as universal African insights — a standard most Bitcoin UX commentary ignores entirely.
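
To make that threshold concrete, here is a minimal sketch of the replication check in Python. The Finding structure, field names, and the three-country default are illustrative assumptions, not our internal tooling.

```python
# Minimal sketch of the cross-market rule above. The Finding structure and
# field names are illustrative, not our internal tooling.
from dataclasses import dataclass

@dataclass
class Finding:
    summary: str
    countries_observed: set  # countries whose sessions independently show the pattern

def is_publishable(finding: Finding, min_countries: int = 3) -> bool:
    """A single-market observation is a hypothesis; three or more markets make it a result."""
    return len(finding.countries_observed) >= min_countries

# A pattern observed only in Nairobi sessions fails this check and stays unpublished.
```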
6
Reproducibility Over Novelty
We run the same core task battery across every session, every country, every wallet. This is deliberate. Consistency over time is what produces benchmark data. We track whether Bitcoin UX is getting better or worse — and that requires measuring the same things the same way, year after year.

Five Countries, One Dataset

Our sessions span five African countries chosen to represent the diversity of the continent's Bitcoin adoption landscape — different languages, different mobile networks, different economic contexts, and different levels of existing financial infrastructure.

Kenya: M-Pesa context, high mobile money literacy, Nairobi + rural sessions
Nigeria: largest session volume, high Bitcoin awareness, Lagos + Abuja
Ghana: mobile money users, growing Bitcoin adoption, Accra sessions
South Africa: banked population mix, diverse device range, Johannesburg + Cape Town
Ethiopia: lower smartphone penetration, Amharic language context, Addis Ababa sessions

What We Test

Wallet selection criteria

We test wallets that are available on Android, recommended to African users by community channels, or used in significant volume in our target markets. We do not accept payment to include or exclude any wallet from testing.

Task set

Our standard task battery covers the complete self-custody onboarding flow: installation, account creation, seed phrase backup, receiving Bitcoin, sending Bitcoin, and reading transaction history. Additional task sets cover Lightning Network payments and wallet recovery from seed phrase where relevant.

Devices

Sessions use the participant's own device where possible, or a representative mid-range Android device common in the relevant market. iOS is tested in markets with significant iPhone adoption. We do not test on high-end flagship devices — our goal is to reflect real user conditions, not ideal ones.

Versions and dating

Every published finding notes the wallet version tested and the date of testing. Wallet UX changes with updates. We retest wallets that ship significant UX changes and update findings accordingly.

AI Assists. Humans Decide.

We use AI in specific, bounded parts of our research process. Not to replace observation — nothing replaces sitting across from a real user watching them fail. But to handle volume work that would otherwise limit the scale of what a small team can analyse and produce.

Every AI-assisted output in our process is reviewed, validated, and signed off by a human researcher before it influences any published finding. We do not use AI to generate insights. We use it to surface candidates for human review.

1
Transcript Analysis at Scale
340+ session transcripts. Manually coding all of them for failure patterns, hesitation moments, and emotional signals takes weeks. We use LLMs to process transcripts, cluster recurring failure moments, and flag observations that appear across multiple sessions and countries. A human researcher then validates every cluster before it becomes a finding. AI compresses weeks into hours. The researcher still makes the call.
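
As a rough illustration of that flagging step, here is a minimal Python sketch. It assumes an earlier LLM pass has already reduced each transcript to short coded observations; the field names and thresholds are illustrative, not our production pipeline.

```python
# Minimal sketch of the cross-session flagging step, assuming an earlier LLM pass
# has already reduced each transcript to short coded observations.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Observation:
    session_id: str
    country: str
    code: str   # e.g. "hesitated_at_seed_phrase_screen", assigned by the LLM pass
    quote: str  # verbatim participant quote kept for human review

def flag_candidates(observations, min_sessions=5, min_countries=2):
    """Group LLM-coded observations and surface recurring ones for a researcher to validate."""
    clusters = defaultdict(list)
    for obs in observations:
        clusters[obs.code].append(obs)

    candidates = []
    for code, group in clusters.items():
        sessions = {o.session_id for o in group}
        countries = {o.country for o in group}
        if len(sessions) >= min_sessions and len(countries) >= min_countries:
            candidates.append((code, sorted(countries), group))  # queued for human validation
    return candidates
```

The output is a review queue, not a finding: a researcher reads the underlying quotes in every flagged cluster before anything moves further.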
2
USSD Flow Simulation
USSD interfaces are text menus. Before putting a real user in front of a USSD Bitcoin flow, we run an LLM through it as a synthetic first-pass — testing whether the menu logic is navigable, whether error paths lead anywhere useful, whether a user following only what's visible on screen could complete a task. This catches obvious structural failures before the first real session. It does not replace real sessions. It makes them more productive.
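
A toy version of that first pass might look like the sketch below. The menu content, the walk() helper, and the scripted chooser are all illustrative assumptions; in practice the chooser would wrap an LLM that sees only the on-screen text, exactly as a user would.

```python
# Toy sketch of a synthetic first pass over a USSD-style menu. The menu text and
# the choose() stand-in are illustrative, not a real wallet's flow.

MENU = {
    # state: (visible screen text, {input: next state})
    "home":        ("1. Send Bitcoin\n2. Receive\n3. Balance", {"1": "send_amount", "2": "receive", "3": "balance"}),
    "send_amount": ("Enter amount in sats, or 0 to cancel",    {"0": "home", "*": "confirm"}),
    "confirm":     ("1. Confirm\n2. Cancel",                   {"1": "done", "2": "home"}),
    "receive":     ("Your address: bc1q...\n0. Back",          {"0": "home"}),
    "balance":     ("Balance: 0 sats\n0. Back",                {"0": "home"}),
    "done":        ("Sent. Thank you.",                        {}),
}

def walk(choose, start="home", goal="done", max_steps=20):
    """Step a synthetic user through the menu and report success, dead ends, or loops."""
    state, path = start, []
    for _ in range(max_steps):
        screen, options = MENU[state]
        if state == goal:
            return "completed", path
        if not options:
            return "dead_end", path               # structural failure: no way forward
        choice = choose(screen, list(options))    # the model sees only what a user sees
        state = options.get(choice, options.get("*", state))
        path.append(state)
    return "gave_up", path                        # likely a loop the menu logic allows

# Scripted stand-in for the model, walking the send flow end to end.
scripted = iter(["1", "500", "1"])
result, path = walk(lambda screen, opts: next(scripted))
# result == "completed", path == ["send_amount", "confirm", "done"]
```

Dead ends, loops, and tasks that cannot be completed from the visible text alone surface here, before any participant's time is spent on them.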
3
Localisation Drafting
The problem with Bitcoin wallet localisation in Swahili, Hausa, and Amharic is not translation — it is concept reconstruction. "Seed phrase" does not have a natural equivalent in most African languages. We use AI to generate multiple candidate phrasings for core Bitcoin concepts in target languages, which native-speaker reviewers then evaluate and rewrite. AI removes the blank-page problem. Native speakers determine what actually works.
4
Design Pattern Drafting
We are building a shared pattern library for African Bitcoin design — structured documentation of what works, what fails, and why. Turning a research observation into a usable pattern card requires consistent structure: problem statement, evidence, recommended approach, contraindications. We use AI to draft pattern cards from validated findings, then researchers edit them for accuracy and completeness. Consistent structure at scale would otherwise require a team we don't have.
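
The card structure itself is simple enough to show. A minimal sketch, assuming a Python data model: the four core fields come from the text above, while the reviewed_by field and the placeholder values are illustrative additions.

```python
# Minimal sketch of a pattern card record. The four core fields come from the
# structure described above; reviewed_by and the placeholders are illustrative.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class PatternCard:
    problem_statement: str
    evidence: list[str]              # session-backed observations the card rests on
    recommended_approach: str
    contraindications: list[str]     # contexts where the pattern should not be applied
    reviewed_by: str | None = None   # stays a draft until a researcher signs off

card = PatternCard(
    problem_statement="<what fails, for whom, in which flow>",
    evidence=["<observation, with wallet version and test date>"],
    recommended_approach="<the pattern, stated as a design decision>",
    contraindications=["<contexts where this pattern does not apply>"],
)
```

Drafting cards like this from validated findings is where the model helps; the sign-off step is what keeps a draft from being mistaken for a finding.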
5
Between-Session Design Critique
Between research cycles, wallet teams sometimes need rapid feedback on a design change before the next real session. We can prompt an AI model with our full research corpus and a specific user profile — a rural Kenyan feature phone user, a Lagos trader with intermediate smartphone literacy — and get a structured critique of proposed design changes. This is explicitly not a substitute for real testing. It is a cheaper first filter that improves what gets tested in the next real session.

What AI Does Not Do in Our Research

AI does not generate findings. It does not replace moderated sessions. It does not validate its own output. It does not interact with real users. Every number, every failure rate, every design recommendation we publish traces back to a real person in a real session. AI helps us process and communicate what we find. It does not find it.

Research Ethics

All participants give informed consent before sessions begin. No participant is identified in published findings — quotes and observations are anonymised by country only, never by individual.

Participants are compensated fairly for their time at local market rates. We do not recruit through deception or misrepresent the nature of sessions.

Session recordings are stored securely and are not shared with wallet developers or third parties. Aggregated findings are published; raw recordings are not.

Our research is designed to benefit African Bitcoin users first. We publish everything openly so the entire ecosystem can act on what we find.

Citing Our Work

Our research is published openly and free to cite. When citing Bitcoin UX Africa research, please use the following format:

Bitcoin UX Africa. (2026). [Post title]. Retrieved from https://bitcoinux.africa/blog/posts/[slug].html

If you are building on our data in academic or commercial research and would like to discuss the methodology in more detail, contact us at mark@foundation.africa.

See Our Research in Action

Our methodology produces findings that are directly actionable for Bitcoin wallet designers and developers. You can see how it shapes real outputs across our published work.

Every finding published on this site was produced using this methodology. Nothing is estimated or sourced from secondary data — every data point comes from a real African user in a moderated session.

Want to apply this research?

We work with Bitcoin wallet teams to translate findings into shipped design improvements.

Get UX Support →