
Build Survey
Most organisations have a theory of change and a MEL framework, but struggle to turn them into practical survey tools. The Build Survey module exists to:
• Help teams move from “we have indicators on a spreadsheet” to “we have live surveys that measure those indicators”.
• Make it easier to design surveys that are clearly linked to stakeholder groups and stages of the theory of change.
• Reduce the time and technical skill required to design good quality surveys that donors, investors, and communities can trust.
Instead of each MEL officer starting from a blank page, the module sits on top of your project setup, stakeholder mapping and theory of change, and turns that design work into concrete survey questions.
The Build Survey module does two connected but distinct jobs:
1. Survey design (inside the platform)
• Aligns every survey to a specific stakeholder group, stage of the theory of change, and set of indicators.
• Surfaces pre-curated questions that are already tagged to themes, sub-themes and indicators.
• Lets you build stakeholder-specific surveys in minutes rather than weeks.
2. Survey delivery (via a mobile app)
• Offers a standard data collection app that field teams can use offline and sync when they reconnect.
• Handles basic data capture in much the same way as Kobo or ODK.
In short: tools like Kobo focus mainly on data collection, while this module focuses on designing the right survey, for the right stakeholder, at the right point in the theory of change, and then hands that survey to the mobile app for collection.
The module is designed so that every survey has a clear line of sight back to your theory of change. When you build a survey, you:
• Select a stakeholder group (for example, “women in Village A”, “project staff”, “local government”).
• Select a stage of the theory of change (for example, short-term outputs or longer-term outcomes).
• Select the themes, sub-themes and indicators you want to test.
The platform then uses these selections to suggest appropriate questions. This means that:
• You always know which assumptions or changes you are testing.
• Later, when you look at your data, you can see which part of the theory of change each survey is speaking to.
Behind the scenes, the platform maintains a structured question bank where each question is tagged with:
• Themes and sub-themes
• Relevant indicators
• Stakeholder types
• Links to standards and frameworks where relevant (for example SDG / ESG tags)
When you set the survey context (stakeholder, theory of change stage, themes and indicators), the module filters this bank and shows you only the questions that match those selections. You can then:
• Browse the suggested questions
• Preview the wording and response options
• Add the ones you want to your survey
This allows you to design surveys that are grounded in your theory of change and your standards, without having to reinvent questions every time.
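The filtering described above can be sketched in a few lines. This is an illustrative model, not the platform's actual code; the field names (`themes`, `indicators`, `stakeholder_types`) are assumptions based on the tags listed above.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    # Tags mirror the question-bank structure described above (illustrative names).
    text: str
    themes: set = field(default_factory=set)
    indicators: set = field(default_factory=set)
    stakeholder_types: set = field(default_factory=set)

def suggest_questions(bank, themes, indicators, stakeholder):
    """Return only the questions whose tags overlap the selected survey context."""
    return [
        q for q in bank
        if q.themes & themes
        and q.indicators & indicators
        and stakeholder in q.stakeholder_types
    ]
```

The key design point is that a question never reaches the builder unless it matches every part of the context you set, which is what keeps each survey traceable to the theory of change.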
The Build Survey module supports both pre-curated and custom questions.
• You can add custom questions directly in the builder whenever the suggested list does not fully cover what you need.
• Custom questions go through an internal approval step (for example by your MEL lead or project manager) to check clarity, quality and alignment with project goals.
• Once approved, they can be added to the survey and, if appropriate, stored in your project-level question library for future use.
This keeps surveys flexible and context-sensitive, while still protecting overall quality.
The module includes tools to help you design surveys that make sense for respondents and respect their time.
You can:
• Group questions into sections (for example: demographics, training, decision-making, safeguards).
• Set questions as required or optional.
• Add custom instructions or hints for enumerators and respondents.
• See an estimated survey duration based on the questions you have selected.
The aim is to move away from one long annual survey, and towards a mix of shorter, focused surveys that can be integrated into routine work.
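A duration estimate of the kind mentioned above can be approximated by summing typical answer times per question type. The per-type timings below are assumptions for illustration, not the platform's actual figures.

```python
# Assumed average seconds per question type (illustrative, not platform defaults).
SECONDS_PER_TYPE = {
    "single_choice": 15,
    "multi_choice": 25,
    "numeric": 20,
    "open_text": 60,
}

def estimated_minutes(question_types, default_seconds=30):
    """Rough survey duration, in minutes, from a list of question types."""
    total = sum(SECONDS_PER_TYPE.get(t, default_seconds) for t in question_types)
    return round(total / 60, 1)
```

Under these assumptions, a twenty-item survey of mostly single-choice questions comes out around five minutes, which is the kind of length a short, focused survey aims for.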
The module supports multi-language surveys.
• You design your survey in a base language.
• You can then generate draft translations using an integrated translation service (for example, Google Translate).
• Local staff or translators review and refine the translations to make sure they are accurate and culturally appropriate.
This approach reduces the typing burden and keeps human judgement where it matters most: in ensuring that questions make sense to the people being asked.
Surveys that you design in the Build Survey module are deployed through a mobile data collection app that:
• Runs on standard smartphones.
• Works offline in the field and then syncs data once a connection is available.
• Shows each data collector only the surveys and stakeholder groups that they have been assigned.
From a field team perspective, the experience will feel familiar if they have used tools like Kobo or ODK. The added value is that the surveys they see are already aligned to your theory of change, indicators and stakeholders.
Once a survey is built, you can:
• Set it up as a one-off survey (for example, a baseline or endline).
• Schedule it as a recurring survey (for example, a short pulse survey every quarter).
• Specify start and end dates or desired intervals (for example, once per year with a fixed window).
• Assign surveys to the relevant projects, sites and data collectors.
Over time, this allows you to build a clear survey calendar, so you can see what is being asked, of whom, and when.
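A recurring schedule like the one described can be modelled as little more than a start date, an end date, and an interval. A minimal sketch, with all names and the 91-day "quarter" being illustrative assumptions:

```python
from datetime import date, timedelta

def survey_openings(start, end, interval_days):
    """Yield the opening date of each survey window between start and end."""
    current = start
    while current <= end:
        yield current
        current += timedelta(days=interval_days)

# A quarterly pulse survey across 2025 (roughly every 91 days).
openings = list(survey_openings(date(2025, 1, 1), date(2025, 12, 31), 91))
```

Listing the openings for every scheduled survey in one place is essentially what the survey calendar view does: it shows what is being asked, of whom, and when.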
The module includes a simple sample size calculator that helps you decide how many responses you need from a given stakeholder group.
• When you mapped stakeholders earlier in the platform, you entered approximate group sizes (for example, number of teachers, number of smallholder farmers).
• The sample calculator uses that number plus your desired confidence level to suggest a suitable number of respondents.
This makes it easier to design surveys that are both realistic and statistically meaningful, without requiring a statistician on every team.
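A common way to implement such a calculator is Cochran's formula with a finite population correction. The platform's exact method is not documented here, so treat this as a plausible sketch rather than the actual implementation:

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's sample size with finite population correction (a standard
    approach; whether the platform uses exactly this is an assumption).

    z      -- z-score for the desired confidence level (1.96 is roughly 95%)
    margin -- acceptable margin of error (0.05 = plus or minus 5 points)
    p      -- assumed response proportion (0.5 is the most conservative)
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))
```

For example, a stakeholder group of 500 smallholder farmers would need roughly 218 responses at 95% confidence with a 5% margin, which is why knowing the group size from the earlier stakeholder mapping matters.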
The module is designed to support both:
• Longer, more detailed surveys (for example, an annual outcomes survey).
• Shorter “pulse” surveys that focus on a small number of questions and can be integrated into routine contact with stakeholders.
The aim is to give organisations more flexibility. You can continue with occasional, in-depth surveys if that is what your context allows, while gradually shifting towards lighter, more regular data collection as capacity grows.
The platform has two main parts: a web dashboard and a mobile data collection app.
Web platform (design & analysis)
• Used by project teams, MEL staff, and managers.
• Runs in a standard browser (no special software).
• This is where you set up projects and sites, map stakeholders, build your theory of change, design surveys, track risks, and view results.
Mobile app (data collection)
• Used by field staff, community facilitators, or other data collectors.
• Installed on Android smartphones (low to mid-range devices are usually sufficient).
• Data collectors log in and see only the surveys they have been assigned.
• Surveys can be completed offline. Responses are stored securely on the device and synced automatically when a data connection becomes available.
This setup means design, governance, and analysis can happen from HQ or regional offices, while data collection happens in the field – including in low-connectivity environments – without losing data quality or control.
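The offline-then-sync behaviour described above follows a standard offline-first pattern: responses are queued on the device and flushed once a connection is available. A minimal sketch, where the class and method names are assumptions rather than the app's real code:

```python
class ResponseQueue:
    """Device-side queue for completed survey responses (illustrative)."""

    def __init__(self):
        self._pending = []  # a real app would use persistent, encrypted storage

    def record(self, response):
        self._pending.append(response)

    def sync(self, upload, online):
        """Upload pending responses; keep any that fail, or all of them if offline."""
        if not online:
            return 0
        still_pending, sent = [], 0
        for r in self._pending:
            if upload(r):
                sent += 1
            else:
                still_pending.append(r)
        self._pending = still_pending
        return sent
```

Keeping failed uploads in the queue is what makes the pattern safe in low-connectivity environments: nothing is discarded until the server has accepted it.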
The indicators in the survey builder were not chosen at random or copied from a single framework. They were built through a layered process that combines standards, evidence, and practice:
1. Start from recognised standards and frameworks
We reviewed relevant global and sectoral frameworks (for example SDGs, ESG categories, NbS and carbon project standards) and extracted the social and governance dimensions that matter most for people and communities.
2. Ground them in a social Theory of Change for NbS
We mapped how nature-based and carbon projects create social value and risk – across themes like livelihoods, participation, decision-making, safeguards, equity, and wellbeing. Indicators were then designed to speak directly to these pathways of change.
3. Align with real project practice
We looked at existing project documents (such as PDDs, revenue-sharing agreements, safeguarding policies, and MEL frameworks) to ensure the indicators reflect what projects actually do and promise, not just what standards say on paper.
4. Co-design with practitioners and researchers
Draft indicators and question sets were refined in collaboration with project teams, social scientists, and MEL practitioners who work in NbS and carbon markets. This helped make sure they are:
• Meaningful to communities and practitioners
• Practical to measure in the field
• Useful for both learning and reporting
5. Tag and test in the platform
Each indicator is then:
• Tagged to themes, sub-themes and stakeholder groups
• Linked to survey questions in the question bank
• Tested and refined through pilots and early adopters, with ongoing adjustments based on feedback and data quality.
The result is an indicator set that is standards-aware but not standards-bound: robust enough for investors and certifiers, but rooted in lived experience, safeguarding obligations, and social value on the ground.
