The Knowledge Base
The Knowledge Base is the shared analytic library behind ValueScope - the platform's back end. It lets you hold lived experience, relationships, systems, and empirical outcomes together, so social evidence is decision-ready, comparable, and useful for learning.
Because real life is complex, the back end brings together individual experience, relationships, systems, and empirical outcomes in one structure so you can evidence the full story of change, not just headline activities. We model four kinds of evidence together: (1) subjective accounts of wellbeing, dignity, and trust; (2) observable changes in behaviour, like school attendance; (3) relational dynamics such as inclusion, voice, and power; and (4) political and economic systems factors like mandates, budgets, or policy shifts. Each theme and sub-theme has indicators and question banks so you capture nuance without losing structure. This lets a single dataset answer human questions (“Do people feel safer?”) and compliance questions (“Which SDG does this contribute to?”) without rebuilding your framework.
It solves the “we know we create change, but we cannot prove it” problem. It gives you a ready-made analytic backbone to describe, measure, and defend impact in ways communities, funders, and regulators can trust. In practice, themes → sub-themes → indicators → questions → dashboards create a traceable line from lived experience to decisions. Because everything carries provenance (who/when/how), you move from narrative claims to traceable, decision-ready evidence.
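The traceable line described above - themes → sub-themes → indicators → questions, with each record carrying who/when/how provenance - can be sketched as a simple data model. This is an illustrative assumption, not the platform's actual schema; the field names and values below are invented:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch: every response carries provenance (who collected it,
# when, and how) and links back through a question and indicator to a
# sub-theme and theme, so dashboard claims remain traceable to source records.

@dataclass(frozen=True)
class Provenance:
    collected_by: str
    collected_at: datetime
    method: str  # e.g. "in-person survey", "phone interview"

@dataclass(frozen=True)
class Response:
    question_id: str
    indicator: str
    sub_theme: str
    theme: str
    value: str
    provenance: Provenance

r = Response(
    question_id="Q-017",
    indicator="perceived safety (self-report)",
    sub_theme="trust and safety",
    theme="wellbeing",
    value="agree",
    provenance=Provenance("field officer A", datetime.now(timezone.utc), "in-person survey"),
)

# Any aggregate figure can be unwound to records like this one.
print(r.theme, "->", r.sub_theme, "->", r.provenance.method)
```

The point of the sketch is the chain itself: because each link is explicit, a number on a dashboard can always be decomposed back to who said what, when, and under which method.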
Every theme and sub-theme is tagged to relevant frameworks, so one round of data collection can serve multiple reporting requirements. Each sub-theme carries multiple labels (SDG targets, ESG pillars, resilience dimensions, certification requirements). When you tag a result to “participation” or “economic resilience,” the platform already knows which standards it relates to. You retain local meaning while gaining cross-project comparability.
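Multi-framework tagging of this kind can be pictured as a lookup from one sub-theme to several framework labels. The sketch below is a hedged illustration only: the tag values (SDG target IDs, ESG pillar, resilience dimension) are invented examples, not real mappings from the library.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: each sub-theme carries multiple framework labels,
# so one tagged result can be reported against several standards at once.
# All tag values below are invented examples.

@dataclass
class SubTheme:
    name: str
    framework_tags: dict[str, list[str]] = field(default_factory=dict)

LIBRARY = {
    "economic resilience": SubTheme(
        name="economic resilience",
        framework_tags={
            "SDG": ["1.4", "8.5"],          # example target IDs only
            "ESG": ["Social"],
            "resilience": ["livelihoods"],
        },
    ),
}

def standards_for(sub_theme: str) -> dict[str, list[str]]:
    """Return every framework label a result tagged with this sub-theme reports against."""
    return LIBRARY[sub_theme].framework_tags

print(standards_for("economic resilience"))
```

One round of data collection tagged to "economic resilience" would thus serve SDG, ESG, and resilience reporting at once, while the local sub-theme name keeps its own meaning.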
Integration is where we sit with you, map your work and intended changes, and connect them to the existing analytic library, adapting or extending it where needed. It is a short, guided process that turns the shared library into your library. Outputs include a tailored outcome framework, pre-configured surveys and question banks, and a data model wired to your sites and roles. You keep your language and practice; we supply the analytic scaffolding that makes it consistent and defensible.
Yes. The hard conceptual work lives in the back end; you bring your practice. ValueScope bakes in social-science rigour - validated scales where appropriate, clear definitions of constructs, and conservative rules to avoid over-claiming. Templates flag when a subjective item should be paired with an observable or relational indicator. Tooltips explain why an item exists and how to interpret movement up or down. Your team focuses on doing the work; the platform guides how to evidence it.
Each project deepens the analytic framework. When we enrich, for example, political-economy indicators for care reform or movement-building markers for youth organising, those updates are versioned and added to the shared library and available to all clients.
If you do not look for the full range of human experience, you do not measure it, and it will not count in decisions. We centre a relational epistemology: knowledge is co-created with people, not extracted from them, with ongoing / withdrawable consent. We also draw on integral and systems thinking: the subjective self, the observable individual, the relational field, and the system (institutions and political economy). Designing for all four avoids the trap of counting activities while missing the shifts in agency, norms, and power that determine whether change sticks.
It translates “transformation” into trackable sub-themes. For a paediatric surgery programme, that might include dignity and self-worth, caregiver capacity, family stability, community inclusion, access to services, educational participation, and economic resilience. Each has indicators (subjective and observable) and suggested questions. Over time you see trajectories, not anecdotes: which domains move first, which lag, which sustain. You can explain why your approach works and where to adapt.
Yes. It reframes MRV (measurement, reporting, and verification) from pure compliance to learning - clear structure, traceability to standards, and psychological safety to ask hard questions. MRV processes often reduce insights to “did you hold meetings and share revenue?” We add the missing layer: meaningful participation, trust and safety, distributional fairness, and service responsiveness, all mapped to certification language. Developers gain structure to talk about people with the same confidence they talk about hectares and credits.
Yes. It treats political economy, power, relationships, mindsets, practices, networks, and narrative shifts as measurable domains. Evidence can be qualitative (e.g., narrative shifts) or quantitative (e.g., network measures, budget line movements). You can link these to people-level outcomes, showing how system shifts enable or hinder lived change.
A tailored data model tied to your projects and sites; ready-to-use instruments (surveys and question banks across subjective, relational, systemic, and empirical measures) with consent language; and a living relationship with the shared library. As we extend the library for new use cases, you benefit automatically, keeping evidence current without losing comparability.
A case management system and a learning infrastructure sit at different levels of your work, even though both use data.
What a case management system does
A case management system focuses on individual people or cases. It is designed to:
• Register and track individual beneficiaries, households, or incidents
• Record services provided, referrals, follow-up, and closure
• Support frontline staff to manage workloads and ensure that no one is missed
• Produce operational reports on numbers served, case status, and compliance
It is very good at transactions and follow-through: who received what, when, and from whom.
What learning infrastructure does
Learning infrastructure focuses on patterns, change, and decision-making. It is designed to:
• Map stakeholders and power, not only beneficiaries
• Connect activities, revenue flows, and social value through a shared Theory of Change
• Collect structured feedback from multiple groups (for example communities, staff, partners, local government, investors)
• Track risks, harms, and safeguarding concerns alongside benefits
• Turn data into insight that can inform strategy, governance, and course correction
It is very good at questions such as:
• What is changing for different groups, and why?
• Who is benefiting, who is left out, and who may be harmed?
• How do revenue sharing, governance, and project design influence social value over time?
The role of the human account manager
The learning infrastructure is not just software. Each client works with a human account manager who:
• Helps set up stakeholder mapping, Theory of Change, risk registers, and survey plans
• Supports the team to interpret results and connect them back to real decisions
• Keeps an eye on patterns across projects and prompts questions such as, “What are we learning here?” rather than “What number fills this box?”
This support is important because learning is as much about practice and judgement as it is about data.
The sociological knowledge base underneath
The platform is built on a sociological and social science knowledge base, which includes:
• Curated themes, sub-themes, and indicators linked to social value, power, equity, and risk
• Question banks tested in real projects, aligned where relevant with standards and ESG frameworks
• Assumptions about how change happens that draw on relational well-being, safeguarding, and political economy thinking
This means that teams are not starting from a blank page. The “infrastructure” already carries much of the methodological work that an MEL or social performance team would otherwise have to create and maintain on its own.
How they work together
In many organisations, the ideal scenario is “case management plus learning infrastructure,” not one or the other:
• The case management system ensures individual duty of care and good practice in day-to-day service delivery.
• The learning infrastructure, backed by the account manager and knowledge base, ensures collective duty of care and strategic learning, linking your work to social value, risk, and accountability.
• Where needed, the two can be linked through data flows or APIs, so that aggregate insights from case management can feed into higher-level learning, and learning can in turn inform how cases are managed and prioritised.
In short:
• Case management systems help you look after each person.
• Learning infrastructure helps you learn with everyone involved and steer the whole system in a safer, more equitable direction, with a human partner and a tested social science backbone alongside the technology.
You can use Salesforce or an equivalent platform for parts of this work. Many organisations do. The real question is not “Salesforce or this platform?” but “What is each tool optimised to do, and what support and knowledge base come with it?”
What general systems like Salesforce are good at
Tools such as Salesforce, Dynamics, or generic survey platforms are powerful when you need to:
• Build CRMs, fundraising systems, or simple MEL dashboards
• Track contacts, pipelines, and transactions
• Configure custom objects and workflows with a skilled technical team
• Integrate many internal tools into one environment
They are highly configurable, but largely empty by design. All the logic about stakeholders, social value, risk, and safeguards has to be designed and maintained by you or by consultants.
What this learning infrastructure does differently
The platform here is opinionated and purpose-built for social value and “do no harm” in nature-based and impact projects. It embeds:
• Stakeholder mapping and power analysis as a core starting point, not an add-on
• A Theory of Change module that explicitly connects revenue, activities, outcomes, risks, and safeguards
• A curated indicator and question bank, grounded in social science, aligned with relevant standards and ESG dimensions, and already tagged to stakeholders, themes, and outcomes
• A risk register that follows the project workflow and brings social, political, safeguarding, and operational risks into one place
• Survey builder and data collection tools designed for remote, low-connectivity contexts and participatory processes
• Reporting and analysis tuned to social MRV (Measurement, Reporting, and Verification), rather than only to internal performance metrics
In other words, you are not just getting a database. You are getting a methodology embedded in the tool.
Two further pillars: the human account manager and the sociological knowledge base
There are two additional elements that generic platforms usually do not provide:
1. Human account manager
• Works alongside your team to design workflows, surveys, and reporting that are realistic and ethical
• Supports interpretation of results, rather than leaving teams with dashboards that are difficult to read
• Helps ensure that learning and “do no harm” are actually integrated into decision-making, not just reported upwards
2. Sociological knowledge base
• Underpins the platform’s themes, indicators, and question sets with social science and lived-experience insights
• Bakes in thinking about power, equity, relational well-being, and risk, rather than treating them as optional extras
• Evolves over time as more projects, sectors, and contexts are added, so that each client benefits from a growing body of practice-based evidence
This combination of tool + human support + knowledge base is what differentiates the offer from a configurable but empty shell.
Can Salesforce and this platform work together?
Yes. In many cases the strongest setup is:
• Use Salesforce (or similar) for CRM, fundraising, and internal operations.
• Use the learning infrastructure, account manager, and knowledge base for stakeholder-centred social measurement, risk, and learning.
• Connect them via APIs or data exports, so that social insights can inform strategy, investor reporting, and organisational performance.
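One plausible shape for such a data flow is sketched below: the learning infrastructure exports aggregate (never individual-level) insights as JSON that a CRM's API could ingest. The payload fields, indicator name, and aggregation rule are hypothetical, offered only to make the integration idea concrete.

```python
import json

# Hypothetical sketch of an aggregate export. A site's survey responses are
# summarised into one JSON payload a CRM such as Salesforce could ingest via
# its API. Field names and the indicator are invented for illustration.

def build_export(site: str, results: list[dict]) -> str:
    """Aggregate per-site survey results into a CRM-ready JSON payload."""
    total = len(results)
    positive = sum(1 for r in results if r["response"] == "improved")
    payload = {
        "site": site,
        "indicator": "perceived inclusion",
        "n_responses": total,
        "share_improved": round(positive / total, 2) if total else None,
    }
    return json.dumps(payload)

sample = [{"response": "improved"}, {"response": "improved"}, {"response": "no change"}]
print(build_export("Site-01", sample))
```

Keeping the export at aggregate level is a deliberate design choice in this sketch: it lets social insight inform strategy and reporting without moving individual-level records between systems.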
You could try to rebuild all of this on Salesforce or an equivalent platform, but you would need to design the methodology, standards mapping, social science logic, and support model from scratch. The learning infrastructure gives you that purpose-built backbone and a human partner, so that your team can focus on learning and acting, rather than constantly reinventing the system underneath it.