Frequently Asked Questions
Evaluation infrastructure built for the people doing the work
About Emergence Field Labs
Emergence Field Labs (EFL) is an evaluation and learning infrastructure organization built for nonprofits, grassroots organizations, and social sector practitioners. We build AI-powered tools that make rigorous, community-controlled evaluation accessible to any organization, not just those with research teams or big evaluation budgets.
The premise is simple: social systems fail not because of bad people or bad intentions, but because of weak feedback loops. Organizations collect data constantly but rarely turn it into learning. EFL exists to fix that, to build the infrastructure that helps communities structure their own experience, own their own data, and participate in a network of collective knowledge.
EFL is built for organizations doing the work: program managers, practitioners, field staff, and small evaluation teams inside nonprofits, foundations, civic organizations, community development groups, and academic institutions. Specifically:
- Organizations and individuals that need to measure their impact but lack a full-time research team
- Programs that serve multiple funders and need to attribute data across grants without duplicating work
- Grassroots and community-based organizations that want to own and control their evaluation data
- Networks and hubs that want to aggregate learning across member organizations while preserving local data sovereignty
- International development and peacebuilding organizations working in complex, multi-stakeholder environments
You don't need to be an evaluator or a data scientist. EFL is designed for practitioners, not methodologists.
The social sector has a measurement problem, but it's not the one most people think. The sector doesn't lack data. It lacks infrastructure that actually turns data into learning. Most organizations are stuck in a loop: collect data to justify grants already written, produce reports that don't inform decisions, and watch institutional knowledge walk out the door when staff leave.
At the same time, the tools available (expensive survey platforms, siloed databases, generic analytics software) were built for corporate research teams, not community organizations. They're hard to use, they extract value, and they own your data.
EFL was founded by practitioners with sixteen years of experience in international development and monitoring & evaluation, including field research across 30+ countries and coordinating the evacuation of 1,200 vulnerable Afghans after Kabul fell in 2021. We built EFL because we lived the problem. We know what happens when communities generate data that serves funder narratives instead of community learning.
Most evaluation tools solve a data collection problem. EFL solves a learning infrastructure problem. That means:
- Your data is yours — permanently. We don't sell it, use it to track you, or share it for advertising. It belongs to your organization from the moment you create it.
- AI assists, it doesn't replace. MERLin, our AI research assistant, is designed to build your organization's evaluative capacity, not substitute for it. Every feature is built to keep human judgment at the center.
- Qualitative and quantitative are treated as co-equal evidence. Field observations, voice notes, and open-ended responses aren't footnotes. They're first-class intelligence.
- The platform is built for the field, not the boardroom: voice-first collection, low-friction design, and mobile accessibility for populations with limited text access.
- Network learning without extraction. Organizations can learn from each other's work without surrendering their data.
We're in beta. We're honest about what's built vs. what's on the roadmap. If you want the plain-language answer on current vs. planned features, just ask.
The Platform
EFL is an end-to-end evaluation platform. That means it supports the full research and learning lifecycle, from designing a framework, to building and deploying surveys, to analyzing results and generating learning. The core modules are:
- Framework Builder — AI-assisted research design that turns your program logic into a structured, measurable framework
- Survey Design & Distribution — Generates quantitative and qualitative survey instruments directly from your framework
- Field Notes — Captures real-time qualitative intelligence from the field via voice or text
- Insights and Learning Dashboard — Real-time analysis integrating quantitative data and qualitative insights
Each module is connected. Your framework drives your survey design. Your survey feeds your dashboard. Your dashboard connects back to your framework. That's not an accident; that's what a learning system looks like.
The Framework Builder is where evaluation starts. It's an AI-assisted conversation that helps your organization design a structured research framework, the backbone that everything else connects to.
The first framework built into EFL is the Theory of Change, structured in an IF/THEN/BECAUSE format. The BECAUSE statement is the heart of it: it forces your team to surface and make explicit the assumptions you're making about how change happens. Most evaluation failures begin with untested assumptions. EFL makes those assumptions visible and testable.
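To make the idea concrete, here is a minimal sketch of how an IF/THEN/BECAUSE statement could be represented as structured data. The field names and example text are illustrative assumptions, not EFL's actual schema:

```python
from dataclasses import dataclass

@dataclass
class TheoryOfChangeStatement:
    """Hypothetical sketch of an IF/THEN/BECAUSE statement (illustrative fields)."""
    if_activity: str          # IF we do this activity...
    then_outcome: str         # THEN this change happens...
    because_assumption: str   # BECAUSE we believe this mechanism holds
    assumption_tested: bool = False  # surfaced assumptions start out untested

toc = TheoryOfChangeStatement(
    if_activity="we train community health workers",
    then_outcome="household care-seeking improves",
    because_assumption="trusted local messengers change health behavior",
)
print(toc.because_assumption)
```

The point of the structure is that the BECAUSE clause is a first-class field: the assumption is written down explicitly, so it can be tracked and tested rather than left implicit.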
The Framework Builder isn't just a form. MERLin, our AI assistant, asks clarifying questions, flags logical gaps, distinguishes activities from outcomes, and helps your team develop a coherent causal story. The goal is to build your team's evaluative thinking, not to produce a beautiful document that no one uses.
Theory of Change is the first framework available. EFL's roadmap includes 30+ frameworks across six evaluation families, from Logframes and Logic Models to Developmental Evaluation, Outcome Harvesting, and more.
Once your framework is built, EFL automatically generates both quantitative and qualitative survey instruments aligned to your indicators and outcomes. You don't start from a blank screen; you start from your theory of change.
- Branching logic for adaptive surveys (if/then question sequences)
- Voice-first collection for populations with low literacy or limited text access
- Multilingual delivery
- Closed distribution to known respondent lists with unique identifiers, no open survey links that invite response manipulation
- Grant-specific tagging so data from multi-funder programs can be attributed without duplicating surveys
- Trainer/admin-level metadata capture (so a 12-year-old respondent doesn't have to spell their organization's name correctly)
The Learning Dashboard is not a static report. It's a living, adaptive view of what's shifting in your program, integrating quantitative data and qualitative insights in one place. Two primary views:
- Dashboard — Indicator Overview, Theory of Change & Assumptions tracking, Data Story, and a Learning Environment with natural language AI querying and structured drill-down exploration
- Narrative Story — Data journalism-style synthesis with embedded visualizations, AI-surfaced qualitative theme clusters, and chapter-based narrative intelligence
The underlying design principle: a dashboard that only shows you numbers is a reporting tool, not a learning tool. EFL's dashboard is built to support four types of knowing: factual (what happened), procedural (what patterns emerge), perspectival (how does this look from different vantage points), and participatory (what does your team make of this together).
The AI proactively surfaces patterns you didn't ask about: anomalies, connections across data types, questions your framework may not be tracking. Every session is designed to generate at least one insight the user didn't think to look for.
Field Notes is EFL's real-time qualitative intelligence layer. It lets program staff capture observations, context, and reflections from the field, via voice or text, and integrates those insights directly into the Learning Dashboard alongside quantitative data.
Most systems capture what gets measured. They miss what gets observed. Field Notes captures early indicators, implementation barriers, context, and the lived experience of practitioners doing the work. The AI structures raw input after capture, so staff focus on the moment, not on form-filling.
The design principle: ultra-low friction. Capture first. Structure later. No bureaucracy.
MERLin — The AI Research Assistant
MERLin is EFL's AI research assistant, embedded across all five platform stages: Diagnostic, Framework Builder, Research Approach, Survey Design, and Results & Analysis. It's not a chatbot. It's a structured thinking partner that guides you through the evaluation lifecycle.
MERLin is built on Anthropic's Claude models and governed by a detailed prompt specification that controls its behavior across every section of the platform.
MERLin is designed around three core behavioral commitments:
- Human-first. Users always write or respond before MERLin engages. You think first. MERLin responds to what you've said, not what it assumes you mean.
- Socratic, not prescriptive. MERLin asks questions to surface your thinking. It doesn't hand you frameworks; it helps you build them. That deliberate friction is the product.
- Intentional friction at high-stakes decisions. When you're making consequential choices about your evaluation framework or assumptions, MERLin slows the process down. Speed is not the goal. Learning is.
This is rooted in a core design principle: the difference between agency and autonomy. Agency is the capacity to achieve outcomes. Autonomy is the right to set the criteria by which outcomes are judged. EFL is designed to build your organization's agency, its evaluative capacity, not to substitute for your judgment.
MERLin can help you build your framework. It won't build it for you, and that's by design.
One of the central risks in AI-assisted evaluation is what we call constitutional drift: when AI efficiency gradually displaces your organization's own standards, values, and judgment. An evaluation framework that MERLin produced wholesale would reflect what MERLin thinks your program should look like, not what your community actually believes about how change happens.
That's not evaluation. That's performance. MERLin is designed to make you a better evaluator, not to make evaluators unnecessary.
Theory of Change is the first fully built framework in EFL's platform. MERLin's eventual diagnostic architecture will support 30+ frameworks across six framework families:
- Causal Logic — Theory of Change, Logframe, Logic Model, Results Framework
- Participatory & Community-Centered — Most Significant Change, Outcome Harvesting, Appreciative Inquiry, ABCD
- Complexity-Aware & Adaptive — Developmental Evaluation, Realist Evaluation, Contribution Analysis, Outcome Mapping
- Systems-Level — Systems Change Evaluation, Collective Impact, Causal Loop Diagramming
- Qualitative & Interpretive — Case Study, Narrative Inquiry, Phenomenological Inquiry
- Experimental — RCT, Quasi-Experimental, Pre-Post / Longitudinal Cohort
MERLin will diagnose which framework best fits your situation, based on your program type, funder requirements, organizational capacity, and the nature of the change you're working toward.
Data Ownership & Governance
Your organization owns your data. Not EFL. Not funders. Not the platform.
EFL operates a dual structure: EFL Inc. (the for-profit company that builds and maintains the tools) and the EFL Data Trust (a nonprofit entity that serves as the legal data trustee for all organizations on the platform). Your data agreement is with the nonprofit, not the operating company.
EFL has no competing interest in your data. We don't monetize it, analyze it for our own purposes, or share it with anyone you haven't explicitly authorized. The Data Trust is structured to make sure that remains true even if EFL's business circumstances change.
We want to be honest about this rather than over-promise. EFL's current technical architecture is centralized, built on Firebase. That means EFL engineers do have technical access to the underlying database. We can't claim architectural separation from that access.
What we can commit to:
- All admin access is logged with full audit trails
- Data access policies are governed by the nonprofit Data Trust, not the operating company
- No community data is used for training AI models or shared with third parties without explicit organizational consent
- Access for technical purposes is only permitted when explicitly authorized by the client organization
The roadmap moves toward federated architecture, where each organization controls its own data store and analytical queries are processed without raw data moving between nodes. That's the direction we're building toward, and we'll tell you honestly when we get there.
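To sketch what "queries without raw data movement" means in practice: each organization's node answers with aggregates only, and the coordinator combines those aggregates. This is a generic illustration of the federated pattern, not EFL's actual protocol:

```python
# Illustrative federated query: raw records never leave an organization's node.

def local_aggregate(records: list[float]) -> dict:
    """Runs inside an organization's own data store; returns only a count and a sum."""
    return {"n": len(records), "sum": sum(records)}

def federated_mean(node_responses: list[dict]) -> float:
    """The coordinator sees only counts and sums, never the underlying rows."""
    total_n = sum(r["n"] for r in node_responses)
    total_sum = sum(r["sum"] for r in node_responses)
    return total_sum / total_n

org_a = local_aggregate([3.0, 4.0, 5.0])  # stays on org A's node
org_b = local_aggregate([6.0, 7.0])       # stays on org B's node
print(federated_mean([org_a, org_b]))     # 5.0
```

Network-level learning comes from the combined statistics; the individual responses stay under each organization's control.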
Security, Privacy & Data Commitments
No. Never. This is not a policy preference that could change with a new business model or a new leadership team. It is a structural commitment encoded into EFL's governance architecture.
EFL will never sell, license, or transfer your organization's data to any of the following, under any circumstances:
- Government agencies (local, state, federal, or foreign)
- Law enforcement or intelligence agencies
- Corporations, advertisers, or marketing firms
- Academic researchers or institutions
- Consulting firms, think tanks, or policy organizations
- Funders, including funders of your own programs
- Other nonprofits or civil society organizations
- Any third party not explicitly authorized in writing by your organization
The EFL Data Trust — the nonprofit entity that holds your data — has legal authority over how data is handled. The operating company builds the tools, but cannot access or commercialize community data outside of what the Data Trust permits. This isn't a policy preference that could quietly change; it's a structural constraint built into how EFL is organized.
No. Your organization's data will never be used to train any AI model that EFL uses.
EFL accesses AI providers exclusively through commercial APIs under paid plans. We only work with providers whose commercial terms explicitly prohibit using customer data for model training. That commitment applies to every provider we use — now and in the future.
If a provider ever changes their policy, we will either renegotiate under a Zero Data Retention agreement, switch providers, or remove that integration. We will notify affected organizations before any such change takes effect.
Access is governed by role-based permissions and full audit logging. Within EFL:
- Your organization's administrators control who on your team can access your data and at what permission level
- EFL engineers have technical access to the underlying database in the current Firebase-centralized architecture; we do not hide this fact
- All EFL staff access to any client data is logged with a full audit trail, and access is governed by the EFL Data Trust's policies, not by the operating company's discretion
- No EFL employee may access client data for commercial, analytical, or training purposes — only for technical support explicitly requested by the client organization
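As a rough sketch of what "logged with a full audit trail" could look like, every staff access might be recorded as an append-only entry naming who accessed what, why, and under whose authorization. The fields below are illustrative assumptions, not EFL's actual logging schema:

```python
# Hypothetical append-only access audit log (illustrative fields).
from datetime import datetime, timezone

audit_log: list[dict] = []

def log_access(actor: str, dataset: str, purpose: str, authorized_by: str) -> None:
    """Record who accessed which dataset, for what purpose, and who authorized it."""
    audit_log.append({
        "actor": actor,
        "dataset": dataset,
        "purpose": purpose,
        "authorized_by": authorized_by,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

log_access("efl-engineer-1", "org42/responses", "client-requested restore", "org42-admin")
print(len(audit_log))
```

The key property is that entries are only ever appended, and each one ties a technical access back to an explicit client authorization.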
As EFL moves toward a federated architecture, even technical staff access will be architecturally constrained: queries will be submitted to data stores without raw data ever moving to a central server. We'll tell you when that architecture is live.
EFL collects only what's necessary to run the platform. We don't build profiles on your respondents, track behavior for advertising, or pull in data from outside sources.
What we collect:
- Organizational profile data: name, sector, program descriptions, geographic context
- Evaluation framework content: Theory of Change, indicators, assumptions
- Survey instruments and response data: structured and qualitative responses
- Field notes: voice recordings and text notes captured by program staff
- User account data: email addresses and role assignments
What we do not collect:
- No personally identifiable information about survey respondents beyond what your organization explicitly chooses to capture
- No device tracking, behavioral profiling, or advertising data of any kind
- No data from third-party sources about your communities or beneficiaries
EFL currently runs on Firebase (Google Cloud Infrastructure), hosted in the United States. Organizations with specific data residency needs should contact us to discuss options.
Many EFL users work in contexts where data exposure carries real risk: democracy and civic programs, peacebuilding work, programs serving undocumented populations, whistleblower support, and post-conflict community development. We design for those users, not against them.
- Surveys go to known respondent lists with unique identifiers. No open survey links that expose respondent identity through participation alone.
- We build deliberate friction into data collection design, encouraging organizations to collect only what they need for learning rather than for performative reporting.
- Our long-term federated architecture distributes data sovereignty to each organization. A federated system is a security system; you can't extract what isn't centralized.
- We design toward minimizing what we hold rather than maximizing what we can collect. The less we have, the less can be breached, subpoenaed, or misused.
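To illustrate the closed-distribution point above: instead of an open survey link, each known respondent can be issued a unique, unguessable single-use token. This is a generic sketch of the pattern, not EFL's actual implementation:

```python
# Illustrative closed distribution: one unguessable token per known respondent,
# so there is no open link whose responses can be manipulated.
import secrets

def issue_tokens(respondents: list[str]) -> dict[str, str]:
    """Map a unique random token to each known respondent."""
    return {secrets.token_urlsafe(16): r for r in respondents}

tokens = issue_tokens(["respondent-001", "respondent-002"])
print(len(tokens))  # one token per known respondent
```

Because a token is required to submit a response and each token maps to one known person, stray or duplicated submissions can be rejected without collecting any extra identifying data.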
If your program works with populations at elevated risk and you want to discuss your specific threat model before using EFL, contact us. These conversations make us better.
Getting Started
EFL is currently in beta with a select group of pilot organizations. Onboarding follows a structured sequence:
- Intake conversation with MERLin to map your organization's context, programs, and evaluation needs
- Framework Builder session to design your Theory of Change and indicator framework
- Survey instrument generation from your framework
- Platform configuration including grant tagging, cohort setup, and distribution lists
- Dashboard access once data collection begins
The process is designed so that each phase delivers standalone value. Even if circumstances change, you leave each stage with something real and usable.
Still have questions? We're in beta, and we talk to everyone.
Request a Demo