After a couple of years’ hiatus, I’m back to continue the refresher series on Results-Based Accountability (RBA)!
In the last installment, we began exploring RBA and its five key questions, starting with, “How are we doing?” Answering this question requires zooming out and looking at community-level information, or population indicators.
These population indicators provide a satellite view of how the population you aim to benefit is doing on the conditions of well-being you’re trying to achieve with them. Ideally, you should have at least one population indicator for each goal/result statement/condition of well-being. (Reminder: we track several population indicators relating to our focus areas on the TowerDATA page of our website.)
Previously, I mentioned that one of our goals is that “Communities value persons with learning disabilities and accommodate their needs.” One population indicator we track through a custom survey every two years is the percentage of community members who believe that employers provide enough support or accommodations for people with learning disabilities. Between 2017 and 2021, our Eastern Massachusetts surveys show a roughly five percentage-point decrease on this measure, raising questions about the causes.
This brings us to the second RBA question: “What’s the story?” Why are we seeing this decline in Eastern Massachusetts? Was there a high-profile discrimination case relating to denial of a reasonable accommodation that raised people’s awareness of this issue? Was new legislation passed? Are more people disclosing specific learning disabilities, increasing sensitivity to these issues? Would the decrease have been bigger but for our grants combined with others’ efforts? Or is the decrease just sampling error?
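That last question is worth a quick check before building any story around the numbers. As a rough illustration only (the percentages and sample sizes below are made up for the sketch, not our actual survey figures), here is a minimal two-proportion z-test in Python:

```python
import math

def two_proportion_z_test(p1, n1, p2, n2):
    """Two-sided z-test for a difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical figures for illustration only:
# 40% agreement among 400 respondents in 2017 vs. 35% among 400 in 2021.
z, p = two_proportion_z_test(0.40, 400, 0.35, 400)
print(f"z = {z:.2f}, p = {p:.3f}")  # roughly z = 1.46, p = 0.144
```

With samples of that hypothetical size, a five-point drop would not be statistically distinguishable from sampling error, which is exactly why the question belongs on the list before we reach for a more dramatic explanation.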
This second question highlights two key points: 1) there’s significant complexity behind even simple population indicators, and 2) it’s impossible to isolate your impact on an issue at the population level.
First, complexity. Many forces influence the issues we care about. Population indicators reflect the net result of various forces conspiring to advance or oppose our cause. Some are deliberate efforts by people and organizations, others are built into culture, policy, and social institutions. Gaining a comprehensive understanding of these forces is, frankly, overwhelming. We need to do our best to tease out the forces that are most important and those that are most amenable to change. We’ll revisit this in the next post.
Second, due to this complexity, it’s nearly impossible to make a direct link between your work and community-level changes. At best, you can consider how your efforts have helped to move things in a positive direction or prevented the situation from worsening. This can be unsatisfying, since we all want credit for a job well done. Our work seldom touches every community member affected by the issues we care about; population indicators include everyone in a community, whether we work with them directly or not. We can’t be accountable for the people we don’t serve!
Ultimately, population accountability means thinking in terms of contribution, not attribution. We contribute to population/community outcomes, but we can’t take full credit or blame.
So, what do we do with this? We ask our next RBA questions: “What works?” and “Who are our partners in this work?” That’s where we’re headed in the next post.
Note: You don’t have to field your own surveys to get at population indicators. Other agencies often collect data for their own purposes that might also suit your needs. Check what local, state, and federal government agencies are collecting and whether it fits your purpose; nonprofit organizations and foundations sometimes gather this information as well. Collecting your own data may be a necessary last resort if nobody else is gathering relevant information at all (which might be a story to consider in itself) or if the available data don’t come in a form that works for you.
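If you’re comfortable with a little code, many public datasets can be pulled directly. As a hedged sketch (the survey year and the variable code below are placeholders to adapt, not a recommendation of a specific indicator), here is one way to pull an American Community Survey estimate for Massachusetts counties from the Census Bureau’s public API:

```python
import json
import urllib.request

# Hypothetical example: pull one American Community Survey (ACS) 5-year
# estimate for every county in Massachusetts (state FIPS 25) from the
# Census Bureau's public API. B01001_001E (total population) is just a
# placeholder -- swap in whichever variable matches your result statement.
URL = (
    "https://api.census.gov/data/2021/acs/acs5"
    "?get=NAME,B01001_001E&for=county:*&in=state:25"
)

with urllib.request.urlopen(URL) as response:
    rows = json.load(response)

header, *data = rows  # the first row returned is the column names
for name, value, state, county in data:
    print(f"{name}: {value}")
```

Most statistical agencies publish similar documentation for their variables and geographies, so the real work is finding a measure that genuinely matches your result statement.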