I still remember one of the first times I realized the political power of being a person of color. I was in college with a diverse group of students of color from a range of cultural, ethnic, and racial heritages, and we were sharing our experiences around race. At first I wasn’t sure that I belonged. So many of the other students had experiences that I hadn’t had, even many of the other Asian students. What kept me there, though, were certain moments of kinship through shared experience and the universal sentiment in the room that we had all been stereotyped before. After finally getting comfortable in that space, I remember an older student sharing that whereas people often assumed it was easier for them to get into school, they felt as if they had to work twice as hard to get half as far. Looking back, I can’t remember which was louder: the affirmation from other students in the room or my own internal “a-ha!”
Even now, two decades and four higher education degrees later, I feel like I’m running on a Möbius strip treadmill. I have years of experience as a community-based practitioner, educator, and researcher. I’ve participated in designing, conducting, and leading numerous research studies involving a broad range of methods and frameworks. Yet for everything I have done and still do, whenever someone asks me to help them evaluate a community organization and whenever I submit a final report to an organization or funder, I still feel inadequate. I still wonder, “Are you sure? Am I enough? Aren’t I an imposter?” And this happens even as I work my tail off going at 200%.
Conducting evaluation research for communities of color only amplifies these concerns. It’s not just the hollowing echo of “Am I good enough?” It’s now “Am I good enough for my community and the communities I care the most about?” It’s me recognizing the horrific history of what has happened when researchers with academic degrees and titles enter communities of color. There have been cases of intentional harm done; other cases where recognition and benefits were denied; still others that presumed to know what was good for us; and finally cases that produced evidence of what benefits us, and yet we continue to be experimented on without receiving adequate support toward the justice we need.
As daunting as this work is, though, I can’t help it: community-based research and education for communities of color are everything to me. Spending time developing research goals, conducting conversational interviews, and observing community processes fills my soul and excites my mind. Making sense of the data with fellow community members and testing out our co-designed questions, processes, and findings with partners helps me not only stay accountable to our communities but also grow more confident in our findings. To date, we are three years into our evaluation efforts, and we are still developing our framework and evaluation program. However, here are a few key throughlines that we have come to recognize:
Research for Communities Cannot Be Controlled. Early in our efforts to evaluate RVC, we repeatedly heard from people outside the organization that we needed to strive for the gold standard of evaluation research: randomized controlled trials (RCTs). RCTs are great in highly contained settings where college students needing extra spending money and medical patients seeking treatment can be randomly assigned to and blindly treated with Option A (the medicine) or Option B (the placebo). These treatments tend to be quite brief (minutes or even seconds), with only researchers knowing which treatment people received and whether there were any differences in easily measured outcomes. Not only do participants need to be “blind” to their treatment condition, the studies also need to be controlled enough that there can be no explanation for differences in those easily measured outcomes other than whether participants received Option A or Option B. For every loss of control, randomness, or blindness (i.e., whenever participants can make choices for themselves and know what is being done to them), the “value” of your research diminishes according to classical evaluation. Blindness, control, and randomness may be core values for classical research, but they do not align with our RVC values of transparency, self-determination, and liberation. Whereas classical research privileges the researchers’ desire for control, we wish to prioritize research that is community-led, community-defined, and community-serving first. This is especially critical given the history of attempts to control communities of color.
Causality in Communities Is More Complicated Than A Causes B. Again, the goal of classic evaluation research is to show that a change in A results in a change in B. Using this dominant-culture paradigm, we would seek to evaluate whether: 1) a change in RVC (e.g. its creation) results in a change in community leadership, 2) a change in community leadership results in a change in community organizations, and 3) a change in community organizations results in a change in community well-being. This might make for a picturesque and easy cascading graphic (see Model 1 below). Our experience and evaluation efforts, however, show something far more complex and richer. RVC is seeking to build capacity for communities of color, organizations serving communities of color, and community leaders of color. RVC is also constituted by all of these, and each of these influences RVC and influences the others (see Model 2 below). In fact, the primary way that RVC influences each of these is through the others, and RVC has also created spaces where leaders of color, organizations of color, and communities of color come together to support and learn from one another. And finally, as RVC continues to grow in staff size and staff tenure, we seek to change ourselves as part of our theory of change. Ultimately, whereas the previously described change model had three causal arrows to assess, this new model has 16 pathways for change.
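For readers who like to see the arithmetic, here is a toy sketch of the jump in complexity between the two models. It is purely illustrative: the actor names are our shorthand, and counting every ordered pair of the four actors (self-influence included) is one way to arrive at the 16 pathways described above.

```python
from itertools import product

# Model 1: a simple causal cascade -- three arrows to assess.
model_1 = [
    ("RVC", "leaders"),
    ("leaders", "organizations"),
    ("organizations", "communities"),
]

# Model 2: every actor can influence every other, and change can also
# loop back on an actor itself (as when RVC seeks to change RVC).
actors = ["RVC", "leaders", "organizations", "communities"]
model_2 = [(a, b) for a, b in product(actors, repeat=2)]

print(len(model_1))  # 3 causal arrows
print(len(model_2))  # 16 pathways for change
```

Each added actor doesn’t add one arrow; it multiplies the pathways an evaluation has to reckon with, which is why a single A-causes-B test cannot capture this work.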
Additionally, other stakeholders present in both models are traditional power-holders, including funders. In Model 1, dominant culture is invested in “helping” or even “saving” communities of color by funding RVC and seeking evaluations that show a high “return on investment” to funders, who can receive causal credit for all change. If Model 1 works, dominant culture is validated and can remain in place. In Model 2, funders can appreciate how complicated work is in communities of color, especially as we seek to fight social inequities caused by dominant culture. Investing in our work then represents the beginning of restorative justice between funders and communities of color, which can continue through accepting and further supporting the radical ways in which evaluation needs to change for communities of color. Classical evaluations of Model 1 in communities of color would hope to find that communities of color cannot change for the better unless funders support RVC, which will result in causal change. Community-driven evaluations of Model 2 would celebrate the many ways in which RVC is integrated into the cultural, community, and nonprofit ecosystems of Rainier Valley. The range of RVC efforts, programs, activities, and partners, and the ongoing growth of these activities in both breadth and depth, further evidence the degree to which RVC is willing to work twice as hard; but we won’t be satisfied with only half the impact. There is more still to come!
Community Wellbeing Cannot Be Measured by Test Scores. Along the way, we have reviewed, tried out, and even developed our own psychometric scales (“psychometric” is an academic term for measuring what people are thinking). Many of these scales come from reputable organizations and have been tried in other contexts with a range of success. What we have found through our own experience in using these scales is that we need to step back and ask: why and when should we use them? Some of the better-reputed scales were used in large studies involving hundreds of participants from nonprofits. Staff members from these nonprofits were likely members of larger organizations with high degrees of academic English proficiency. Reviewing our results from these surveys, we couldn’t determine whether RVC was having an effect or not. This might have been because of our sample size, as we partnered with a dozen organizations instead of hundreds. Or was it the time of year that people were taking the survey? Or did people understand the questions differently than participants in past studies had? Or were the organizations in our comparison group not perfectly comparable to our partner organizations? From these scales, we identified a smaller set of questions and then tailored them to be more specific to the outcomes we cared the most about. We also turned most of our questions from quantitative ones (e.g. “On a scale from 1 to 5 …”) into qualitative ones (e.g. “In your own words …”). What competencies were people developing, and how were they able to put them into practice? What additional opportunities did people want or pursue to develop strengths or to address challenges, individually or collectively, with colleagues from within or outside of their organizations? And again, because of our commitment to evaluation research for the community, our partners did not have to worry about us using the data to shame them for our own benefit.
Instead, when we had questions about our findings, we took our interpretations and questions back to our partners to ask for their opinions of our data and findings. Our data should, first and foremost, make sense to us.
Evaluation Methods in One Community of Color Don’t Always Translate to Other Communities of Color. Shifting our evaluation processes also allowed us to hear honest feedback about those processes from our partners. In one such instance, two RVC fellows were part of a community co-design process to survey Southeast Seattle families about how public schools engaged with them. Part of this process involved asking for community member feedback at each stage. At one point, we were feeling proud of ourselves for translating our survey into nine world languages. One RVC fellow reported back from her community, however, that even with our efforts at translation, we had fallen short. It was still clear that ours was an American survey with American questions asked in an American manner, even though it had been translated word for word into this community’s language. Doing justice to this community would mean inviting a small group of people to sit down with us, serving food, and asking a few open-ended questions before engaging in a longer open-ended conversation. Learning from this feedback, we provided multiple ways for members of this community to participate in our survey, including more open-ended conversations that provided far richer data than traditional test scores.
Again, these are only a few initial reflections on our efforts to understand ourselves for ourselves and to do justice by our communities of color. After publishing this post, we may hear back from our folks to adjust our claims and processes again. What we know is that the more time we spend at RVC reflecting on what we have done to understand where we are and where we might go, the more times we tell this story, the more powerful our stories and our communities become. We look forward to continuing to share our findings publicly, and if you are interested in joining our efforts, please let us know!