An interview with Sally Leiderman

Dr. Kelly Hannum, founder of Aligned Impact, interviewed Sally as part of our Be Lumin-Us guest editor series. In this wide-ranging interview, Sally covers everything from her start in evaluation to racial equity to what she’d like folks in the field to know about how evaluative practice needs to shift.


Meet Sally

Sally is the President and one of the founders of the Center for Assessment and Policy Development (CAPD), a nationally recognized non-profit organization with a mission to improve outcomes for children, youth, families, and older adults by helping to build the capacities of those who do the day-to-day work on their behalf.

She has designed and implemented many multi-site, multi-year process and impact evaluations, using both traditional and non-traditional evaluation methods. She pioneered the use of logic model reporting forms to help community groups track their progress and document their successes and challenges in community change efforts.

Ms. Leiderman is also a nationally respected writer in the areas of civic engagement, leadership, structural and institutional racism, white privilege, and community building. With Maggie Potapchuk, Shakti Butler, and Stephanie Halbert Jones, she is a co-creator of www.racialequitytools.org and of the Transforming White Privilege: A 21st Century Leadership Capacity curriculum. She is also a co-author of Flipping the Script: White Privilege and Community Building, along with Maggie Potapchuk, Donna Bivens, and Barbara Major.


Kelly: How did you get into evaluation work?

Sally: It wasn’t planned. It was more a combination of needing to work as a teenager (I was raised by a single mother of four and money was very tight growing up), some unearned white privilege, and a really good public school education. This was in the sixties. Through the mother of one of our high school friends, my older sister and I got a summer job interviewing supermarket shoppers – a five-question survey, I believe. One big takeaway was that I was employable. Another was that surveys were a thing that existed. And a third was, sadly, that you couldn’t always trust survey data, because some of the other interviewers explained ways of making the job easier by faking additional surveys. From that early experience, I developed a healthy skepticism about data.

Around that time, I also was looking for a way to connect with my father, who wasn’t much a part of my and my sisters’ day-to-day lives. I knew he had a small business (the Marketing Science Institute), so I asked him for a part-time job, thinking that might be my opportunity to learn more about him. He made it clear that I would have to earn my keep, but took me on for a few weeks. I typed and proofed stuff, and learned about the business of research – basic things, like what RFPs were, and how to work with clients. That has turned out to be an invaluable set of information, and an incredible piece of unearned privilege.

When I was in college, I got a job at a survey research center because of my prior experience. I went to grad school but didn’t finish. I was working full-time, and I had kids, so it felt too overwhelming. I was lucky that, at the time, having an advanced degree didn’t matter as much as it does now. I could do the work and had the experience. I was self-taught, privileged, had a bit of luck, and evaluation work played to my strengths. The field was wide open for someone like me. I’m not sure that that’s still the case. 

Early on, I got a job with Booz Allen Hamilton. I was the study director for a $6 million USDA project. I set up offices in Alaska, Puerto Rico, and the continental United States. I was 26. Around 1976 or 1977, I decided I could use my skills to do good by doing social policy work. I changed jobs a few times and eventually went out on my own. I am so grateful that people pay me to look into things I’m interested in. It’s amazing.

Though it wasn’t a clear path at first, I developed a life-long passion to do applied research that might contribute to making people’s lives better. Growing up in a working-class household and neighborhood, I learned early on what economic inequity looked like, and that you could play by the rules and work hard, and still not have food on the table. From a young age, I could understand the privilege I was outside of by virtue of being poor, female, non-Christian, and not part of a two-parent household. That understanding motivated me to look for work that could contribute to a fairer world. It was much later that I was taught by very patient people about all the privilege I did have – whiteness, health, access to great public schools, the benefits of the GI Bill (which funded my father’s education), and the roof over our heads growing up.

Kelly: When and how did the connection between racial equity and evaluation become clear to you?

Sally: It was in the late 1980s. We had started CAPD, and I was working on a project for the AT&T Foundation with Gina Warren, who then went on to the Levi Strauss Foundation. The Levi Strauss Foundation was starting a program on institutional racism called Project Change, and she asked if I wanted to be the evaluator. I was like, yeah, of course, that sounds great. CAPD had white and black staff – so why not?

While Project Change was underway, CAPD was also approached to evaluate Healing the Heart of Diversity, which was a retreat series for cohorts of people doing diversity work in organizations, in universities, and in communities.

A few months into the work, Shirley Strong, then the Director of Project Change, and Dr. Patricia Harbor, the Director of Healing the Heart of Diversity, both made it clear to me that I didn’t know what I was doing when it came to race. They could easily see that I didn’t have a grounding in the history, the institutional structures, the policies, or the day-to-day grind of racism (what we now call microaggressions). They were tough and loving with me. They pushed me to take responsibility for my own learning, critiqued my work, and offered tools and experiences that I didn’t know existed. I hope they feel it was worth it, because for me, it was transformative. Once your eyes are open, they are open. Racism runs through every level of policy work. It’s built in everywhere; it’s how we got here.

Around that time, the Annie E. Casey Foundation asked me, Maggie Potapchuk, Donna Bivens, and Barbara Major to write something about white privilege for their program officers and staff. We wrote Flipping the Script. This is when Barbara Major explained to me what’s wrong with evaluation from a racial perspective. For example, she knew that faith-based programs worked in her community, but she needed a white evaluator to back that up. She explained that when she started her work, she needed a white accountant, a white lawyer, and so on to prove that she was managing the money correctly and doing things correctly. Similarly, she needed a white evaluator to give the work credibility. She had to spend resources on evaluations – and white evaluators specifically – that told her things she already knew.

At best, evaluation was neutral, but ideally, shouldn’t evaluation be on the side of equity?

When you frame evaluation that way, it’s a whole different set of questions: Where’s the power? Where’s the privilege? What/who is credible? Whose perspective counts? Whose idea of success matters? It broadened our thinking. It got me thinking about whether you can be an evaluator and not be part of the systems of racism.

I ended up focusing on white privilege for a few reasons. One is because I am white, and whenever I asked people of color (as many white people do) what I should do to counter racism, the answer was “teach your own.” I took that really seriously. Another is that you can’t work on interrupting systems of racism or have a conversation about race and not talk about whiteness. 

Kelly: Were there repercussions for CAPD? 

Sally: The repercussions for us were minimal. Initially, we got pushback. Some people left the organization, and there was work we didn’t get. “White privilege” was a difficult term at that time – people said you can’t say that, people will shut down if you say that. But we also got credit. People came to us because the network said these people are good evaluators and have credibility. And while we got pushback, it’s not like I was getting shot at: other people were taking stands that were far more dangerous. That gives you perspective.

Kelly: What changed in terms of your thinking about evaluation as a result of applying an equity lens? 

Sally: It amplified my skepticism of data, particularly data that aren’t disaggregated by race. It opened up a lot of thinking about the power dynamics in the processes and relationships within evaluation, and the obligation to put those dynamics on the table for all parties to consider, and to try to equalize them or shift power toward those with the least of it going in.
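The point about disaggregation is easy to make concrete. The sketch below is illustrative only – the numbers are invented, not from any evaluation Sally describes – but it shows, in pandas, how a reasonable-looking aggregate outcome can conceal a large gap between groups:

```python
# Illustrative sketch with invented numbers: an aggregate outcome
# can look fine while disaggregation reveals a large disparity.
import pandas as pd

df = pd.DataFrame({
    "race": ["white"] * 80 + ["black"] * 20,
    "completed": [1] * 60 + [0] * 20 + [1] * 6 + [0] * 14,
})

# The aggregate rate looks acceptable on its own...
print(f"overall completion: {df['completed'].mean():.0%}")  # 66%

# ...but the disaggregated rates tell a different story.
print(df.groupby("race")["completed"].mean())
# black    0.30
# white    0.75
```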

It also fundamentally challenged the questions we asked about evaluation itself. For example, what constitutes success and who gets to say so?  Who “owns” the data? Who gets to see data first, so that any mistakes the evaluator makes are corrected before false impressions are made in ways that often can’t be unmade? 

I also started thinking much more about the “political” nature of evaluation – in terms of how findings are used to support or reduce funding opportunities, and what level of rigor and integrity supports good decision making.  For example, what is the ethical imperative for evaluators to say, “that question isn’t answerable in the timeframe of this evaluation” or “it isn’t possible to accurately predict the value of this initiative given the complexity of the issues and the lack of certain voices?” To do that requires courage and humility, as well as power-shifting. 

I also think really good work is happening – like systems evaluations and work in complexity and ethnography. I don’t think we have a good handle on how much or where to invest in terms of evaluation. 

I think we should invest heavily in nuanced, multiple-ways-of-knowing evaluations in areas where we could make people’s lives much better or prevent them from being much worse.

For some programs, we should back off evidence-based approaches and stop making things sound evidence-based when they aren’t, and probably can’t be – at least not anytime soon. Sometimes it’s okay to do something because people say it’s a good thing to do. We don’t have to measure everything. We could focus our measuring in places where it is appropriate and can do more good.

Kelly: What are some of the positive things you’re seeing? 

Sally: I get asked to help people think like an evaluator. Groups can do that well, and they can do it for themselves. Helping people ask meaningful questions, and helping them think through how to get information to answer those questions, is a burgeoning area. I like when people who weren’t feeling free to ask the questions they really wanted answered start asking those questions. Asking good questions is the most important thing an evaluator does. The core of evaluation is thinking about what to ask, ways to get information, and figuring out the different perspectives. Personally, it’s the meaning-making and the applications that I get most excited about.

I like that people are talking about race, and calling out band-aid approaches that don’t really address core issues.

Kelly: When I look at most evaluation education programs, the focus is on methods. How do you think that influences practice? 

Sally: I have mixed feelings. I really like methodologies; they’re part of evaluation. But the technical should not drive the political. We need to think about what’s going to happen with results. How precise do they need to be? What will stakeholders consider credible data?

For years, I wanted to teach a methods course using Studs Terkel’s book Working: People Talk about What They Do All Day and How They Feel about What They Do alongside a longitudinal dataset about people at work. I’d have people look at what they could learn from each approach and reflect on how the approaches differed. I’d want people to really understand the kind of data that different methods yield and what you can and can’t answer with different kinds of data.

I’m opposed to pretending we have learned something when we really haven’t.

In the old days, there would be a claim like “This approach works for kids.” Then you’d ask, “Does it work for black kids?” and the answer would be, “Well, there were black kids in the sample” – and they’d be something like 2% of the sample. We’re doing better at digging into the details of what works for whom and in what contexts, but there’s more work to do.
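A back-of-the-envelope calculation shows why that answer never held up. The numbers below are hypothetical, but with a subgroup that small, the estimate is far too noisy to support any “works for whom” claim:

```python
# Hypothetical numbers: why a 2% subgroup can't carry a claim.
import math

n_total = 500
n_subgroup = int(n_total * 0.02)   # 10 kids in the subgroup
p_hat = 0.6                        # observed success rate among them

# Approximate 95% margin of error for a proportion (the normal
# approximation is itself shaky at n = 10, which is the point).
moe = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n_subgroup)
print(f"n = {n_subgroup}, estimate = {p_hat:.0%} ± {moe:.0%}")
# n = 10, estimate = 60% ± 30% – consistent with almost anything
```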

I’m really affected by Winners Take All by Anand Giridharadas. It goes back to the notion that people have been working the system, and it’s time to make the system work. We need to listen to different people to make that happen.

We need to recognize that evaluation has made some things worse. 

We should have a minimum level of rigor. We need to address systemic issues – like race and racism. We have to have multiple voices and multiple ways to make meaning. We have to get away from the notion that “if I follow this methodology, then it is rigorous.” We have to identify the minimum elements of an evaluation we can trust, and then pay for it.

We have to keep raising awareness, and promoting people who have clear awareness about what it takes to craft good questions and gather information that leads to better thinking and better decision-making – and who understand the larger systemic issues – like racism and inequity. I think that’s something EEI is doing, advocating for that shift. We need to understand the weaknesses and tradeoffs of different approaches. We have to pay for that, and we have to be sure it’s worth it. You can’t just trust an algorithm, you have to look at how and why it was created, who created it, and how it is being used. 

I’ve made mistakes myself. When you know better, you do better. I have a slide with things I’ve learned, and the three main things are:

  1. Figures don’t lie, but liars can figure. 

  2. The fact that something doesn’t work, doesn’t mean the opposite will work. 

  3. Data and stories don’t always align; when one of them is wrong, it’s not always the one you think. 

Kelly: What do you wish more people knew about?

Sally: I wish people knew that there are a lot of people who would like to learn simple ways of thinking like an evaluator. The bar to entry for that is not that high.

I wish people knew more about the flaws in data and how to work with that in transparent and useful ways.  

I wish people knew more about things like strategic questioning (Fran Peavey, Strategic Questioning: An Approach to Creating Personal and Social Change, 1997; see https://www.context.org/iclib/ic40/peavey/). The time it takes to come up with good questions is the most valuable time you can spend in an evaluation.

I wish I knew more about how to share thoughts and data through movement and artistic means – that’s a real weakness of mine. 

I wish more people knew about false precision. We need more people to stand up and say you can’t know that and here’s why.
