Episode Transcript
@0:35 - Alexandra Mannerings (Merakinos)
I am so excited to be joined by Cindy from Results Lab, and we are going to ask the question in this conversation that I think many of us forget to ask. But before we get there, Cindy, why don't you go ahead and introduce yourself?
@0:50 - Cindy
Yeah, thanks Alexandra. Appreciate you having me.
We've had some previous conversations that were kind of fun, so it's good to continue. I'm Cindy Eby. I'm the founder and CEO of Results Lab.
And at Results Lab, we work with social sector organizations, primarily nonprofits, philanthropic foundations, and some government organizations, on how they can deepen their impact using data.
So we're really focused on good data use, but also good program design: how you think about what your impact is, what you're trying to achieve with those that you're serving, how you're going to do that, and then how you capture data on both of those things, what your results are and how you're getting there, to inform continuous improvement and to inform your journey with those that you're serving.
So we have a cycle we've developed, Align, Capture, Transform, that walks organizations through that and helps them do it in a way that ensures we're getting to that data use phase.
@2:14 - Alexandra Mannerings (Merakinos)
That's quite a nice summary of what you do, I know, and seeking that elusive, you know, truly actionable data utilization.
So, excellent. Well, thank you so much for joining us today. And, you know, you talked at a high level about figuring out how your programs work.
Are they working the right way? And how would we use data to improve on that? And so, I think a lot of organizations are very aware of their gaps in knowledge around evaluation, right?
They may not have a full-time evaluator. And so, they like to ask questions about how do we evaluate our programs?
And when we connected, we landed on a question that happened before that, which is how do you know if you are ready for evaluation?
Are you even ready for evaluation? So I'd like to back up to that part and say, what do you start with before you even try to design an evaluation?
What do you need to have in place?
@3:11 - Cindy
Yeah, it's a very good, very important question. In my experience, over my career and then at Results Lab, a lot of organizations come to us and say, we want to do an evaluation, or, what data should we collect?
And we can't answer that question unless we know what it is you're trying to do, and who you're trying to serve and how you're doing that.
And so for readiness for evaluation, really a major precursor is a well-defined program and well-defined outcomes.
So like I said, with our ACT cycle, it's important to think about who you're serving and what you're trying to accomplish with them, and to create a program that addresses that.
And, you know, lots of organizations have the program they operate, but when you start to dig into what it is you're doing for the people that you're serving, there's some fuzziness in there.
And so it's really diving a bit more deeply into the design of the program, and I highly, highly advocate for input from the people that you're serving. What is it that they need? Getting some feedback from them on where they're trying to get to and what services are most helpful is a really important step in that process. Program design is really important. So it's diving pretty deeply into that: defining who you're serving, what the outcomes are, and how you're going to get there, in some really picky detail.
So how long do they need to be in your program? How frequently do they need the service that you provide?
What is happening in that program? What's the content? And we would call that your program model. And everything I just described is sort of the dosage of the programming.
People don't really like to think of it that way. It sounds a little medicalized, which we don't want to do.
But theoretically, based on your experience, based on research that's out there, based on what your current client population says, what do you believe needs to happen to bring about the outcomes that you want to see? And define that really well, because then you can test against that. So that design work, inclusively designed and informed by research, is an important step.
@6:10 - Alexandra Mannerings (Merakinos)
That's great. I like the term dosage in that it gets to the point that if you're trying to say, does Advil help a headache, right, you need to say, well, like, how much Advil are you taking? When are you taking it? And no one would think to try to decide if Advil helps without settling those things. But when we're trying to think about social interventions, we're often very quick to skip that part. Like, oh, we did things and that's what matters. Like, we did programs, we had people show up, you know, there was activity. And yet, like you said, that fuzziness, that lack of specificity around the activity makes it basically impossible to use data in any meaningful way.
@6:56 - Cindy
Yeah, completely. And yet, to use your Advil analogy, like, if you're depressed, Advil is probably not going to help you. Or, you know, if you have a headache, is a partial dose of Advil going to help you? Maybe not. You know, so knowing what the problem is, what the challenge is, matters.
And getting a lot of input. I think program staff in nonprofits have great expertise. They have a lot of thoughts from their experience in running the program. We call this an impact strategy, by the way, an impact strategy and program model. But what do we think is really needed? What do our clients say is really needed? What does research say is really needed?
And building on that.
@7:47 - Alexandra Mannerings (Merakinos)
I like that triangle. That idea of, what does research say? What do our clients say?
@7:53 - Cindy
What does our experience and team say?
@7:56 - Alexandra Mannerings (Merakinos)
Yeah, right, that you're using all three of those to try and get towards your best guess. And it's okay if it's a guess, right? Like, the point is using the data to then test that and say, is our guess going in the right direction, or do we need to course correct?
@8:10 - Cindy
100% because often, like maybe someone looked at research at some point, but sometimes not. And in some ways, program staff will loop in the perspective of program participants, but really directly asking is important.
And that's often left out. If you think about human-centered design, your design is going to be faulty if you're not hearing from the humans that are going to benefit from what you do.
And to your point, it's continuous. It's your theory, it's your idea of what works well, and you're going to work to improve and refine it. Any organization out there should be doing this continuously. Like, what's working? What's not working? How do we improve it? And that kind of quality improvement work also comes before what you'd call evaluation. I would say quality improvement comes before evaluation. So if we're going to say evaluation is a one-time event, where someone comes in and looks at how the program is doing, what the outcomes are, and whether it's being implemented well, if evaluation is just doing that once, which often it is, then you need to be doing some continuous improvement before you do a more rigorous evaluation.
We can go down this tangent or not. There's something called an evidence continuum, with five stages. Stage one is having your impact strategy or theory of change. It's everything we just talked about: based on expertise, based on research, based on client input, what's your theory about what's needed and what will work?
Stage two is where we start to look at implementation. Are we doing it well? Are we delivering what we thought we'd deliver? If someone needed six coaching sessions in six months, are they actually getting them? And then you start to look at the short-term outcomes in step three. If you're not implementing well, there's no reason to look at outcomes. So first look at whether you're implementing well; step three is looking at outcomes.
Steps four and five are more rigorous; those are what I would call evaluation. So quasi-experimental designs in step four, and randomized control trials, experimental designs, in step five. So if we're calling steps four and five evaluation, there's a lot to do before that:
capturing data, using it for improvement. Are we implementing well? If not, why? You're doing those feedback loops and root cause analysis: if someone's not getting coaching sessions, why is that? It could be something as simple as they can't make that time, so we need to change our time. It could be something as complex as your content is wrong and it's not helping people, so they're not coming, in which case that's a bigger change.
So it's using data to implement well, to achieve outcomes well, and then the more rigorous study designs become more appropriate, I would say.
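[Note: for readers who want to try the stage-two implementation check Cindy describes, here is a minimal sketch of what a fidelity-to-dosage check could look like in Python. The six-sessions-in-six-months target, the record layout, and the sample rows are illustrative assumptions drawn from her example, not ResultsLab's actual tooling.]

```python
# A minimal sketch (illustrative assumptions, not ResultsLab tooling):
# did each participant receive the planned "dosage" of six coaching
# sessions within six months of enrollment?

from collections import defaultdict
from datetime import date

PLANNED_SESSIONS = 6   # what the program model says is needed
WINDOW_MONTHS = 6      # within this many months of enrollment

# Each row: (participant_id, enrollment_date, session_date),
# e.g. exported from a case-management system or spreadsheet.
session_log = [
    ("p01", date(2024, 1, 10), date(2024, 2, 1)),
    ("p01", date(2024, 1, 10), date(2024, 3, 5)),
    ("p02", date(2024, 1, 15), date(2024, 1, 30)),
]

def months_between(start: date, end: date) -> int:
    return (end.year - start.year) * 12 + (end.month - start.month)

# Count sessions delivered inside each participant's window.
delivered = defaultdict(int)
for pid, enrolled, session in session_log:
    if 0 <= months_between(enrolled, session) < WINDOW_MONTHS:
        delivered[pid] += 1

# Participants with zero logged sessions won't appear here; in practice
# you would seed this dict from the enrollment roster as well.
for pid, count in sorted(delivered.items()):
    status = "on track" if count >= PLANNED_SESSIONS else "below planned dosage"
    print(f"{pid}: {count}/{PLANNED_SESSIONS} sessions -> {status}")

on_track = sum(1 for c in delivered.values() if c >= PLANNED_SESSIONS)
print(f"Fidelity to model: {on_track}/{len(delivered)} participants got the planned dosage")
```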
@11:45 - Alexandra Mannerings (Merakinos)
I think these phases are a fantastic way of wrapping our heads around, like you said, all of this work of trying to get the thing we want to happen. And I think that idea of the theory of change makes sense to a lot of us; that's very central in a lot of nonprofit practices.
But the idea that then what comes out of that is that you should design something that you implement consistently.
And I think that we do want to jump to, how do we know what we're doing is working, right? We want to make sure we're doing the right things. We want to go straight to measuring it. And to your point, you're saying, well, no, you've got to get the behavior consistent, or the activities consistent. Because otherwise, what are you measuring? We mentioned that earlier.
You mentioned a little bit in step two about root cause analysis. And I'm curious what role does data play in steps one and two before we even get to the evaluation part?
@12:41 - Cindy
Yeah. I mean, in step one, in design, it's looking at what already exists out there. What has been proven?
@12:49 - Alexandra Mannerings (Merakinos)
I'm air quoting proven.
@12:51 - Cindy
There would also be a whole side conversation around what counts as evidence. So there's what's proven. And that's why I say that triangle of program folks, research, and your participants: those are all forms of evidence in my mind. So those are elements of data. You need to hear from the people that you're serving, because that's data, and they know what's best for their lives.
So that's data in stage one. Step two is when we start to look at, okay, we've defined what the dosage is, and we start to capture data on whether we're delivering that program. So is anyone in your program getting the dosage that you think they're going to need? We would call that fidelity, fidelity to the model. This is where I think people get tripped up: people can resist the idea that an exact dosage is right for everyone. And an exact dosage is not right for everyone. But you have a theory about what's going to work. It's okay to flex. You might find, while you're looking at data on delivery and then later on outcomes, that for certain folks in your program, more of something might work better or less of something might work better.
There might be variation within that.
@14:31 - Alexandra Mannerings (Merakinos)
It's not rigid.
@14:32 - Cindy
But it's intentional. It's the guide to say, here's what we think is needed to get the outcome that we're looking for with some flex around that.
Because humans are humans. To use an analogy, with nuance, humans are not hamburgers. You can't go down the line at McDonald's and put the patty down, put a slab of beef, put onions and some sauce on, and get the same thing every time.
@15:04 - Alexandra Mannerings (Merakinos)
We're not widgets.
@15:08 - Cindy
So yes, variation within your model, your dosage, variation in fidelity, is just fine. But it's unintentional variation that we want to avoid.
@15:23 - Alexandra Mannerings (Merakinos)
That really resonated with me, that idea of not rigid, but intentional. So you're not making willy-nilly, off-the-cuff variations. You're saying, if we think Advil works, then everyone needs to get Advil, right? We can't be giving some people gummy bears and some people, you know, scalp massages.
Because now we're not actually working on the same thing.
@15:48 - Cindy
We have a theory that Advil is going to help the headache, and we have defined the headache. Although everyone should probably have scalp massages always.
That should always be a thing. Yeah, yes, in total agreement there.
@16:04 - Alexandra Mannerings (Merakinos)
And so to your point, you could maybe even have somewhat of a plan around your variation: what things do we think should be consistent, like everyone should get at least one coaching session regardless, everyone should have one. Now, whether they have one or four, we're going to give some flexibility, but maybe that's also a data point that you collect: who got one, who got two, who got three. And you can use that to inform your theory of change, to say, like, do certain people respond better to certain numbers of coaching sessions, etc. But you want to have that intentional set of variation so that you are testing that intervention.
@16:40 - Cindy
Yeah, I mean, we'd encourage some consistency first around what we think is going to work. So yeah, do X, and then see, like, yeah, is it hard to deliver?
@16:55 - Alexandra Mannerings (Merakinos)
Can you deliver it? And how's it working?
@16:58 - Cindy
Like, yeah, is it feeling okay to people? And then, are you seeing outcomes?
@17:04 - Alexandra Mannerings (Merakinos)
Yeah.
@17:04 - Cindy
Like, maybe back to our headache example, maybe brain surgery is a great way to relieve a headache. I don't know. But that's not going to work for the people receiving the program. So you need an element of both quality and, like, is it working?
@17:27 - Alexandra Mannerings (Merakinos)
Is it acceptable? Is it...
@17:30 - Cindy
Are people okay with your program? Satisfaction is another way to say that. But then, yes, when you're looking at outcomes, is it working for some and not others?
How do you slice and dice your data? You're asking how you use data in the different phases, and in the next phase of looking at outcomes, is it working for some and not others? And if so, root cause it: like, why? That's probably where you're going to go talk with your program participants, I'm pretty sure. What's happening there? The quantitative data tells us the what: what are we seeing? We're seeing a variation that we don't understand. But it's the qualitative, talking to people, that helps you understand why. And so both pieces are needed there, in both phases.
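[Note: a minimal sketch of the "slice and dice" step Cindy describes, comparing a short-term outcome across subgroups so the quantitative "what" can point you to the qualitative "why." The field names, the cohort grouping, and the sample rows are illustrative assumptions, not a prescribed schema.]

```python
# A minimal sketch (illustrative field names and data): compare a
# short-term outcome across subgroups to spot variation worth a
# root-cause conversation with participants.

from collections import defaultdict

participants = [
    {"id": "p01", "group": "evening cohort", "met_outcome": True},
    {"id": "p02", "group": "evening cohort", "met_outcome": False},
    {"id": "p03", "group": "daytime cohort", "met_outcome": True},
    {"id": "p04", "group": "daytime cohort", "met_outcome": True},
]

# group -> [number who met the outcome, total in group]
totals = defaultdict(lambda: [0, 0])
for p in participants:
    totals[p["group"]][1] += 1
    if p["met_outcome"]:
        totals[p["group"]][0] += 1

for group, (met, total) in sorted(totals.items()):
    print(f"{group}: {met}/{total} met the outcome ({met / total:.0%})")

# A gap between groups is the quantitative "what"; interviews or focus
# groups with those participants supply the "why" before you change anything.
```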
@18:22 - Alexandra Mannerings (Merakinos)
wouldn't say.
@18:23 - Cindy
Yeah.
@18:25 - Alexandra Mannerings (Merakinos)
And you can also do that sort of quantitative-qualitative feedback loop, where you see a pattern, you see some variation, you do qualitative work to identify potential whys, get perspective, context, everything. And then you can try an intervention based on that contextual information that you got from your qualitative work, and use your quantitative data to say, did we get it? Or do we need to go back and refine this more? Right.
@18:50 - Cindy
What the audience won't be able to see when they listen to this podcast is that Alexandra was just making a circle with her hand.
Exactly. You capture the data, you look at it, you have some qualitative data to understand why, you make that change, and then you do that all again.
@19:10 - Alexandra Mannerings (Merakinos)
Right, and you make an intentional change. An intentional change that is hopefully then implemented consistently across your programming.
@19:19 - Cindy
Everyone agrees on the change and they all put it in place and they see what happens.
@19:22 - Alexandra Mannerings (Merakinos)
I wanted to go back to something that you said about your solution not working for everyone, because I think this is a place where, when we get evidence and we're testing an idea, especially in phases one, two, and three, we can get really stuck. You know, I heard this around alternative charter schools. There was a group of new charter schools that were testing a model, a very specific model.
They were tracking, they were sort of at phase three. They said, here's the problem we've defined: we want to work with students to be able to achieve these educational outcomes, and we think this model of intervention will do that. And then they measured kids' scores and attention and executive function to see if it was working.
And what people started pushing back with was like, well, but it doesn't work for this student or like this kind of student.
So therefore your intervention must not work. And I was wondering if you could talk a little bit about how we navigate that: like you said, we need to put in place a consistent enough intervention to have something to measure, but to your point, it may not work for everyone, even if we are narrowly defining our target population. There may be people where, like you said, if you have a brain tumor, Advil's not going to help; you need brain surgery.
So how do we put that into our process and handle that when we're testing interventions?
@20:45 - Cindy
Yeah, there's a lot in there. So I think there's some real questions of like, who was it not working for?
And who designed it, and how was it designed, from an inclusive data practice and an inclusive design standpoint? Since I don't know that specific context, it's kind of hard to answer that question.
Some things that come to mind for me are, like, if you look at the data objectively and see, and I don't know how it was determined that it's not working for certain students, maybe they weren't getting the educational progress that was anticipated, then something's clearly wrong. Something clearly isn't working for that group. And so I would say it's then what you just described before. Like, it's important to think about the population. Often that's called the target population.
@21:55 - Alexandra Mannerings (Merakinos)
We really need a better word than that.
@21:58 - Cindy
Sounds like you're taking aim. Well, for the population that you're focused on, there may be some groups for whom elements of your education process aren't working, so you need to understand why.
So it's doing that cycle and understanding why it might be working for some and not others. The other thing that comes to mind with the scenario you put forward is, when we think about asking questions about what works and what doesn't, when we see the program is working for some and not others, it's important to think about the question that we're asking. So we're phrasing it as, why is the program not working for these students? We could phrase it as, why is the school not delivering a program that works for these students? What is it that the school needs to change to deliver well for these students? So just shifting it a little bit, to say, what is not happening that might need to happen, what is the charter school not doing that they might need to shift, rather than saying there must be something wrong with that group.
@23:35 - Alexandra Mannerings (Merakinos)
Right. It's a really interesting point that even just how we ask the question, from a linguistic point of view, will ultimately impact the data that we think we need and the approach that we take to answering it. So I think that's a really important point. It might feel like nitpicking about semantics, but it's deeply, deeply meaningful what that reveals about our assumptions of where the cause lies and where the potential solutions lie. Because to your point, if we say, why does this program not work with certain students, that may lead us, like you said, to an underlying assumption that it has something to do with those students. Whereas it may be nothing that's under the students' control; it has to do, like you said, with the delivery of the program.
Now, one thing that I was also thinking, and I was pulling this example from several years ago, so I don't remember all of the details.
But I was also thinking about it from the point of view of my two kids who are wildly different from each other.
And I can almost guarantee that even if you implemented a program with incredibly high fidelity that was very functional for my son, it might not work for my daughter, just because of the difference.
If we say work, and I love what you said about defining what we mean by works and does not work, but like, if we're looking at my son trying to learn all of the things that are grade-appropriate, learn emotional control in elementary school, you know, and develop certain math and literacy skills, you could end up with groups that sort of have mutually exclusive needs. Even though we're definitely talking about, you know, elementary school students within the same intelligence range, I mean, from the same family even, right, like we're talking about a pretty tight group that we could get very specific in defining, you still might end up where, even if, like you said, the school were to adjust and try to provide something for one of them, maybe it wouldn't work for the other.
And so I'm curious about that, and maybe that's your point, that it's still possible to meet both of their needs.
@25:50 - Cindy
I don't know. That's where it comes from, a kind of curious point here of measuring and defining what works and how we track that. Yeah, I'm not an education expert, that's true.
@26:04 - Alexandra Mannerings (Merakinos)
I got stuck on that example.
@26:05 - Cindy
Sorry. It's curious to me, because I'm sure educators think about this all the time. But if we're using it as an example, yes.
@26:15 - Alexandra Mannerings (Merakinos)
Theoretical example.
@26:16 - Cindy
We're not trying to actually solve the education system. I think there's something about understanding the need of the individual that you're seeking to see change in.
Yeah, that's what any program should be doing. And then, how are you bringing about that change, or supporting them to make that change, whether it's learning or some other behavior change? What is going to help them? And that's going to look different with different groups. And so just knowing more about the population you're serving and hearing from them, I think, can get you really far.
@27:21 - Alexandra Mannerings (Merakinos)
It's back to the humans.
@27:22 - Cindy
She might not be able to articulate what it is that would work for her. But I bet a well-experienced educator can ask good questions about what would work for her.
And same with an experienced program person. They'd be able to ask some questions about, well, if we're doing this and that doesn't seem to be working, let's get some input from our participant, maybe we could do something different.
So it's using your expertise to ask the right questions. And that's the other thing: when we think about data interpretation and data use, it's all contextual. It's contextual based on who the data is coming from, who you're serving. And you need to bring your experience to the table to interpret what you're seeing. Because we all bring different baggage to sense-making of data. Program staff need to bring their expertise. That's why you also need to hear from the people you're serving, your program participants, to understand the meaning behind the data that you're seeing. I bet you, if you pull in the perspective of your clients, they're going to share something with you that is new for you.
Because they come from a different perspective. So I kind of went on a tangent.
@28:55 - Alexandra Mannerings (Merakinos)
No, and well, this is what I think is so cool. It was, again, challenging assumptions. I was defining a population based on characteristics of that population, like where they come from, you know, their family background; I was sort of, in my mind, making groups of people by age, etc. But you turned that around in a way that I think is critical, which is that really we should define the groups we want to serve by the particular need, and maybe the barrier. Right. Because your intervention, by definition, is going to be addressing that need and overcoming that barrier. And so you want, and I'm going to put air quotes around the word homogeneous, you want your group to be fairly similar in terms of how you define it around that need and barrier, because, to mix our metaphors and go back to our headache problem, you don't want to mix the brain tumor population with the headache population. They need different interventions, and you're just going to let one of them down. But if we're focused on the headache group, right, then we're going to be much better set up for success in creating a program that can be delivered consistently and see consistent results. So that, I think, already turned around and challenged the assumptions I was making around how we're going to define these populations. And I think that, again, the expertise comes into that, right: having people themselves tell us what their needs and barriers are, having the extra expertise of your program administrators. You know, to the point, elementary school students might tell us some things, but the educators are going to be able to tell us a lot more in other areas. And then looking at the data and the research around identifying those particular needs and barriers, and then having your program build that theory of change to address that need and barrier, and then seeing, did we do it.
@30:51 - Cindy
Yeah, yes, yes to all that. And it sounds like a lot, right? But people naturally do this in their daily work anyway. You're constantly asking a question and then seeking an answer. So let's slide some data into that.
@31:10 - Alexandra Mannerings (Merakinos)
As you're trying to formulate the question and then to answer the question. And I think it almost simplifies some things in that regard, because if we could get it down to a singular need and barrier, those feel more solvable.
Right.
@31:30 - Cindy
Yeah, I think that's important, because you can't do it all. I do think, you know, some ethics come up in this process. Like, well, we see our program works for folks with headaches, but not folks with brain tumors.
Terrible.
@31:54 - Alexandra Mannerings (Merakinos)
Sorry. I realize that took a dark turn.
@31:57 - Cindy
I didn't mean to. And so if we know it only works for headache folks, and we're not going to serve folks with brain tumors, what is then the responsibility of the program?
Is it to develop a new program for the brain tumor folks or to find a partner that does that?
So I think that might be something people grapple with, when we find there's a suggestion that you may not be able to serve everyone. So thinking about that intentionally, again, and how you might help those folks with other needs, is something to consider; part of your program might be referrals or something different.
@32:45 - Alexandra Mannerings (Merakinos)
And I think it's really interesting that when we talk about ethics, you know, again and again, people sort of push back around data being somewhat heartless or cold, or, you know, that it dehumanizes, right, that you're not just a data point. But when we think about that ethical responsibility, you may not even be aware of that need without the data that helps you understand you're serving one part of your population but not another.
@33:12 - Cindy
Yeah, exactly. And then is it really ethical to serve someone in your program when it's not going to help them?
@33:19 - Alexandra Mannerings (Merakinos)
Right.
@33:20 - Cindy
And then that, you know, those resources could have gone to someone else. Yep.
@33:24 - Alexandra Mannerings (Merakinos)
Or like you said, you could refer them to somebody who would be more likely to be able to help them.
So it's almost that we have a responsibility, a deep responsibility to understand very well who our programs work for.
@33:36 - Cindy
Yeah. Yes, we do. So it can help.
@33:42 - Alexandra Mannerings (Merakinos)
And data can help.
@33:43 - Cindy
Exactly.
@33:44 - Alexandra Mannerings (Merakinos)
Data can help. So if we've got a small nonprofit listening to this who's nodding along and going, yes, okay, I get this. I need to work through all of my phases. I need to understand, you know, how to achieve each of these different steps along the evidence continuum. I need to be making sure that I've got my programs in place, that I've defined my populations and my outcomes. What would be your recommendation for the first step, that first thing they can do that is going to bring about some meaningful change, where they're going to see some results from taking that step forward in this space?
@34:22 - Cindy
Yeah, I mean, I would go small with data. I'm a strong advocate for going from less to more. We see a lot of, you know, people who start capturing certain data because it was required by a grant. They just keep capturing it and they don't know why. They're not using it; they just keep collecting it. You ask why they're collecting this, and they're like, oh, we've always collected it. That's a resource drain. So I would say, start small and ask a specific question, like, what is challenging to me right now around my programming? And then capture some data on that. Don't capture data on everything, because you won't be able to process that. So ask a targeted question. Maybe you have a couple of offerings, you have coaching and you have training. Maybe it's just, we see, from observing, or our program staff are observing, that people aren't showing up to this training. Capture some information on that. Is it true? Like, who's attending the training and who isn't, that's some quantitative data. And then understand, capture the why. So just start small; building that small data muscle is, I think, a good focus. Don't boil the ocean.
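[Note: a small sketch of the "start small" question Cindy suggests, tracking who attended a training, who didn't, and the captured reasons why. The names and reasons are invented for illustration.]

```python
# A "start small" sketch (names and reasons invented for illustration):
# one targeted question, tracked in a few lines.

invited = {"Ana", "Ben", "Chloe", "Dev", "Ema"}
attended = {"Ana", "Chloe"}

no_shows = invited - attended
print(f"Attendance: {len(attended)}/{len(invited)}; did not attend: {sorted(no_shows)}")

# Pair the count (the "what") with short follow-ups that capture the "why".
reasons = {
    "Ben": "session time conflicts with work shift",
    "Dev": "didn't know the training applied to them",
}
for person in sorted(no_shows):
    print(f"{person}: {reasons.get(person, 'reason not yet captured')}")
```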
@35:52 - Alexandra Mannerings (Merakinos)
I love that you added: are people actually showing up to our training? It's like, okay, hang on, have we actually asked the question of how many people are showing up at our training? Because I think that's a really easy one to miss: you just assume you know what the problem is, you know what the question should be, and then you try to answer it without examining your starting assumption.
@36:12 - Cindy
You didn't actually test that. Right.
@36:14 - Alexandra Mannerings (Merakinos)
Right. So that's a great point, which is: test the question. If it is a real problem, then ask why.
@36:20 - Cindy
I think that's a great framework. Well, and in that question too, you could be asking quantitatively who is attending and who isn't, because there might be a difference. Mm hmm. Yep. That may tell you something.
@36:35 - Alexandra Mannerings (Merakinos)
Exactly. Exactly. I love this. Well, thank you so, so much for your time today. If folks want to learn more about you and about Results Lab, where can they go?
@36:42 - Cindy
Our website is www.resultslab.com. I think you'll have that in the show notes.
We'll have some other materials linked in the show notes. But on our website, I think we've got a button that says Let's Chat. You can reach out to anyone on our team. A quick way to do it is to go to the website, hit Let's Chat, and you'll get connected with somebody.
@37:16 - Alexandra Mannerings (Merakinos)
Excellent.
@37:18 - Cindy
We're always ready to chat with folks.
@37:21 - Alexandra Mannerings (Merakinos)
Excellent. Well, thank you so much.
@37:24 - Cindy
Thank you Alexandra. Appreciate it.