84. Barriers to Active Learning

Despite research demonstrating the efficacy of active learning approaches, observations of classroom instruction show limited use. In this episode, Lindsay Wheeler and Hannah Sturtevant join us to explore potential interventions to overcome the barriers to the adoption of effective teaching practices.

Lindsay is the Assistant Director of STEM Education Initiatives at the UVA Center for Teaching Excellence and an assistant professor. Lindsay’s background is in chemistry and she has a PhD in science education. Hannah’s a postdoctoral research associate at the center. Her PhD is in chemistry with an emphasis in chemical education.

Show Notes

  • Henderson, C., & Dancy, M. H. (2007). Barriers to the use of research-based instructional strategies: The influence of both individual and situational characteristics. Physical Review Special Topics-Physics Education Research, 3(2), 020102.
  • University of Virginia programs
  • Teach Better Podcast Episode 80
  • Smith, M. K., Jones, F. H., Gilbert, S. L., & Wieman, C. E. (2013). The Classroom Observation Protocol for Undergraduate STEM (COPUS): A new instrument to characterize university STEM classroom practices. CBE—Life Sciences Education, 12(4), 618–627.
  • POGIL: Process Oriented Guided Inquiry Learning
  • PODLive! Webinar
  • Meghan Bathgate — Postdoctoral associate at Yale University
  • Emily Walter — Assistant professor of Biology at California State University, Fresno

Transcript

Rebecca: Despite research demonstrating the efficacy of active learning approaches, observations of classroom instruction show limited use. In this episode, we explore potential interventions to overcome the barriers to the adoption of effective teaching practices.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

Rebecca: Our guests today are Doctors Lindsay Wheeler and Hannah Sturtevant. Lindsay is the Assistant Director of STEM Education Initiatives at the UVA Center for Teaching Excellence and an assistant professor. Lindsay’s background is in chemistry and she has a PhD in science education. Hannah’s a postdoctoral research associate at the center. Her PhD is in chemistry with an emphasis in chemical education. Welcome, Lindsay and Hannah.

John: Welcome.

Hannah: Thank you.

Lindsay: Thank you.

John: Our teas today are…

Hannah: I have a lemon-filled Earl Grey tea. [LAUGHTER]

Lindsay: I have my water.

Rebecca: I’m drinking English Afternoon.

John: And I have Blueberry Green tea. We’ve invited you here to discuss the study you’ve done on why STEM faculty are reluctant to try new teaching techniques. What prompted the study?

Lindsay: One of the big things that we try to focus on in our center is how we use local data to drive faculty development and help improve teaching and learning on our campus. As part of that, back in 2016 and 2017, we did a large-scale observation project where we observed over 200 STEM undergraduate courses. We wanted to look for differences in the instructional practices our faculty were using based on whether or not they were engaging with our center. And this was the beginning piece of everything we’ve done since then, because we did see differences in instructional practices between faculty who had and had not engaged with our center, but we didn’t see as much as we thought we would. So we really wanted to explore that further and understand what was hindering faculty from doing what they wanted to do, which was using evidence-based practices, particularly faculty who had gone through our Course Design Institute and had done other programs with us. These were things that we had heard anecdotally, but we wanted to measure them more systematically. That’s where Hannah comes in as a postdoctoral research associate, and I’ll let her talk about what we did to further explore what the barriers were.

Hannah: So I came into the project when Lindsay wanted to develop some kind of barriers survey. I started by going through the literature, and I found a lot of qualitative work that people had done in various fields looking at barriers to implementing evidence-based and research-based practices. A lot of different terms are used, so you have to know which ones to search for depending on which field, and which journal, you’re in. That was a bit of a challenge, but I was able to sort out all these different areas and find work that had been done both in DBER within specific fields and in the broader faculty development literature. So I pulled on all of those sources, but I did not find any quantitative survey instrument that delineated all of the different barriers that had been identified in the qualitative papers. I found a couple of surveys that had small sections on barriers, and I found a survey that looked at institutional climate, but I didn’t find any that delineated the many, many barriers I’d seen in the qualitative work. So I drew on all of that qualitative work to develop a survey instrument that we then piloted. That’s where all of this came from.

Lindsay: And to add to that, there are benefits to doing interviews and qualitative work, but we wanted to find a way to quickly but systematically capture these barriers. Because, as I mentioned, we are really interested in using locally driven data, and there are only so many people in our center who can do that work. That was part of the driving force behind developing the survey itself.

John: Just backing up a step, Hannah mentioned DBER. For our listeners, could you define that just so there is some clarity there?

Hannah: Yes, DBER is Discipline-Based Education Research. I’m a chemical educator, so I’m a DBER researcher. Biology educators and astronomy educators are also DBER researchers.

John: What did you find in the survey?

Hannah: That survey instrument covered not just barriers but also some related ideas, so it included a section on teaching-research identity, because in the literature the tension between teacher and researcher identities seemed like it might be part of the barriers. We added a section on that because we had also not found any survey instruments that delineated those identities in a quantitative way. So, moving on to the study: we piloted the survey with 86 instructors, a subset of the 150 who were observed in the study Lindsay mentioned earlier, which was part of the rationale for the current research. We got 86 complete datasets back from the 150 we sent it to. First, we had 46 Likert-scale questions: different statements about barriers that faculty participants could rate on a scale of one to five, from “This is not at all a barrier for me” to “This is a barrier for me all the time.” When we looked at the results across all of those Likert questions, the top barrier was lack of time: 65 percent said it was at least a moderate barrier for them, rating it at least three out of five. The second was tenure and promotion guidelines. The third was fixed seats and other infrastructure constraints, with 61 percent of faculty mentioning that. Number four was that students don’t come prepared, at 59 percent. And fifth was that too much prep time is required to implement these evidence-based practices, mentioned by 50 percent. We also investigated these barriers qualitatively. The open-ended question we asked was simply, “What barriers are most significant to you in your own teaching, and why?” That question was a bit different: faculty had rated all 46 Likert-scale statements, but this one was getting at, “Okay, now thinking about your own work, what is the most significant barrier for you?” So it was a slightly different question from the quantitative items, and it produced some very interesting results. The number one answer from these 86 respondents aligned with the quantitative results: lack of time, though only 57 percent mentioned it. Second was classroom space and lack of needed technology, at 22 percent. Third was lack of institutional support, and there’s a lot wrapped up in that. Fourth was a variety of student-related issues, such as student resistance and students not doing what they need to do, at 12 percent. And finally, lack of TA support and classes being too large came in at nine and eight percent. So that gave us a greater understanding of the number one issue for our particular faculty, as well as the overall landscape across all of the Likert-scale barriers, and it drove what we were doing in our research. One other result that came out of this work had to do with satisfaction and dissatisfaction with evidence-based practices. We asked the faculty who responded to the survey to go through a list of evidence-based practices and indicate which ones they used. For each practice, for instance collaborative learning, we asked whether they were satisfied or dissatisfied with it, or both, and collaborative learning was the practice people were most dissatisfied with.
When we compared that with their barriers results, we found descriptively that the faculty who were dissatisfied with collaborative learning had higher barriers across all of the barrier groupings on the survey (we ended up grouping the barriers into five groupings). They had higher barriers across the board, and we have been investigating what that means as we expand the study, wanting more data to really understand it and to look into the responses on why they’re dissatisfied, things like that. What came out of that was what Lindsay referred to: the need to support faculty not just before they implement an evidence-based practice, but while they’re implementing it. We found an excellent study by Henderson and Dancy from 2007, a qualitative study of physics faculty and the support they received. What they found is that for faculty who weren’t supported, running into what they called situational barriers while implementing a practice made them stop using the practice. We think our results really back up what Henderson and Dancy found: the need to support faculty once they start using a practice, helping them understand what barriers they’re going to face when they implement it, and then supporting them throughout the time they’re implementing. Otherwise, if they’re not aware of the barriers they’re going to face, they may stop using the practice altogether. So one of the tentative results from this pilot study was demonstrating the need to support those faculty.
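
For readers who want the tabulation spelled out, here is a minimal sketch of the arithmetic Hannah describes: for each Likert item rated one to five, the percent of respondents rating it three or higher (“at least a moderate barrier”), ranked to surface the top items. The item names and data layout are hypothetical illustrations, not the authors’ actual instrument or analysis code.

```python
# Minimal sketch (not the authors' code) of the Likert tabulation described
# above: percent of respondents rating each barrier item >= 3 on a 1-5 scale.
import pandas as pd

def top_barriers(responses: pd.DataFrame, k: int = 5) -> pd.Series:
    """responses: one row per respondent, one column per barrier item,
    with values 1 ("not at all a barrier") to 5 ("a barrier all the time")."""
    pct_at_least_moderate = (responses >= 3).mean() * 100
    return pct_at_least_moderate.sort_values(ascending=False).head(k)

# Example with hypothetical item names:
df = pd.DataFrame({
    "lack_of_time": [5, 4, 3, 2, 5],
    "fixed_seats":  [1, 3, 4, 2, 3],
    "student_prep": [2, 2, 5, 3, 1],
})
print(top_barriers(df, k=3))
```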

Rebecca: I was also going to say that a lot of times faculty don’t give themselves a break. The first time you do something, you’re not perfect at it. Just like our students, who aren’t perfect at something the first time, you have to practice and do it over and over again to get good at it. So I think it doesn’t hurt to remind faculty, when they’re doing something new, that the same will happen for them. [LAUGHTER]

Hannah: Exactly. There was a study that came out recently that followed five years of implementation. The first year went horribly, and they adjusted. It wasn’t until about the third implementation that things started to go much better and student resistance started to go down. So just recognize that the first time you implement, there will be a lot of barriers, there will be a lot of problems, and that’s okay. Keep going; this is a normal thing.

Lindsay: I think that’s really part of the importance of this: other people are struggling too. It helps to normalize the fact that when you try something new in the classroom and it doesn’t go well, that’s par for the course, and other faculty are going through that as well.

Rebecca: Those are some interesting results, but not entirely surprising. They’re similar to things we’ve heard and seen in other research. But it’s interesting that it’s from your specific institution and your specific faculty, and that the qualitative and quantitative pieces somewhat align. So what have you been doing with that data?

Lindsay: We have a few different programs that we are refining, aligning, and expanding based on what we’ve found systematically in these surveys of our faculty. One of these is our Ignite program, which we’ve been running with new faculty for the last few years. It’s a program meant to support faculty as they implement a newly redesigned course. These new faculty go through a week-long Course Design Institute with us, and then during the semester when they implement their new course, whether in Fall or Spring, they meet biweekly with one of our faculty developers and anywhere from five to ten other new faculty in a learning community, and they build on the things they’ve been learning about course design and implementation. So they’re really getting that support throughout the semester. One of the things that came out of our barriers survey, and the other work we’ve been doing, particularly around these observations, is that implementation is really important and we really need to support faculty through it. We have some studies, particularly around Ignite and new faculty, that demonstrate how important this learning community is, not just for the implementation but for the success of students. So now we are expanding our Ignite program to all faculty at our institution, not just new faculty. We’re doing that for the first time this Fall semester. That’s one of the programs we have refined based on the data we’ve been finding.

John: I think one of the benefits of that is that if one of the barriers is a departmental culture that prevents people from trying new techniques, bringing in more senior faculty might break that down.

Lindsay: Yes, and one of the places we’re beginning to expand as well is learning communities, particularly for mid-career faculty. Many of our Ignite faculty are now being tenured and becoming leaders in their departments, so how do we continue to foster and support them around teaching and learning?

Rebecca: Does your Ignite program come with course releases or does it come with time?

Lindsay: That’s a good question. We do not have course releases at our institution, but participants do receive a $1,500 professional development fund, which supports their continued development; they may be able to go to conferences, so they do get supported in that way. Another of our programs addresses something Hannah mentioned: the barriers around class size and TAs. Over the last few years we have developed a program called Spark, which is intended to support teaching assistants in the STEM departments. Over the last three years, more than 250 TAs have enrolled in our one-credit teaching methods course, where they learn about different pedagogical techniques and learning theory and are able to apply that every week as TAs in lab courses, discussion sections, and even co-instructor type roles. That has been a really important piece in supporting transformation in the STEM departments, because our TAs are really the primary point people in many of our first-year courses, so providing them support has been really transformative. A third thing we are doing in the center is around curriculum redesign. One of the things we found in the study, which I think you alluded to, was the differences between departments and the importance of departmental culture and departmental support in helping faculty utilize and implement evidence-based practices. So we are actually working with departments to think about not just individual courses, but what the curriculum looks like for an entire major: what do we want our students to know, value, and be able to do at the end of four, or five, years within different departments? This is something we’re doing this year, and we’re really working to refine our programming around curriculum development and redesign.

Rebecca: One of the themes of all three programs is curriculum development. What are some things specifically that you’ve implemented or changed consistently to help with some of the issues that you’ve identified?

Lindsay: As part of the redesign process, we don’t necessarily recommend a single type of redesign or curriculum. We really strive to use evidence-based practices, whether at the course level or the curriculum level, and to let faculty think about what best aligns with what they want students to get out of the course or the curriculum. For example, if one of their learning objectives has to do with being able to collaborate and communicate, we might recommend some sort of collaborative learning design to implement in their course. If they’re more interested in students engaging with the community, the actual design of the course might look a little different. So I don’t know if that answers your question, but we don’t necessarily recommend one particular approach.

Rebecca: If some faculty are resistant to evidence-based practices, and you were already introducing these practices in your programs, is there a different way that you’re presenting this information now to get faculty to buy into these practices more, especially considering time concerns, student resistance, and that kind of thing?

Lindsay: Interestingly enough, there are only a handful of faculty who I think are resistant to the idea of active learning. We’ve set up our Course Design Institute in such a way that we attend to motivation first, so we really get very little resistance to the idea of active learning or evidence-based practices. Faculty want to do it. Some of them do do it. They either feel like they can’t do it as much as they want to, or they do it and they’re not satisfied with their practice. With the exception of that handful of faculty, we really don’t run into the barrier of, “I don’t believe in active learning.”

John: And there’s probably not much you can do with those few. But I would think working with entire departments might help reduce some of the resistance, because when you have that sort of collaboration within a department, it becomes part of the department culture. How has that been working?

Lindsay: We have had a cohort of faculty within a department go through our Course Design Institute and then another program that paralleled Ignite and was specifically for STEM faculty. And this department really has transformed. It’s been about five years since they went through as a cohort. The culture in the department is focused on teaching and learning, and they continue to engage with our center. We recently started a SoTL Scholars Program, for the Scholarship of Teaching and Learning, and five faculty from that department went through it together this past year. They’ve started their own reading group. We’ve been looking at data from the department, and we see that student failure rates are going down, particularly for underrepresented students. So working with departments, I think, is really, really important, and we’re seeing the fruits of that.

John: Earlier, Hannah mentioned looking at issues of identity in terms of teachers and scholars. I would think the work you’re doing with SoTL might help unify those. Could you tell us a little bit more about the results you found and how you’ve been addressing them?

Hannah: What we expected to find was a correlation between teaching and research identities: that if you were high in teaching, you might be lower in research, and if you were high in research, you might be lower in teaching. What we found was that there was no correlation; you could have both. You could be both an excellent teacher and an excellent researcher, you could be really strong in both of those identities, or not. It was all over the place. Part of that is the sample size; we have since expanded the study but haven’t analyzed that data yet, so we’re looking into it more.

Lindsay: And to add on to that, the way we looked at identity was the idea of how connected you feel with a particular profession: whether you feel connected to the teaching community versus the research community. We also had a third aspect, work identity: how connected faculty feel to the university. What we found was that faculty who had a strong work identity, meaning they felt connected to the institution, felt that the department was less of a barrier to implementing evidence-based practices, and they didn’t perceive barriers related to supports, things like having TAs, classroom space, and so on.

John: Going back a little bit, you noted that one of the barriers some faculty mentioned was the size of their classes. How have you helped faculty get past that?

Lindsay: We’ve actually had conflicting results around that. Faculty perceive class size as a barrier to implementing evidence-based practices. But when we look at actual observations of those faculty teaching, we see that faculty who have engaged with our center use more evidence-based practices even when controlling for class size. So what we need to investigate further is how our center plays a role in reducing barriers for faculty. The sample we have from our survey results is much smaller, and we can’t really disaggregate it. There is something interesting going on with class size, and we’re not exactly sure what it is: whether it’s a perceived barrier or an actual barrier. But I might guess it’s a perceived barrier, because we do see more active learning even when classes are large. So faculty are able to do these things, but sometimes they may not think they can.

Rebecca: Or they might not know what practices work at a large scale, because there are different ways to implement… and so the more we expose them…

Hannah: Exactly. Yeah, they may be trying to use approaches that require a studio, and you can’t do that with a 500-student lecture, so obviously that particular evidence-based practice is not going to be useful in that case. But you can bring in smaller practices that are still powerful for getting students actively working and collaborating with one another: think-pair-shares, things like that. And there’s all sorts of great work going on now about what you can do with large classes.

Lindsay: And those are the things that we talk about in our Course Design Institute. How do you design your course, knowing that you have particular limitations because of things like class size? Or maybe it’s a required entry-level course, or maybe it’s an upper-level course, or a graduate course. All of those things are really important in thinking about the design.

Rebecca: Or the chairs don’t move.

Lindsay: But we do talk with them about how to deal with that. In a lecture hall where the seats are fixed, if you want to do group work, we have recommended that faculty, if they have space, leave every third row empty; that way you can actually access students, and students can turn around to work with the people behind them. So we definitely try to help them think about ways to get past what they perceive as barriers.

Rebecca: How to hack your classroom 101.

Lindsay: How to hack your classroom, I like it.

John: And actually, let me put in a plug for one of the Teach Better Podcast episodes, which came out in April, on the importance of classroom design. We’ll include a link to that in the show notes. The research they were citing finds that active learning helps, but that classroom design helps even after controlling for the use of active learning, so some of that flexibility is useful. You’ve been implementing this in STEM fields, but I think many of the approaches you found would work in other disciplines. Has the teaching center started to roll out some of these techniques more broadly throughout the institution?

Lindsay: I’m going to answer this from a much broader perspective, thinking about what we’re doing in terms of our programming and supporting faculty, and I think Hannah can talk about the more specific piece around gathering data on faculty barriers beyond our STEM faculty. One of the biggest things, which I think I mentioned at the beginning, is that we are really striving to use our own local data, in addition to the literature, to drive what we do. For us, this goes beyond just doing a needs assessment; it’s really doing research around teaching and learning at our institution. One of the pieces of evidence from our prior work is that the ways students engage with each other in class, and how the instructor sets up group work in class, are really important to student success. So this past year we have been collecting data to better understand not just faculty perceptions of how they design group work, how they assign students to groups, and what they do to assess group work, but also the student perspective. We are actually following students working in groups over time, having them reflect on their practice and share audio files and working documents, to better understand what’s going on in group work. We’re now using all of that data to develop an advanced collaborative learning institute for faculty that will draw not just on the published literature around group work, but also on locally derived data from both STEM and non-STEM classrooms. And it’s been interesting, because we tend to think of our disciplines as very distinct: “Oh well, STEM classrooms are very special, and they need to do these particular things.” But as we’ve interviewed faculty, the reasons they use group work are very similar regardless of discipline: they want students to develop professional skills. I think it’s really important to gather that data and understand this perspective, so that when we develop these programs and supports for faculty, we can actually speak to what the faculty are saying and use that to improve. So that’s just one example of how we’re broadening this idea of data-driven faculty professional development.

Rebecca: How are you gathering that data about group work?

Lindsay: In our center, my position is 50 percent research and assessment, so a lot of my work is assessing our programs and also gathering the data to drive programming. As we said at the beginning, my PhD is in science education, so this is my formal training. I have a group of three graduate students, as well as Hannah and another postdoc, who help support the research and assessment in the center. For example, as part of that group work study, one graduate student, over the course of two weeks, interviewed 19 faculty, producing over 1,000 minutes of interviews that had to be transcribed. I really have a committed group of graduate students and postdocs who help support this work, because they’re genuinely interested in helping make the improvements as well. If this were something abstract and unrelated to improving instruction, I don’t think we would have such buy-in from the people supporting the work. So we’re doing interviews with faculty, and students are submitting reflections, audio files, and documents; those are the data sources we have right now. We also have the syllabi and course documents that faculty have developed, which articulate how they are setting up group work or group projects.

John: That’s a great resource, I think, for all teaching centers, because most of us don’t do that, and it’s nice to see this sort of research. We often talk to faculty about the importance of doing SoTL research on how their classes are working, but teaching centers don’t always do as much assessment of how their own programs are working on their own campuses in this way. So it’s a nice example, I think.

Hannah: Right, and I can speak to the specific research we’re doing to expand from STEM into non-STEM fields, to get more of that research going across the university. The survey I developed has the barriers, the identity sections, and some qualitative background questions to try to understand where faculty beliefs come from. I have been working with STEM and non-STEM faculty now to expand it into the humanities, the arts, and the social sciences. We’ve been working with humanities faculty at the center, and I had a focus group this week with several scholars in those areas to talk about the language we use in the survey. What I quickly found when trying to expand the survey across the university is that the language you use is really important. STEM faculty are fine with the term “evidence-based practices.” As discipline-based faculty and researchers, we want to see the evidence: we want to know that something works, that a rigorous study backs up a particular practice, and once we see that, we’re ready to go for it. But when you bring that wording into the humanities, that’s not so much the crucial thing for them. They want to see that things work, but the type of research they do is very different, and when we use the term “evidence-based practices,” they think about it very differently than STEM faculty do. So we had to change the wording; we’re modifying the survey, how the questions are asked, the words we use, the assumptions we’re making. That’s been my job for the past few weeks and will continue, because it has proven quite challenging to make sure we’re not alienating people taking the survey to the point where they see certain words and think, “This doesn’t apply to me; I don’t want to take the survey anymore.” That’s been the challenge with expanding this beyond STEM: the language itself can be a barrier to people taking the survey, and then we don’t get the data we need. I’ve been working to figure out how we can talk about this in a way that lets us compare across all of these groups, while still getting useful data and not alienating groups within the different departments.

Rebecca: I think sharing a summary of that information would actually be useful for a lot of centers and researchers, too, because teaching and learning centers’ own advertisements may be alienating groups of people for the same reason, without anyone realizing it.

Hannah: Definitely, definitely. One of the humanities faculty members here at the center and I have been talking about that, and we may come out with a paper on the language that we use once we gather more data: what is useful and what is not useful, by discipline?

John: That’s something I wouldn’t have thought of, because we use a lot of evidence-based practices here all the time.

Hannah: Yeah, I didn’t think of it either, and so I was in for quite the shock when I started talking with humanities faculty.

Lindsay: And another thing to add, in terms of how we’re broadening this work: one of the places I’ve begun to explore is how we set up infrastructure at our institution so that we can systematically gather data, connect data sources, and then help faculty use that individually to improve instruction. It doesn’t do anybody any good if we gather evidence or do research on our own and then don’t do anything with it. So we’re developing our SoTL Scholars Program, as I mentioned, to help faculty learn how to do this research themselves. And we are developing a set of tools that we can use, for example, to go out and observe faculty teaching in their classrooms and then, from that data, create some sort of visualization that can be used in a consultation. We have a consultation program, as many centers at other institutions do, but what we really want to do is gather the data in a way that we can represent on some sort of timeline, where the faculty member can see, “Okay, the first 15 minutes of class was lecture; I asked a few questions here and there; students didn’t answer those questions,” or, “I answered them myself, or I moved on too quickly.” We’re really homing in on some of those small details that can help them make tweaks and improvements to their own instruction. So we’re working at that infrastructure level now to think about how to create these tools and set up databases so that we can gather data and share it with faculty.
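
As a rough illustration of the consultation timeline Lindsay describes, here is a sketch that assumes each two-minute observation window has been recorded as a set of activity labels; the labels and plotting choices are illustrative assumptions, not the center’s actual tool.

```python
# Hypothetical sketch of a consultation timeline: shade the two-minute
# observation windows in which each activity label was marked present.
import matplotlib.pyplot as plt

def plot_timeline(intervals, labels=("lecture", "questions", "group work")):
    """intervals: list of sets of labels, one set per two-minute window."""
    fig, ax = plt.subplots(figsize=(8, 2))
    for row, label in enumerate(labels):
        spans = [(2 * i, 2) for i, window in enumerate(intervals) if label in window]
        ax.broken_barh(spans, (row - 0.4, 0.8))  # (x spans, (y, height))
    ax.set_yticks(range(len(labels)))
    ax.set_yticklabels(labels)
    ax.set_xlabel("Minutes into class")
    plt.tight_layout()
    plt.show()

# Example: ten minutes of lecture, a question break, then group work.
plot_timeline([{"lecture"}] * 5 + [{"lecture", "questions"}] + [{"group work"}] * 4)
```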

Rebecca: A follow-up question about the observational research you did at the very beginning… What kinds of observations were you making, and what were you focused on? What were you looking for specifically…

Lindsay: Good questions. For the original observation study we did a few years ago, we used COPUS, the Classroom Observation Protocol for Undergraduate STEM. If you’re not familiar with it, COPUS records the presence or absence of various types of student and instructor behaviors in two-minute time increments. I trained 35 undergraduates to use COPUS reliably, and we observed each individual course twice. We were then able to calculate the percent of time the instructor spent lecturing, doing clicker questions, group work, or administrative tasks. We were recently co-authors on a Science publication where COPUS data were transformed into profiles, so we were able to categorize these classes as primarily lecture, which was greater than 80 percent lecture according to COPUS; interactive lecture, which was lecture with some clicker questions or other group work interspersed throughout; and student-centered instruction, which could be POGIL-type classes (Process Oriented Guided Inquiry Learning), or primarily group work: working on worksheets, doing problem solving, or a variety of other group activities. We observed 239 classes in all and were able to classify them into those three categories: lecture, interactive lecture, and student-centered. We then organized those classes based on the intervention the faculty had gone through, whether they had engaged in our Course Design Institute or done our Ignite program, and we actually observed a fair number of faculty who had never engaged with our center at all. That’s where we began to see differences: among our Ignite faculty, we saw much more student-centered instruction than among faculty who had never engaged with our center. We also gathered grade data on those classes. Do you want to hear about that?
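
To make the protocol concrete, here is a minimal sketch, under simplifying assumptions, of the two computations Lindsay describes: the percent of two-minute intervals in which a COPUS code was marked, and a crude three-way classification. Only the greater-than-80-percent lecture threshold comes from the episode; the other cutoffs are illustrative assumptions, and the published profile analysis was more sophisticated than this.

```python
# Simplified sketch (not the published analysis) of COPUS-style summaries.
# Codes: "Lec" = instructor lecturing, "CQ" = clicker question,
# "SGW" = students working in small groups.
from typing import List, Set

def percent_with_code(intervals: List[Set[str]], code: str) -> float:
    """Percent of two-minute windows in which `code` was marked present."""
    return 100.0 * sum(code in window for window in intervals) / len(intervals)

def classify(intervals: List[Set[str]]) -> str:
    lec = percent_with_code(intervals, "Lec")
    group = percent_with_code(intervals, "SGW")
    clickers = percent_with_code(intervals, "CQ")
    if lec > 80:                    # threshold described in the episode
        return "lecture"
    if group >= lec:                # illustrative cutoff, not from the study
        return "student-centered"
    if clickers > 0 or group > 0:
        return "interactive lecture"
    return "lecture"

# Example: mostly lecture, with some clicker questions and group work mixed in.
obs = [{"Lec"}] * 15 + [{"Lec", "CQ"}] * 5 + [{"SGW"}] * 5
print(classify(obs))  # -> "interactive lecture"
```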

Rebecca: Yeah.

John: Sure.

Lindsay: This is actually from a paper that’s currently in review, but the grade data were the thing that was really interesting to me. What we did was calculate a DFW rate: the rate of D grades, F grades, and withdrawals, so basically a failure rate for students in those 239 classes we observed. We were also able to calculate failure rates for underrepresented minority students, in this case Black and African-American students and Hispanic students combined, compared to white students in the same classes. Even with observations of 239 courses, when you start to look at the courses taught by faculty in the different types of interventions (Ignite, the Course Design Institute) and then break it down even further, “Let’s look at courses taught by Ignite faculty that did active learning, or lecture, or interactive lecture,” the numbers get very small very quickly. But one of the most interesting things we found descriptively was this: when you looked only at courses categorized as student-centered instruction (active learning, group work, those types of things), for faculty who had gone through programs like Ignite, or a similar program for STEM faculty called Nucleus, the gap in failure rates between white students and underrepresented students was nonexistent. When you looked at student-centered courses where the instructors had not gone through our Course Design Institute or any of our communities, the failure rates for underrepresented minority students were four times those of white students. Now, this is descriptive, not inferential, but it was one of the driving forces that made me realize we needed to look more closely at group work and what goes on in it, because it suggests that if you implement group work or student-centered instruction in your courses and you’re not supported in doing so, you may be doing a disservice to your students, and that seems to differentially impact underrepresented students more than white students. It was really disturbing to me that we saw those differences on average; this was not a maximum, this was a mean value. That was so important for us to explore further, and we would not have known it had we not done such a large-scale study and used our own data.
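
The DFW arithmetic is simple enough to show directly. A minimal sketch with hypothetical column names (this is not the authors’ pipeline):

```python
# Minimal sketch of the DFW-rate arithmetic: the share of enrolled students
# who earned a D or an F or withdrew. Column names are hypothetical.
import pandas as pd

def dfw_rate(grades: pd.Series) -> float:
    return grades.isin(["D", "F", "W"]).mean()

roster = pd.DataFrame({
    "grade": ["A", "B", "D", "F", "W", "B", "C", "A"],
    "urm":   [False, True, True, True, False, False, True, False],
})
print(dfw_rate(roster["grade"]))                       # overall DFW rate
print(roster.groupby("urm")["grade"].apply(dfw_rate))  # disaggregated by group
```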

Rebecca: That’s really interesting.

Lindsay: Thank you.

John: You’re making a big difference there, clearly.

Lindsay: We are, and it’s so exciting.

Rebecca: Yeah, I think sometimes we don’t always realize those other kinds of impacts. Or that there could be a difference in the kind of impact that one makes. So I think that’s a really interesting initial discovery to explore, so I’m really interested to see what else you find out.

Lindsay: So we wouldn’t have been able to make those findings had we not been able to connect to institutional data, and that’s another reason why developing this infrastructure is so, so important: we’re not going to be able to find meaning if we’re not connecting all of the pieces.

Rebecca: I think one of the things that is really interesting is that you’ve been able to do such robust research at your own institution and have the support to do that. Even how you structured that and how you’ve gathered that would be of interest to many other centers, I think. Sometimes the details of how you arrange that and organize it and how one thing led to another can help other organizations do something similar.

Lindsay: Thank you. I will put in a plug: in terms of helping other centers do this type of systematic research and assessment work, we had a PODLive! webinar on Friday, April 26. If you’re a POD member, you should be able to access it through their website to see what we talked about and what questions we ask ourselves as we go through the process of measuring impact.

Rebecca: Great. We will make sure we link to that in the show notes and let people know how to access that.

John: We always end with the question, what are you doing next? You’ve already described some things, but we’ll still ask anyway.

Lindsay: So if you can’t tell already, I’m really passionate about data… using data to help drive what we do to improve teaching and learning. The two big things that are next for me are building the infrastructure so that we can liberate data and use it meaningfully, respectfully, and purposefully to help improve instruction, and helping empower our faculty to do research on teaching and learning in their own classrooms, so expanding our SoTL Scholars Program and developing further supports in that area. That’s what’s next for me.

Hannah: And for me, I am working on a couple of projects related to the barriers work. We talked earlier about the humanities expansion, developing a survey instrument that can be given across departments, so I’m continuing to work on that: refining the language we’re using and making it relevant to them. And then we’re working on a national study. We implemented the pilot, whose results we talked about today, and then a second, much larger administration, also at UVA. Now we want to expand this into a national study, because the real beauty of this instrument is that it’s not just for us at UVA; it is meant to be a tool that any university, any department, can use. One of the findings that came out of our study was that the barriers differ by department. The barriers and the use of evidence-based practices differ by department; it’s not just one university being different from another university, it’s one department being different from another department at a different university. So this tool allows any department at any university to give it to their faculty and see, contextually, what the barriers are for those faculty. Across the board, time is usually the highest barrier, but what comes after that differs by department. Whether there are particular issues in one department, or with the teaching-research balance at a given university, all of that is going to be different. So the beauty of this instrument is that we can look at a variety of types of universities and departments and try to understand what the supports and barriers are across different institutions and departments: where there are trends and where there are not, where it is entirely dependent on a given context, and where we might see trends for tenure-track versus non-tenure-track faculty, general faculty, things like that. We’re really hoping to dig into a much larger sample in the coming year and investigate this further. And I will say that there are a couple of other researchers also working on this, so it’s an up-and-coming area of research: Meghan Bathgate at Yale and Emily Walter at Cal State Fresno are both doing studies on barriers and supports for faculty using evidence-based practices. I just wanted to note that we’re not the only researchers doing this. There’s a lot of great work going on, and I think this is an up-and-coming area that can really help move higher education forward and ultimately transform it, by understanding how we can help our faculty implement more of these practices that we know will better support our students.

Rebecca: Great, sounds like a lot of exciting things coming down the road for us to take in soon.

Hannah: Definitely.

Lindsay: Yes.

John: Thank you for joining us. This was a really interesting discussion, and I think many of us will reflect on it in our teaching centers.

Rebecca: Yeah, thank you so much.

Lindsay: Well, thank you, appreciate it.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

John: Editing assistance provided by Kim Fisher, Chris Wallace, Kelly Knight, Joseph Bandru, Jacob Alverson, Brittany Jones, and Gabriella Perez.