Students from marginalized groups often question whether or not they should be in our classes and disciplines. In this episode, Michal Kurlaender joins us to discuss an easy-to-implement intervention that faculty can use to improve retention and student success. Michal is a Chancellor’s Leadership Professor in the School of Education at UC Davis and is a co-Director of the California Education Lab. She is a co-author, with Scott Carrell, of a National Bureau of Economic Research working paper entitled “My Professor Cares: Experimental Evidence on the Role of Faculty Engagement.” (This article is forthcoming in the American Economic Journal: Economic Policy.)
Show Notes
- Carrell, S. E., & Kurlaender, M. (2020). My professor cares: Experimental evidence on the role of faculty engagement (NBER Working Paper No. w27312). National Bureau of Economic Research.
- Carrell, S. E., & Kurlaender, M. (2023, forthcoming). My professor cares: Experimental evidence on the role of faculty engagement. American Economic Journal: Economic Policy.
- Lumen Learning Waymaker packages
- Arcidiacono, P. (2020, February 26). “Differential Grading Policies.” Tea for Teaching podcast, Episode 122.
Transcript
John: Students from marginalized groups often question whether or not they should be in our classes and disciplines. In this episode, we discuss an easy-to-implement intervention that faculty can use to improve retention and student success.
[MUSIC]
John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.
Rebecca: This podcast series is hosted by John Kane, an economist…
John: …and Rebecca Mushtare, a graphic designer…
Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.
[MUSIC]
Rebecca: Our guest today is Michal Kurlaender. Michal is a Chancellor’s Leadership Professor in the School of Education at UC Davis and is a co-Director of the California Education Lab. She is a co-author, with Scott Carrell, of a National Bureau of Economic Research working paper entitled “My Professor Cares: Experimental Evidence on the Role of Faculty Engagement.” This article is forthcoming in the American Economic Journal: Economic Policy. Welcome, Michal.
Michal: Thank you for having me.
John: Our teas today are:… Michal, are you drinking any tea?
Michal: I am not drinking tea, but I did have some not too long ago today.
Rebecca: Do you have a favorite?
Michal: I’ve come back to Earl Grey. I used to be an Earl Grey person. I left it for a while, and it’s just made a comeback for me.
Rebecca: Nice. It’s a classic. I have Christmas tea today, despite the fact that it’s June.
John: And I’m drinking a ginger peach black tea today.
Rebecca: I have my Christmas tea because we had our presidential announcement today. And it was like celebration tea.
John: White smoke has come out of the towers [LAUGHTER] and we have a new college president here.
Michal: Congratulations.
Rebecca: So we invited you here today to discuss the study of the impact of specific faculty behaviors on historically underrepresented minority student success. How did you decide on this specific intervention?
Michal: My colleague and collaborator Scott Carrell and I do a lot of work to try to understand college access and success. In particular, we’re interested in understanding inequalities in graduation rates at more open-access institutions, like the California State University system, which has, across its system, some more selective campuses and some more open-access campuses. What we’ve noticed for years is that the graduation rates of students of color, particularly male students of color, Black and Latinx men, were really much lower than those of other groups. And this was a puzzle to us, largely because the eligibility requirements to get into four-year colleges, including the CSUs, are quite substantial. These are primarily B-plus students who have finished a comprehensive set of courses required to be eligible for the CSU. And so to see their graduation rates lag so far behind other students was really troubling to us. That’s why we decided to focus on the CSU system, and on one campus in particular, a less selective CSU campus.
John: What was the intervention that you used?
Michal: We didn’t go in knowing what intervention to use. We actually started with a focus group, particularly with men of color at this campus, and asked them what their challenges were. What we learned was that their challenges were not necessarily social or broadly campus level; they were primarily in the classroom. That is, they felt disconnected from their instructors and from what to do to be successful. These were all students who reported feeling quite successful in high school, feeling quite connected to their high school instructors who encouraged them to go on to college. Many of them were in competitive fields like STEM and engineering. And then they felt like they really struggled in college, in particular with how to seek help and how to understand what instructors wanted from them. So we came in quite agnostic, I would say, about what could work, what would be helpful here. Is it more writing centers, more coaching, more nudging? We didn’t know. And what we came out with is feeling quite sure that we needed to tackle the classroom. In particular, we wanted to think about this untapped source of potential support, or hindrance, that is faculty. Historically, we know that many times we just think of the classroom as kind of untouchable, and we put in other supports for students, writing centers and tutoring centers, and we kind of leave the classroom alone and leave faculty, including ourselves… we’re both faculty… to do what they will. Instead, here we wanted to really think about: could we intervene with faculty to provide more support for students?
Rebecca: Yeah, it’s funny how we often overlook that particular option, given that it’s a key touchpoint with our students, right?
Michal: Exactly. So we came out deciding that we were going to do a classroom-based intervention, one that would work with faculty to give students more information about what it takes to be successful in the classroom and how to seek help. Then we decided to pilot it to see if the proof of concept worked. We piloted it at a large, more selective institution as a small-scale pilot, found some promising results, and then launched it at full scale. The article describes that whole research process as well, which we think is also an important contribution to the literature: not to just immediately do something, but to actually think about the way in which it might function, and really to understand from students what they tell us they need.
John: Your initial pilot used a light-touch intervention, could you describe that intervention?
Michal: So initially, in the pilot, what we did was slightly underpowered, in the sense that we took students who didn’t complete their first homework assignment in a class where you have to complete a set number of homework assignments. You could miss one, but historically we knew that students who missed the first one often struggled in the class. So that’s the point of randomization we took for the pilot. That is, we took those who didn’t submit their first homework assignment, and to half of them we sent an email saying, “Hey, we noticed you didn’t submit your homework assignment. Just to remind you, to do well in this course, here are some things that might be helpful.” We also provided some information on what’s coming up, reminders of the office hours, and how to seek help. Then we sent two other emails. Importantly, those two other emails provided information that showed the students that we knew how they were doing in the class so far: one after the midterm, and one before the final. And we found, again, it was underpowered, but we found positive effects among the treatment group, conditional on some other ex post characteristics that we added to the pilot. Then we decided to launch it at the CSU campus that we worked with, at full scale, across a random sample of introductory courses.
John: For those listeners who are not familiar with statistics, when you mentioned that this is an underpowered test, could you just explain that in terms that…
Rebecca: …Rebecca can understand? [LAUGHTER]
Michal: Absolutely. So we were underpowered, statistically, to detect a statistically significant difference between the treated students, those who got the emails, and those who didn’t. For that, we need a large enough sample of treated versus control students, particularly if we’re going to add other kinds of observations about them, like their gender, or their race, or their prior academic achievement. So when we say something’s underpowered, we might see a positive effect, that is, better achievement on the final or satisfactory progress in the course, but that difference may not be large enough in statistical terms to consider it statistically significant, even if the mean differences are actually in the direction that you expect. To be able to actually detect significant effects, you need a large enough sample size. And that’s why we launched into the population that we were specifically focused on, which was the population at this less selective campus.
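[Editor’s note: for a concrete sense of what “underpowered” means, here is a minimal back-of-the-envelope sketch in Python. The numbers are purely illustrative and are not taken from the study; it simply computes the approximate sample size per group needed to detect a given difference in means.]

```python
# Approximate per-group sample size for a two-sided, two-sample test of a
# difference in means. Illustrative only; not the study's actual power analysis.
from scipy.stats import norm

def n_per_group(effect: float, sd: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """Sample size per arm to detect `effect` with the given alpha and power."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value, ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)           # ~0.84 for 80% power
    return 2 * ((z_alpha + z_beta) * sd / effect) ** 2

# Detecting a 5-percentage-point jump in a pass/grade rate (sd ~ 0.5) takes
# roughly 1,570 students per arm -- far more than a small pilot typically has.
print(round(n_per_group(effect=0.05, sd=0.5)))
```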
Rebecca: At that less selective campus when you scaled things up, did you keep the intervention the same? Or did it change? What did that scale up look like?
Michal: Great question. So the first thing we did was really focus on introductory courses. The pilot was also in an introductory course, but we wanted to focus on particularly large classes, especially because the information was going to come from the instructor and we were doing a randomized control trial, that is, some students are going to get this treatment and others are not. So the class had to be large enough for it not to be weird that some students were getting it and others were not. If you’re in a classroom of 30, it might be strange if you’re talking to someone next to you and they got this email from the instructor. So basically, we randomly chose 30 large undergraduate courses, and then we recruited those faculty and said, “Will you be part of an experiment with us? …and here’s what you need to do.” The important thing here is that we’re not trying to dramatically change faculty style in the classroom; we all have our own style, our own way we write a syllabus and what we expect from students. We had several key principles. The first was that faculty needed to directly communicate with the students, showing them that they know who they are. So the email very much said, “Dear Rebecca,” or “Dear John.” They needed to provide information that was specific to their class; it couldn’t be generic. We provided them some templates, but the goal was for them to personalize them and say, “Here’s this upcoming unit, here’s what to look out for, here’s how I would study for it, here are my office hours.” So it provided information. These were semester-long courses, and so we requested that faculty send a minimum of three emails to students: one after the first assignment or exam (sometimes earlier, if they didn’t have an early exam; we wanted it within the first three weeks of the semester), one after the midterm, and one before the final. Importantly, the second two emails were further personalized to say, again, “Dear Rebecca, I see that you’ve gotten a 72 on your midterm. It’s not too late to improve your grade in this class. Here are the things that you could do.” So it was personalized, also just showing that we know how you fared in this class so far. Again, the goal was to let faculty, in their own words and in their own course formats, personalize these emails, with the principles being information to students, personalization, and advice on help-seeking behavior.
John: And this process of personalization, was this done in a mail-merge type format? Because I would think scaling this intervention would be a lot easier if you did it either using the tools within the LMS or using some type of mail merge.
Michal: Great question. So again, this was a grant-funded study, where we could provide some support to faculty. Some faculty didn’t need additional RA support from us; they either knew how to do a mail merge or worked with their course management system, like Canvas or Blackboard, and found it very efficient to do on their own. Others, you may or may not be surprised, did request help from our graduate students, and we did provide support, including, in one case, helping a faculty member directly write the individual emails to students. You’ll probably ask me how the faculty felt about this. I will say we actually asked them how long it took. It didn’t take more than a minute per email, and so we do try to guesstimate the investment of faculty time to do this; it very much varies with their comfort level with the course management system.
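[Editor’s note: for faculty who want to try the mail-merge approach Michal describes, here is a minimal sketch in Python. The roster file name and column names are hypothetical; substitute whatever your LMS gradebook export provides.]

```python
# A minimal mail-merge sketch (not the study's actual tooling): read a
# hypothetical CSV roster exported from an LMS and fill in a message template.
# The column names ("first_name", "email", "midterm_score") are assumptions.
import csv
from string import Template

TEMPLATE = Template(
    "Dear $first_name,\n\n"
    "I see that you earned a $midterm_score on the midterm. It's not too late "
    "to improve your grade in this class. My office hours are listed on the "
    "syllabus, and here are some study suggestions for the next unit: ...\n"
)

with open("roster.csv", newline="") as f:
    for row in csv.DictReader(f):
        message = TEMPLATE.substitute(
            first_name=row["first_name"],
            midterm_score=row["midterm_score"],
        )
        # Drafts only: print for review rather than sending automatically,
        # so the instructor can personalize each email further before mailing.
        print(f"To: {row['email']}\n{message}\n---")
```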
Rebecca: In the scaled-up version of this study, did you continue only interacting with students who had struggled and missed their first assignment, or is that a shift from the pilot to the big study?
Michal: Yeah, thank you for catching that. No, it is a shift; we did open it up. Our priors were that, theoretically, anyone could benefit from this. Lots of research suggests that some students in introductory courses, particularly first-gen students, might take a B or a C as a signal that they shouldn’t be in a particular major. So even if you earned a B, or were on the cusp, we really did want to encourage everyone across the achievement distribution. As to John’s earlier question about scaling this up, people who have talked to me since this experiment ask, “What if I want to do this, but I teach 400 people?” You could, one term, try it with your lower achievers and, another term, try it with those in the C range, or others. But in this initial intervention, we did want to do it across the board.
John: How large was the sample in the scaled-up version?
Michal: It was 20 faculty, some of whom taught multiple classes; we had 22 classrooms overall, and roughly 3,000 students.
John: Excellent. How large was the estimated effect in the scaled-up version?
Michal: First, it’s important to note, as we’re talking about findings, that our findings are really concentrated among first- and second-year students, or new students, who are from underrepresented minority backgrounds, so URM students. We find that treated students are about five percentage points more likely to earn an A or a B in the course by comparison to control students. So again, just important to note, we find overall positive effects for the whole group of treated students, but they’re only statistically significant for the URM students that our intervention aimed to focus on.
Rebecca: Was the impact limited to just the classes the students were in, or was there an effect beyond that individual class?
Michal: Yeah, so that’s a great question, and we do find what are called spillover effects. That is, those same URM students had a positive effect of being in the treatment group even in courses that were not part of the experiment. That positive effect was much smaller in magnitude, about three quarters the size of the effect in the actual treated course, but still statistically significant at the .10 level.
Rebecca: It seems so easy, just three emails.
Michal: Yeah, it does. It takes an investment. But it does beg the question for me, and maybe this is something that you want to talk about a little later. We do so many things to orient first-gen students, or to get students into the classroom, and again to provide support externally, but we do tend to assume that they’ll just survive, or just be okay, in the classroom, without training faculty on how people might experience their classroom differently. And again, we do test for other subgroups. I focus on first-gen because it’s a helpful concept for thinking about students who don’t have people at home, at least, to tell them what to expect in the college environment: how you might go to a faculty member’s office hours and they’re not there, or they’re there, but they’re sort of like, “Yes, did you have a question about the material?” Whereas, to know that you could go just to review the material, or that you don’t have to have a question, you can just show up. Being able to feel comfortable asking questions in class, or stepping in at the end of class to say, “I found this part of the lecture really complicated, will you be reviewing it again the next day?” Things of that nature. I don’t think these emails did all of that, but they remind us that there are things faculty can do to remind students that they see them, that they see that they’re in the classroom, that they know they may be enjoying parts or struggling in parts, and that there are some actions they can take to be more successful in the classroom.
John: Did the effect size vary with class size? Was there a more substantive effect in larger classes than smaller classes? Because I would think it might be easier for students to feel more lost in a larger section, especially as a first-gen student?
Michal: It’s a great question. We weren’t able to test that; our class sizes were all pretty similar, and we didn’t have enough variation. We actually chose the largest classes that exist on this campus, which don’t get much bigger than 150. But I think it is worth testing, absolutely. I can tell you that the pilot, in particular, was in a class of 400 students, and for both the pilot and the full scale-up, we got to see the types of replies that faculty received. That’s one thing we could talk about: how did students respond more generally? Many students emailed instructors back, and they very much appreciated the email and said things like, “No faculty member has ever emailed me before,” or “especially in a class of this size,” or “I so appreciate the email. I’m working really hard.” So the first and sort of overwhelming finding from these replies is just gratitude from students that a faculty member emailed them, and we particularly noted that in the pilot, when the class was so large.
Rebecca: Yeah, I can imagine that just something that feels personalized, whether or not it’s deeply personalized, really helps students feel seen.
Michal: Exactly, that’s right. I should also say, in addition to just grades, and you asked me about graduation outcomes, we also included a survey, both in our pilot and in the scale-up, to try to get at some of the mechanisms. In particular, we asked questions like, “Did you feel this instructor supported your learning? Did you feel you could reach out to your instructor?” And we do find, consistent with our intervention, that students in the treatment group reported more positive outcomes on these dimensions.
Rebecca: That’s so fascinating, because it’s so easy. Like it just isn’t that complicated.
John: In your study, you also examined how this effect persists over time, which certainly relates to the graduation question. What did you find in terms of the persistence of this effect over time?
Michal: We did look at long-run treatment effects. That is, we waited to see what happened several years later; we had presented shorter-term outcomes from this paper earlier. We tracked these students and worked with this institution closely, because we really wanted to know: did it affect the outcome we care most about, which is graduation? In other words, we care about student success in a particular classroom, maybe a slightly better grade, or not dropping the course. But we really care about the longer-term outcome of finishing. Again, for this specific group of interest, we note that the treatment results in a 7.3 percentage point increase in persistence one semester later, and then a four percentage point higher likelihood of graduation. So we do find positive effects on the likelihood of graduation.
Rebecca: We’ve talked a bit about the impact on students and how students have responded, how did faculty respond to participating in this intervention?
Michal: First, I’ll just say, again, we had to recruit faculty to do this. We did track the faculty who said no, and we did as much reconnaissance work, if you will, as we could. We did have to rely on faculty self-selecting in, keeping in mind that if the faculty who self-selected just had a proclivity to help students, or to join this kind of intervention, then, if anything, we perhaps understated some of our findings. But we put in a lot of effort to understand how representative our faculty are, which we determined they are, and there are some details about that in the paper. They represent a real diversity of disciplines, from music to engineering, across the board, humanities and social sciences; we had the whole range. And we had the whole range of attitudes, from the faculty member who says, “I do this a little bit sometimes. I’d do the same with a student who doesn’t show up or doesn’t complete an assignment, to sort of track them down. But I’ve never done it systematically, and I’ve always wondered if it even matters,” to the faculty member who says, “Well, I’ll do this, but I don’t think it matters. At the end of the day, the students who want to put in the work put in the work.” We did offer a stipend to do this, because we did believe it takes time, and we wanted to acknowledge to faculty, who have a lot of demands, especially at teaching institutions, that this was going to take some time. So we haven’t tested it without an incentive, or with an institutional incentive that’s part of performance evaluations or something. That’s yet another question in terms of where institutions might take this. From our perspective, it was externally funded, and faculty were only accountable to us in their efforts to do this. We talked to them multiple times during the term (some were in both waves of the study, because we did it over two terms), and then we surveyed them at the end and really got some details from them. Some of this is in the paper, around how they felt. I would say most expressed something similar to what you expressed, Rebecca, which is, “Wow, some emails, and I made this difference, especially in underrepresented minority students’ lives and in their classroom,” and they felt really good about the impact… keeping in mind, we also talked to them before we knew the results. At that point, they were just documenting how much time it took to do the emails and what kind of emails they got in response from students, and most felt, I would say, humbled by the thank-yous, that something as small as an email got so much gratitude back from students. We did have some faculty who said, “This takes a lot of time, and I’m not sure it’s much of a help.” In our last survey with faculty, we actually provided them the full-scale results and said, “Here’s, by the way, what we learned from the study,” and then asked them to reflect on that. And many said, “Wow,” again, similar to your reaction, “a few emails could make such a big difference; I will be sure to continue.” We asked them directly if they will, and we report this in the paper: most do say that, now that they know the positive impacts of this study, they’re likely to continue with these emails.
Rebecca: I imagine the workload for a faculty member isn’t necessarily in drafting those initial emails, but maybe in the responses to the emails [LAUGHTER] you might receive back.
Michal: It’s a good question. I don’t know how many faculty continued the conversation once the standard emails that were part of the experiment went out and a student wrote back and said, “Thank you so much.” I’m not sure they continued. We did a lot of qualitative coding, which we mostly don’t report in the paper; we report some, but we did a lot more internally. There definitely were a lot of hardships described among students who did reply. The extent to which faculty responded to those hardships by offering extensions, or any other kind of adjustment to their requirements, we don’t know.
John: In an introductory microeconomics course, I use the Lumen Learning Waymaker package, which actually does automate emails to students based on their performance on weekly quizzes and so forth. Even though the students in the class realized it was automated, they’d still write back to me and I’d respond. They’d often apologize, saying, “I had a bad week and I know I need to work harder.” But it did often start a dialogue that might not have occurred otherwise. I’ve often wondered how large that effect was, but because it’s automated for the whole class, it’s hard to measure the differential effect. So it’s nice to see this result, that that type of approach works.
Michal: Yeah, I think what you’re describing is exactly right, this sort of feedback. Our whole intervention is built around theories, not just from behavioral economics, nudging, or information interventions, but from the education literature on feedback: the important role of feedback, and the timing of it. It’s most useful if it’s not just a performance mark, like a “C” on an assessment, but feedback that actually gives you more information about how you’re doing or what to do to improve. So the kind of thing you’re describing, John, is exactly right. I think we know that more touchpoints with students through assessments, as opposed to everything hanging on a midterm and final, also help students get more feedback about how they’re doing.
Rebecca: I think sometimes students know that they’re struggling; you get a grade back and you know if you’re doing well or not. But I think a lot of students need more coaching around what to do to improve, or to better understand how the grade is calculated, to just take the time and attention. It’s there. It’s in the syllabus. But sometimes they don’t realize what they should prioritize, and including some of that kind of messaging makes a lot of sense. I know that when I’ve done that with my students, they’ve been really appreciative, because they didn’t realize that they were putting all their energy into something that didn’t really matter as much.
Michal: Yeah, that’s right. And I think coaching is a great choice of words for what to do with it. I also think many students come to our universities with really uneven or unequal preparation for those courses. I think a lot about students who came from a high school where they took an AP course in economics or chemistry that might as well have been a college-level course. Many of these courses are graded on a curve, and so that C might be an excellent grade for them, given the type of preparation they had, but they don’t necessarily know that, and it might signal to them that maybe this isn’t the right major for them. So I also think about coaching around what to do with the grade when you’re passionate about a subject, and not giving up on yourself too quickly. Many are juggling jobs; we know for some, their only work is to get through the term, and others are doing this while working and taking care of family members. And so the grade that they got often conveys information, and we as faculty don’t necessarily know anything about how they’re interpreting it.
John: I would think just a signal, as in the title of your paper, “My Professor Cares,” might create a sense of connection and belonging that might otherwise be missing for someone who is a first-gen student and might not feel that they belong at the institution.
Michal: Yes, I agree. And I think there are more and more studies coming out, particularly in social psychology, but elsewhere too, about the importance of belonging. We know from the K-12 literature that having a teacher who cares about you actually matters. Nothing dramatic happens once you get to college, but we assume it’s completely different, when in fact, I think having an adult, particularly your instructor, care, feeling a sense that that instructor cares about your learning or how you’re doing in their class, irrespective of the grade per se, just that you’re making progress in the class or feeling comfortable in the class, I think is really important. And I think it’s hard to test. Most of the belonging literature has been survey-type research, “I feel like I belong here,” and not as much in the classroom, although there are increasingly more studies about belonging in the college classroom, beyond just the university at large.
John: A while back, we interviewed Peter Arcidiacono about a paper that looked at the impact of differential grading between STEM and non-STEM courses. One of the things you just said reminded me a little of that, because one of the things noted in that paper is that many of the students, particularly female and underrepresented minority students, who switched their majors out of the STEM fields had some of the higher grades in the class, but those grades were below their expectations. And I’m wondering if this type of intervention might have the effect of letting them know that their performance in that discipline may not be all that bad. Since we’re probably not going to eliminate grading differentials between STEM and non-STEM disciplines, perhaps some type of personal communication might help preserve some of those connections, so we don’t lose as many people in the STEM fields, where the returns to education are the highest.
Michal: Absolutely, well said. That’s exactly right, and I think that is among the reasons we wanted to reach across the grade distribution, not just those who are really, really struggling. Also because we do think that students might take too much meaning from the signal of a grade early on in their academic pursuits, when they could get through a certain number of courses to the fun stuff of their major, where they really see the utility of a particular course for a future career path. So I think that’s right, and I do think Peter’s work, and other people’s work looking at the impacts of particular grades and grading distributions or grade signals, is really important. I think that’s an area that’s blossoming in economics and in other fields, to better understand heterogeneity, or differences between subgroups, in how particular information, like an assessment grade or a test score, is interpreted.
John: One of the things we’ve been seeing in a lot of studies is that many changes in education, such as using more active learning techniques or providing more course structure, benefit all students but disproportionately benefit those students who are historically underrepresented. And it seems like this study provides more evidence of that: good teaching practices benefit everyone, but especially benefit the people who are most at risk in higher ed. And those are often the people who can get the greatest benefit from persisting in higher ed.
Michal: Yes, I think that’s exactly right, if we’re really going to address disparities in college outcomes. One really important source to go to is information gaps, particularly for first-generation students, for students of color, but also for students who come from unequal K-12 backgrounds. Colleges and universities are often systematically the recipients of students from particular high schools in their states, especially public flagships, community colleges, and others, and so they are aware of, and have relationships with, those K-12 high schools that are feeders to their institutions. That is an important source of information they can provide to high school students as they enter college, a kind of warmer handoff, if you will, but it’s also information that faculty teaching introductory courses can provide. So again, if our goal is to address inequalities that we see in college outcomes, then information, particularly for those with information gaps, is key. Students want to be seen. Yes, there’s the anonymity of a large lecture hall, and maybe they don’t want to be called on. But that doesn’t mean you don’t want to know that your faculty member sees you and knows whether you’re doing well or struggling, or how you feel about the class, or how to succeed on the next exam or in the next course in the sequence.
Rebecca: So for other faculty who think, “Hmm, three emails, that seems easy,” what recommendations would you make?
Michal: Yeah, I mean, the first recommendation I would make is to try it, to do it. I think communicating in your syllabus about your forms of communication is important. If you’re going to do email, one thing we would have loved to test, and would test if we were to continue further, is the format of the information. Also letting students know that you want to hear from them, over email or through other means, I think is useful. So first, decide on the medium, how you’re going to communicate this. I think email makes sense; if faculty start texting students, maybe we’d move to texting them information, but that’s not the case for most of us, so it’s through the course management system or email. Then I would say, focus on what we know from the literature on feedback: it should be as specific as possible about what students can do with the information. That means looking at your syllabus closely and knowing, as we typically do as faculty, where students trip up in the material, what’s complex, and what’s up ahead, and giving that kind of feedback about how to prepare for the next assignment or exam, what has tripped people up in the past, and what you know might work for them. Again, other research has suggested that going to office hours might matter, but that means you need students to show up, so you have to think about how you structure your office hours. Incidentally, we did try to track whether the intervention actually promoted more office hour visits, which is quite hard to do; in the pilot, we believe it did promote more office hour usage. More broadly, the actual help-seeking behavior of students is something we’d like to test. So I would say faculty should do it. If you’re teaching a 400-person class and you can’t imagine doing this for 200 students, or even 100 students, maybe start with the students you see struggling based on that first assignment, as we did in the pilot, and see what you can learn from that. Or maybe do as John suggests, which is get savvy with a mail merge and think about ways to do this more efficiently, so that you can reach as many students as possible.
John: We recently switched to Desire2Learn’s Brightspace platform, and it has intelligent agents and replace strings, so you can automate an email conditioned on either the overall course grade or the grade on a specific item. If you do it on the overall course grade (which I just set up, by the way, for my summer class last week), students get an email saying, “I see that you’re struggling; there are some things you might want to try,” and encouraging them to contact me during my office hours or to make an appointment to talk to me. It would be nice if I could include their actual grade without having to resort to a mail merge, but I don’t think that’s possible, and I’d like to scale this up. The first iteration of that went out last week. None of the students responded, but it’s a small summer class, so I’m curious to see how this might work. Your paper helped encourage me to try this. I had other reminders out there, but this was one that I thought might be useful, using a specific grade trigger.
Michal: That’s great.
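[Editor’s note: on platforms without intelligent agents, a rough equivalent of the grade-triggered nudge John describes can be scripted from a gradebook export. Here is a minimal sketch in Python; the file name, column names, and 70% threshold are hypothetical.]

```python
# Draft a check-in email for every student whose course grade falls below a
# cutoff, mimicking a grade-triggered "intelligent agent" rule. Illustration
# only; Brightspace does this natively, without any scripting.
import csv

THRESHOLD = 70.0  # assumed cutoff for a "struggling" nudge

def draft_nudge(name: str, grade: float) -> str:
    """Return a short, personalized check-in message for one student."""
    return (
        f"Dear {name},\n\n"
        f"I see that you're currently at {grade:.0f}% in the course. "
        "There are some things you might want to try, and I'd encourage you "
        "to stop by office hours or make an appointment to talk.\n"
    )

with open("gradebook.csv", newline="") as f:
    for row in csv.DictReader(f):
        grade = float(row["course_grade"])
        if grade < THRESHOLD:
            # Print drafts for review instead of sending automatically.
            print(f"To: {row['email']}\n{draft_nudge(row['name'], grade)}---")
```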
John: We always end with the question: what’s next?
Michal: Well, our lab, and Scott Carrell and I, continue to do this work. In particular, we’re also spending a lot of our time these days working with community colleges in California, which serve one in four community college students nationwide. Persistence outcomes have historically been quite weak at community colleges, and we’ve seen real declines in community college enrollment since the pandemic, so we are definitely doing some work there. We continue to track and follow graduation rates, particularly inequality in graduation rates, at CSU. And we’d love to launch another intervention, so stay tuned on that. I can tell you we are quite committed to understanding the college classroom, and college settings more generally, and we hope that the college classroom continues to be a source of important information for the field about how to better support student success.
John: You’re doing some really wonderful research, and it’s really nice to see the attention this has received: your article has been mentioned in The Chronicle, it’s been mentioned in Inside Higher Ed, and I’ve seen people tweeting about it ever since it came out. It’s good to see this research becoming popularized.
Michal: Well, we appreciate it, especially since Scott and I did not succeed on that front, that is, becoming the kind of people who do good social media; other people are better at that than I am. And I’m always a little troubled when I talk to more junior faculty about “Do you need to do all that?” Ten years ago or so, I would have said, “No, just do good work and it doesn’t matter.” Now I confess that the buzz some people are able to develop around the findings in their papers seems to matter, and so it’s really nice when it happens to you, because we didn’t do it ourselves. [LAUGHTER] So I appreciate my friends Sue Dynarski and others who’ve done a really nice job promoting this paper.
Rebecca: Well, thank you so much for joining us and for sharing your work with our audience.
Michal: My pleasure. Thanks for having me.
John: And we will include a link to your study in the show notes and we encourage our audience to read it.
Michal: Wonderful.
John: Thank you.
Michal: Thank you so much. I really appreciate it.
[MUSIC]
John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.
Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.
[MUSIC]