119. Faculty Incentives

If faculty were paid more when their students learned more, would student learning increase? In this episode, Sally Sadoff and Andy Brownback join us to discuss their recent study that provides some interesting results on this issue. Sally is an Associate Professor of Economics and Strategic Management in the Rady School of Management at the University of California at San Diego. Andy’s an Assistant Professor of Economics in the Sam M. Walton College of Business at the University of Arkansas.

Show Notes

Transcript

John: If faculty were paid more when their students learned more, would student learning increase? In this episode, we discuss a recent study that provides some interesting results on this issue.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together, we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

[MUSIC]

Rebecca: Our guests today are Sally Sadoff and Andy Brownback. Sally is an Associate Professor of Economics and Strategic Management in the Rady School of Management at the University of California at San Diego. Andy’s an Assistant Professor of Economics in the Sam M. Walton College of Business at the University of Arkansas. Welcome.

Andy: Thank you.

Sally: Thanks. Great to be here.

John: Our teas today are:

Andy: I wanted to represent Fayetteville, so I went to the tea shop and I got what I have been told is the world’s greatest cup of Earl Grey tea. [LAUGHTER] It’s an award winning cup. They promised me this. [LAUGHTER]

Rebecca: Does it taste award winning?

Andy: I haven’t had enough of it yet. [LAUGHTER]

Rebecca: Reserve judgment?

Andy: I don’t give these awards out lightly.

Rebecca: And a nice lineup of mugs on your desk too.

Andy: Yes, many, too many. So this is just a way I avoid doing dishes. [LAUGHTER]

John: And Sally?

Sally: I’m drinking coffee but I’m on California time, so I’m excused.

Rebecca: And I’m drinking Spice of Life today, a white tea, John.

John: Pretty good.

Rebecca: Unusual, right?

John: And I’m drinking Oolong tea…

Rebecca: You’re drinking nothing cause you forgot the cup of tea. [LAUGHTER]

John: . If I remember where I put it, I think I may have left it in the office before I came over here. But I did make a cup of Oolong tea and I did have a sip of it before and I will have it right after this.

Rebecca: I intended to drink tea. [LAUGHTER]

John: We invited you here to talk about your forthcoming article on improving college instruction through incentives. Could you start by giving us a general overview of this study?

Andy: In our study, we partnered with a large community college in Indiana called Ivy Tech. And what Ivy Tech wanted to do was incentivize instructors based on student performance. At the same time, they were rolling out a new set of large end-of-semester comprehensive, and importantly, objective exams. And so we were able to partner with them to use those exams to incentivize instructors based on the outcomes of students. So, that’s the high-level overview of what we were doing. I know we’ll get into more detail in a bit.

Rebecca: Can you talk a little bit about what motivated the study in the first place?

Andy: Yeah, absolutely. So, community colleges are obviously really important. They’re thought of as a sort of pathway to the middle class. At the same time, the rates of success at the community college level have been relatively low. And so if we think of community colleges as a particularly good tool for upward mobility, then it needs to be the case that they achieve better outcomes. And with the low current rates of success, it also leads to long periods of accruing debt without receiving the benefits of the higher incomes that come with a college education. So, there’s a whole host of factors that are coming into play to make these both important and potentially underachieving tools for upward mobility. And then the other side of the equation is that the faculty at community colleges are predominantly, or at least to a large percentage, adjunct faculty with really low pay, in what could be seen as an unsustainable business model where you’re relying on people to regularly work under short-term, non-guaranteed contracts and teach these classes. So, we wanted to address both sides: the student achievement side, as well as the personnel side of the community college setting.

John: And in terms of student success, specifically, I think you’re referring to the proportion of students that move through to a four-year degree program as being lower than what students intended. Is that the primary metric?

Andy: Yes, that’s one of the primary metrics. You can think of the community colleges as having two goals: one being graduating students with associate degrees and another being transferring students to four-year degrees. Now, Sally will know the exact number, but a large percentage of students attending community colleges, I forget what the number is, but their ultimate goal is to eventually transfer and graduate from a four-year college with a bachelor’s degree. So, there’s kind of two ultimate goals. In the process of achieving those goals there’s also gains from simply taking additional classes or receiving accreditation in certain skills, and that’s something that a lot of people go to community college to do. But, our primary long-term concerns are graduation rates and transfer rates.

Sally: Yeah, I think it’s really fascinating. Most of my work up until now has been at the K-12 level. And I think most economists, if you look at education economists, there’s a lot of focus on the K-12 level and looking at teacher quality at the K-12 level and how can we improve teacher quality at the K-12 level? When we came to the college level, there’s been work showing how important it is who your instructor is. Instructor quality matters a lot. But we couldn’t find any work looking at how can we improve instructor quality at the college level? I think it’s really interesting because community colleges are getting a lot of attention from policymakers because they’re low cost, they expand access to underrepresented populations that normally don’t have as much access to college: minority students, students who are first-generation college goers, students who are working and so they can’t necessarily travel to go to a college. And so we think that community colleges provide amazing opportunities to students, but as Andy was saying, they really struggle with success rates. And so 80% of students entering a community college say they intend to transfer to a four-year school and fewer than 30% end up doing so. Fewer than 40% of students graduate with any kind of degree within six years. And so these colleges, and we see this working with Ivy Tech, they are incredibly dedicated. The administrators and the teachers there are incredibly dedicated, but they’re working with students who are struggling, and so there’s a lot of room for improvement. And what we found actually that’s interesting, I think, at community colleges, is that there’s actually more room to think about how to structure employment contracts than there is at the K-12 level. Because often the instructors aren’t unionized, and, as Andy was saying, they work under these short-term, flexible contracts. And so there’s a lot of flexibility. And really, people haven’t thought much about how to structure these contracts in a way that can improve performance and motivate both instructors and students.

John: It’s a fascinating study. For those of our listeners who aren’t familiar with field experiments, could you tell us a little bit about what a field experiment is?

Andy: Yeah, absolutely. So a field experiment is, in our case, a test of policy. And the way it’s experimentally designed is through what would be known as a randomized controlled trial, meaning that you take a sample of people from a population and you split that sample into a treatment and a control group, and you do this randomly… and that’s the really important part. Because if you test a policy with an assignment that’s anything but random, then you can’t guarantee that these two groups are otherwise equal. But in our case, we’re going to randomly assign people to be in the treatment group or the control group. So, the treatment group will receive the policy, the control group will continue in the current status quo. And then what we will do is look at outcomes and how they differ between the two groups. Now, since the assignment to the two groups is random, again, there’s no mechanical correlation between treatment assignment and any of the characteristics of the groups themselves. Then we can know that any differences subsequent to the assignment are results of the treatment itself and not any sort of spurious correlations or selection biases.
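To make the logic of random assignment concrete, here is a minimal sketch, purely illustrative and not drawn from the study’s data or code: it uses hypothetical instructors, a made-up treatment effect, and a simple difference-in-means comparison between the randomly assigned groups.

```python
# Illustrative sketch only (not the study's code or data): random assignment
# to treatment/control followed by a difference-in-means comparison.
import random
import statistics

random.seed(0)

# Hypothetical instructors, each with an underlying baseline pass rate.
instructors = [{"id": i, "baseline_pass_rate": random.uniform(0.3, 0.7)}
               for i in range(200)]

# Random assignment: each instructor is equally likely to be treated,
# so the two groups are comparable in expectation.
for inst in instructors:
    inst["treated"] = random.random() < 0.5

# Simulated outcome: assume (purely for illustration) the incentive
# lifts pass rates by 5 percentage points.
ASSUMED_TREATMENT_EFFECT = 0.05
for inst in instructors:
    lift = ASSUMED_TREATMENT_EFFECT if inst["treated"] else 0.0
    inst["observed_pass_rate"] = inst["baseline_pass_rate"] + lift

treated = [i["observed_pass_rate"] for i in instructors if i["treated"]]
control = [i["observed_pass_rate"] for i in instructors if not i["treated"]]

# Because assignment was random, this difference estimates the causal effect
# rather than a selection effect.
print("difference in means:", statistics.mean(treated) - statistics.mean(control))
```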

Sally: Yes, I think listeners are probably familiar with this kind of experiment when you think about testing a drug or a vaccine, those kinds of clinical trials. And more and more economists have brought those models in for testing policies. And I think they gained a lot of attention recently because of the recent Nobel Prize, which highlighted how powerful these experiments can be for evaluating policies. And so I think that they gained a lot of attention from economists, they’re growing in their use, and it’s really thanks to partners like Ivy Tech that are willing to let us come in and test things in this way. Because, I think although people are very comfortable with the idea of testing a drug in a clinical trial, sometimes there’s discomfort with testing policies in this randomized way. And so we’re really grateful when we have partners who are willing to let us come in and try these new policies and implement them in this randomized way where some instructors receive incentives and some won’t.

John: And in a sense, we’re always testing things. It’s just, we don’t always measure the effect of it. When you try something new in your class, you are doing an experiment. But unless you have a control group to compare it to, you can’t really assess whether the gain is due to that particular intervention or something else that was happening.

Sally: That’s exactly right and we really try to emphasize to people exactly that, that you’re always trying things, rolling out new policies or stopping one thing and doing it differently. And if you’re going to be making these changes, do it in a way where you can learn from them, instead of just trying something and then stepping back and trying to understand whether it worked or not. How do you know whether something is working or not unless you can compare it to a proper control group?

Andy: And just to emphasize the importance of this methodology, there’s a lot of policy that gets rolled out based on bad data and bad evidence. And so if you’re using a poorly designed experiment, or simply looking at correlational data and rolling out policy, what you could be doing might not be effective, it might be actively detrimental to students. But once you have this clear causal evidence, we can be really confident in the policies we roll out and understand the cost-benefit analysis of the policies prior to implementation.

Rebecca: Can you talk a little bit about the policy that you were testing in this particular experiment?

Andy: Yeah, so as we talked about, we wanted to roll out incentives for instructors based on student performance. And we base these incentives on objective, comprehensive exams for a variety of courses in a variety of departments. The exams are designed outside of the classroom in the sense that they were designed by deans and department heads and represent the types of material that they wanted the students to master by the end of the semester. So, those form the basis of the incentives that we would be giving to instructors. Now, we didn’t just want to offer incentives based on outcomes. We wanted these to be potentially as powerful as possible. So, we leveraged an approach that Sally’s researched in the past in a paper with Roland Fryer, John List, and Steve Levitt, where they looked at loss contracts.

Our incentives were actually such that every instructor would receive $50 for every student who passed the exam, and passing the exam is defined as receiving a 70% or higher on the exam.
So, we framed these as losses. And we delivered incentives at the beginning of the semester, as if half of the students in an instructor’s course had passed the exam. Now, this established it as sort of a target, but it also allowed us to leverage this idea of loss aversion, that instructors would value keeping money potentially more than they value gaining an equivalent amount of money. So, as the students progressed through the semester, at the end of the semester they would take this exam, we would have these objective evaluations of how many students passed the exam, and then we would calculate the final payments. If an instructor’s final payment exceeded this initial payment, they would receive an additional payment. If their final payment was less than this initial payment, we would claw back some of that payment. And this was all explained at the outset of the experiment. And again, this sort of loss framing is leveraging a long line of research in behavioral economics about how much more motivating it can be to face potential losses than equivalent gains.

Sally: Yeah, so just to give an example, if you have 20 students, and you get $50 per student who passes, half of your students passing, that would be $500. So we would send you a check for $500 at the beginning of the year, the beginning of the semester. At the end of the semester, if fewer than 10 of your students pass the exam, say only eight students pass the exam, you have to write us a check back for $100. If more than 10 of your students pass the exam, say 12 of your students pass the exam, then we send you a check for an additional hundred dollars. And we found in previous work that having this money in your bank account and knowing that you potentially could lose it if your students don’t pass the exam can be very motivating, compared with rewards that you only receive at the end of the semester.
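For readers who want the payment arithmetic spelled out, here is a minimal sketch of the loss-framed contract described above; the $50-per-pass rate and the half-the-class upfront target come from the discussion, while the function name and the example enrollments are just illustrative.

```python
# Sketch of the loss-framed payment arithmetic described above.
# $50 per student who passes; the upfront check assumes half the class passes.
PAYMENT_PER_PASS = 50

def settle_loss_contract(enrolled_students: int, students_passing: int) -> dict:
    """Return the upfront payment and the end-of-semester settlement."""
    upfront = PAYMENT_PER_PASS * (enrolled_students // 2)  # check sent at the start of the semester
    earned = PAYMENT_PER_PASS * students_passing           # what the instructor actually earned
    settlement = earned - upfront                          # positive: extra check; negative: clawback
    return {"upfront": upfront, "earned": earned, "settlement": settlement}

# Sally's example: 20 students enrolled, so a $500 upfront check.
print(settle_loss_contract(20, 8))   # 8 pass  -> settlement -100 (write a $100 check back)
print(settle_loss_contract(20, 12))  # 12 pass -> settlement +100 (receive an additional $100)
```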

Andy: Yeah. And one point about the logistics real quick is that these initial targets were based on enrollment as of what they call the census date. It’s not the drop deadline in the sense that you can’t drop afterwards, but it’s the deadline at which point dropping a course is no longer costless. All the students at this point in the course are enrolled sort of formally, and instructors will receive the upfront incentives based on that number of students. So, there’s multiple margins at which the instructors can influence student outcomes.

John: One thing I think that’s probably worth noting is that one advantage of doing it in a community college is that it’s much easier to have that standardized testing. I know in a lot of four-year colleges, faculty would object to having to assign an externally designed exam at the end of the term, while in community colleges that type of standardization is much more common, which makes it a bit easier to design a study like this, I would think.

Sally: Yeah, that may be the case. Interestingly, I think, even for accreditation, for example, often you need to show that the test has certain questions on it. I know in large classes with many sections, they often write the exam together. The goal at Ivy Tech was to sort of create this bank of questions that every year tests would be drawn from, and I think moving classes over to that model is interesting. And there’s more openness to it than I thought. So, for example, when we started this study, I thought, “Oh, the only courses we’re going to get in this study are going to be math and maybe some science courses.” And what’s really interesting to me about this study is that, unlike at the K-12 level, where it’s primarily focused on math and reading, we have a really wide range of courses. We have anatomy and physiology, art history, nursing, psychology, criminology, sociology. And so what it showed to me was that you can really get a wide range of courses into this kind of framework. And it doesn’t cover every element of the course. But, for example, in the English courses, one thing they were moving toward was evaluating the essays in a more objective way where you’d have two readers that would both rate the essays and compare ratings. And as colleges move toward those models, I think that this kind of framework will be more and more implementable.

John: It’s certainly good for assessment, and it’s certainly good for evaluating the effectiveness of innovations in instruction. There’s a lot to be said for it. I’m just thinking, at my college I know in many departments there’d be some objections to this. We used to have a standardized common final in the economics department where I teach and people objected to that for a long time, and we eventually moved away from it, but we are talking about doing something similar with at least some subset of questions that would be standard, for that sort of purpose.

Sally: Right. And I think always a concern about these kinds of studies is if the incentive is based on the objective part of the exam that can be tested and assessed in that way, does it take away from the other parts of the course that are more qualitative or more specific to each instructor? And so one thing we were really careful about in this study was to look at not just performance on the test, but how did students do in the class overall, how did they do on the other courses they were taking at the same time? How did they do in future coursework? And I think that’s really important that it’s not just all about teaching to this one assessment that’s going to be used for the incentive.

John: Given the strong findings on loss aversion in terms of how people find losses much more painful than gains of equivalent value, how did faculty react to that incentive structure? I believe you surveyed them on that early on, and then again later.

Andy: Yes, at the outset or at the baseline, the faculty did not like the idea of these incentives. This is evidence-based: we have survey information, and people were willing to sacrifice a rather large amount of money to have these contracts converted into gain-based contracts that wouldn’t be paid out until the end of the semester. Anecdotally, this fits with my experience when I went to explain these contracts. There was quite a bit of pushback in asking why these were framed in this way, and some people potentially wanting to approach them differently. Interestingly, this was very heterogeneous across departments. The accountants were like, “Okay, well, I know what to do with this,” [LAUGHTER] and put it away, and the psychologists were particularly upset because they knew exactly what we were doing. But, the data show that with experience, our treatment group, on average, has no preference between a loss contract and a gain contract, meaning that a large amount of this distrust of the contract could be attributable to just a lack of experience with this style of contract, and that as instructors gained more experience, they also gained a comfort level with the contracts as well.

John: I still wouldn’t rule out loss aversion as being a factor, but it is interesting that it gets reduced after they’ve experienced it.

Andy: Oh, absolutely. So, that’s not to say that loss aversion isn’t still a factor. But, as you gain experience with these contracts, maybe you start to appreciate the motivating qualities of loss aversion. So, maybe you understand that although these contracts cause you to work harder, or cause you to exert more effort around a certain goal, that by increasing that effort, you’re actually achieving greater outcomes for yourself. And if that’s the case, then they’re still motivating you through loss aversion, but you may not be as averse to the contracts as you were ex ante.

Sally: Yeah, so it may be that people are using them as a type of commitment contract where they know that yes, it will be painful while I’m in the contract, but it’s a way to motivate me to work harder, and I’ll walk home with more money than I would otherwise.

John: Just a couple of months ago, we did a podcast on commitment devices with Dean Karlan…

Sally: Oh nice.

John: …and we talked a little bit about that, and StickK.com, the site he created for that. Now, we’ve talked a little bit about the incentives for faculty, but you also introduce an incentive for students. Could you talk a little bit about that as well?

Andy: Yeah. So, on the student side, this was only in the spring semester. We rolled it out in the fall semester, where we had a pure control group and instructor incentives only. As we moved to the spring, we then cross-randomized those two groups with student incentives. The students were incentivized with the following possibility: if they passed the exam, that is, received a 70% or higher, they would get a voucher for free tuition for a summer course. And this could be worth up to about $400 worth of tuition. So, now students are incentivized alongside the faculty. And we wanted to test, one, whether student incentives were effective and, two, whether they made the instructor incentives even more effective.

Sally: Yes, we were interested in whether there are complementarities between student incentives and instructor incentives. We knew from prior work that offering student incentives alone has, at best, modest effects. But, we thought that maybe if we put them in combination with instructor incentives, we could imagine the instructor saying to the students, “Look, guys, you guys have something at stake here too…” and it could create this positive cycle.

Rebecca: So can you tell us a little bit about the results?

Andy: That’s on page 22. [LAUGHTER] We found that the instructor incentives were really effective. They increased student outcomes by about 0.2 standard deviations on those exams. It’s a really nice effect in this literature. What’s also exciting is, suppose you don’t believe our tests or don’t like our tests, they also reduced course dropouts by 3.7 percentage points, which is about a 17% decline in the course dropout rate. They raised grades in the course by over a tenth of a standard deviation. And even if you take out the effect of the exam itself, the course grades still go up by about a tenth of a standard deviation. And these positive results spill over into other courses. Students complete other courses at higher rates, they accumulate more credits, and they even go on to transfer at higher rates. So, that’s in the faculty incentives or the instructor incentives branch of the study. When we look at the student incentives by themselves, we see essentially no effects on any key outcomes that we care about. When we look at them in combination, they actually don’t improve the impact of instructor incentives. If anything, we see a pretty small negative effect, but nothing that would be a significant difference at all. There simply doesn’t seem to be any impact of the student incentives. Now, this could be attributable to our specific student incentives. But, you’d have to believe essentially that they have either no value or very limited value to say that it’s just the fact that we’re incentivizing students in a very specific way.

John: When you first were talking about it, one of the things that struck me as… I think it was W.C. Fields who was talking about a contest where he said the first prize was a week in Philadelphia. Second prize was two weeks in Philadelphia. [LAUGHTER]

Sally: So, Andy and I are doing a separate study on summer school. And we do find that students do not want to attend school in the summer. But, interestingly, if we can get them to attend school in the summer, it has a really big impact on helping them graduate sooner. So, we’re really fascinated with understanding how we can address this aversion to summer school. But, that may be for another podcast. We agree, though, that the incentive for students may not have been very motivating. Just to return to the results about the instructor incentives, I think there are some really interesting results there. First, something that’s unique to the college setting that you don’t find in the K-12 setting is this really large problem with students enrolling in a course, paying for the course, and then not completing the course. So, about a quarter of students fail to complete courses that they’ve enrolled in and paid for. And this is a big struggle at community colleges. So, just increasing these rates of persistence in the course we think has a really large impact. And what it seems like is happening is instructor incentives get students to keep coming to their course, and so students go to their other classes as well. And so it has this really positive reinforcement effect on students completing all of the courses that they’re taking that semester. I think another really exciting result is that a year after our program ends, when we’ve stopped giving anybody incentives, you see these really large impacts on transfers to four-year schools… about a 20% increase in the rate of transferring into a four-year school, which we think is really exciting, because, as we talked about, the primary goal of community college is to get these students to transfer to four-year schools. They really struggle with that. And so we see that this could have a really large impact.

John: And education is costly. And if we get more people finishing, the private and social returns, both go up significantly. And the cost of doing this is relatively low. It’s substantially less costly than the student intervention.

Sally: Yeah, it’s incredibly low, about $25 per student. One thing that’s interesting, again, about community colleges is that, because adjunct faculty are not paid very well, you can offer relatively cheap incentives that represent a significant bonus. So for these adjunct instructors, the average bonus represented a 20% increase on their baseline salary. Our adjuncts are making about $1,700 for a 16-week course. So, you can get a lot of bang for your buck with adjunct instructors, and we see the largest impact among adjunct instructors. Those are the instructors that really responded to the incentives. And adjunct instructors are increasingly becoming the model for schools, not just community colleges, but four-year schools as well. So, they represent about 50 to 80% of instructors at four-year and two-year schools, respectively. And that’s on the rise, so we expect that to increase in the future.

John: And that’s another topic we actually address in a podcast that was released on December 18.

Andy: So, I think the adjunct effect is also one that’s worth emphasizing, just because the model of using adjunct faculty, or increasingly relying on adjunct faculty, is unsustainable at the current pay rates. These contracts are more flexible, and adjunct instructors are already used to working on temporary contracts. So, if it turns out to be the case that you can’t continue to pay people such small amounts for so much work, then how do you design contracts in the future that can maximize student outcomes? If we’re in a world where we know we have to redesign these contracts, what we wanted to be able to do with this study is say, “This is a way you can redesign the contracts and achieve the outcomes that you hope to achieve.”

John: That works well, when the test is administered or designed externally. There would be some incentive issues, though, if the instructors had more control over the test or that assessment of how well their students did, I would think.

Andy: Yeah, absolutely. And that was at the front of our minds while we were designing the study: “Are we not simply motivating people to either teach to the test, or to lie to us outright?” Based on the way the exams were designed, these are both objective and, for the most part, externally graded. So, it’s still possible, for example, for a teacher to just erase answers and write in the correct answer if they wanted. But, there’s a certain point at which you have to start trusting your subjects, that they’re not attempting to deceive you. And so we kept that in mind as we were thinking about how to design the study.

Rebecca: Did you have any feedback from faculty at the end of the study, when they discovered that your incentive worked, for example?

Andy: So, we have been in touch with our partner in the administration, but we haven’t been in touch with the faculty themselves about our working paper, or now the forthcoming paper. So, we hadn’t gotten feedback at that point. We did get feedback in the process of the study, that is, at the end of the fall semester and at the end of the spring semester, and just like the preferences for these contracts, the feedback was, of course, not universally positive. But for the most part, the majority of people appreciated the extra money. And I guess this is something that we haven’t emphasized yet, but we didn’t really change anyone’s contract; they were still operating under the existing contracts, and these served as a bonus on top of those contracts. So, there was very little room to think of these as a really detrimental change to your contract, because the worst-case scenario is that you were under the exact same contract as you were previously.

John: If everybody failed, or if everybody came in below the threshold.

Andy: If literally zero percent of your students were able to pass this exam, you were in the same world you were previously.

Sally: We had high rates of sign up in the fall, and then even in the spring semester, there were people in the fall who hadn’t signed up that chose to sign up when they had a chance again, and all but one instructor continued the study from the fall to the spring. So, I think that instructors did like participating and we generally got positive feedback.

John: So, you got really strong results for the incentives for instructors with larger results for the lower-paid instructors… for adjuncts. Was there any evidence of the mechanism by which this affected student outcomes?

Andy: So, we look into mechanisms in two ways. One, we look at self-reports of time use. And we really don’t see any significant differences between the treatment and control groups, so nothing that would clearly identify a change in behavior. Now we have one caveat to this, and that’s that when we put the time-use survey out, we limited each activity to 16 hours, not thinking how many of our instructors might spend more than 16 hours on a given activity. And that was made pretty obvious with the outside-work option. And so it is possible that responses are top-coded there and we’re unable to differentiate between the two groups. And we also look at student evaluations, and we don’t see any significant differences between the way students evaluate instructors that were in the treatment versus the control group. So, we don’t really see a specific mechanism that’s driving these differences in student outcomes. And if we really wanted to try to isolate these things, we would need to maybe have some better or more objective data about instructor practices or a more fine-grained approach to looking at time use, I think.

John: That could be an interesting follow-up study.

Sally: Yeah, I think now that we’ve shown that these incentives work and can be very powerful, getting inside the black box of the mechanisms is our next step. And we’re currently working with an online university where everything instructors do and everything students do is passively recorded because they’re interacting online. And we think that will give us more fine-grained data. If you think about it… If I asked you last week, “How many hours did you spend on email? How many hours did you spend prepping your course?” It’s really hard to recall that without a lot of noise in there. And I think the other thing we discovered after presenting the results, talking to instructors, talking to administrators, talking to other people who work in this area, is that a lot of it might not be captured by time spent. Some of it might be… you learn the names of the students in your class… when you saw a student in your class who was on their phone, instead of letting them be on their phone, you said, “Please put your phone away, please close your laptop.” And so it might be much more subtle practices that we need to either observe classrooms or do focus groups or really get more qualitative data. And that’s something we’re really interested in doing.

John: Because it could be motivational, it could be that instructors who know that they’re going to get paid more might put a little more effort into those things that may not be captured by those measures. One hypothesis I was thinking is that it could also be that the existence of the incentives might perhaps encourage people to develop a growth mindset. And there’s a lot of evidence that faculty that have a growth mindset tend to have students that do better, or at least that have narrower performance gaps.

Sally: That would be really interesting, I think, for evaluating. We’re already surveying instructors at baseline and throughout, and so we could see if the characteristics of the instructors change, or their attitudes. We do ask them their attitudes about teaching and their view of students. For instance, questions like “most of my students’ achievement is determined by background” or “I’m able, with enough effort, to change how my students achieve.” And so we can look more closely at those questions. We use them mainly as baseline questions to characterize teachers’ attitudes. I don’t think we’ve looked to see whether their attitudes change. So that might be an interesting approach; we should take a look at those data.

Andy: One other mechanism that’s opened up by our incentives is that what we’re doing is essentially giving people a big influx of cash at the beginning of the semester. And so this could also just open up resources or capacity constraints that they had without these incentives. So for example, you could imagine someone who’s also working part time, who now gets a check at the beginning of the semester based on all of these potential student gains and doesn’t have to spend as much time working in their other job. Things like that could be potential mechanisms and could also explain why adjunct faculty have this really large differential effect. But again, we don’t have that hard data. And so it’s something that’s really interesting to us. But, unfortunately, not cleanly identified by our data.

Sally: One thing that we received was an unsolicited text message exchange between an instructor and their student, which I thought was interesting, because my students don’t have my cell phone number. But, things like that, giving out your number, exchanging text messages, the sort of individual support that I think, especially for community college students who may be less connected to campus, less connected to the community, could be really important. And so we want to think more about that sort of sense of connection to the community, to your instructor, to your fellow students.

Rebecca: I’m really excited to find out what your next round of studies reveals, because you have interesting directions that you can go in right now. And then really valuable information that you’ve already discovered.

Sally: Yeah, I think another interesting direction that we’re very interested in… Andy’s talked about whether this model is sustainable, especially as schools move more and more over to this adjunct model. So, another thing we want to understand is if a school offers these kinds of incentives, what kinds of people do you attract? Are you better able to retain your high-quality instructors? Do you recruit higher-quality instructors? So, that’s another question we’d really like to answer in future studies.

John: Because you’re offering higher pay to the faculty that are more effective, which could have an interesting self-selection effect on the faculty composition.

Sally: Exactly.

Andy: Yeah. And if anything, our results suggest that it takes a little bit of experience with these contracts to really appreciate them. So, moving to a model where you have these types of contracts, there might be a transition period where it was challenging before it became something that people understood as beneficial to themselves.

Rebecca: And not just to themselves, to the bigger educational community. Yeah.
So, we always wrap up by asking, what’s next?

Andy: I can talk about a project Sally and I are working on right now, as we talked about earlier, summer enrollment was seen as this potential mechanism to drive student success. And so we did a really simple experiment where we just randomly assigned people to receive a free summer course and then tracked their outcomes for the two years subsequent to that summer course. So, we’re wrapping up a working paper on that. And it looks like summer has this really nice long-term effect that would be kind of hidden in the short-term data because of the fact that you don’t see impacts on retention between spring and fall. But, you do see these impacts on credit accumulation in the short run and then graduation and transfers over these shorter windows as well.

Sally: So, I think as behavioral economists, something that Andy and I are really interested in is the intersection between preferences for contracts, preferences to attend in the summer, and the impact of those kinds of contracts on your future outcome. For example, we find that instructors don’t really like these loss contracts, but they perform really well under them. We find that students don’t really want a summer scholarship, but it has a really big impact on their future outcomes. And so trying to understand this intersection of your preferences for the here and now, and how these things may or may not translate into your future outcomes, is something that I think will be really interesting for future research.

John: This is a topic we keep coming back to in other contexts, that in terms of student metacognition, that the approaches that we know are most effective for learning are the things that students tend to value the least, and tend to perceive as being less important. So this is a pretty general problem, I think.

Andy: And isn’t there data showing how students give worse evaluations to teachers that cause greater amounts of learning?

John: There was that Harvard study a few months ago in a physics program there, where they found that students believed active learning to be less effective in terms of their learning. And yet the students who were exposed to active learning techniques ended up with larger learning gains. And that was also a randomized controlled trial.

Andy: Yeah.

Rebecca: People just don’t know what’s good for them.

Sally: But it’s hard, because I was trained at the University of Chicago. I am a behavioral economist, but I’m also a University of Chicago economist. And I believe in respecting people’s preferences and their choices. And so we have to be very careful about how to take these complex findings and think about how to translate them into policy.

John: In terms of gentle nudges that work well.

Rebecca: Well thank you so much for joining us, it’s been really interesting.

John: It’s always better when there’s economists on.

Rebecca: I’m always outnumbered.

John: This has been fascinating, thank you.

Andy: Thank you.

Sally: Thank you so much for having us.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

[MUSIC]