119. Faculty Incentives

If faculty were paid more when their students learned more, would student learning increase? In this episode, Sally Sadoff and Andy Brownback join us to discuss their recent study that provides some interesting results on this issue. Sally is an Associate Professor of Economics and Strategic Management in the Rady School of Management at the University of California at San Diego. Andy’s an Assistant Professor of Economics in the Sam M. Walton College of Business at the University of Arkansas.

Show Notes

Transcript

John: If faculty were paid more when their students learned more, would student learning increase? In this episode, we discuss a recent study that provides some interesting results on this issue.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together, we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

[MUSIC]

Rebecca: Our guests today are Sally Sadoff and Andy Brownback. Sally is an Associate Professor of Economics and Strategic Management in the Rady School of Management at the University of California at San Diego. Andy’s an Assistant Professor of Economics in the Sam M. Walton College of Business at the University of Arkansas. Welcome.

Andy: Thank you.

Sally: Thanks. Great to be here.

John: Our teas today are:

Andy: I wanted to represent Fayetteville, so I went to the tea shop and I got what I have been told is the world’s greatest cup of Earl Grey tea. [LAUGHTER] It’s an award winning cup. They promised me this. [LAUGHTER]

Rebecca: Does it taste award winning?

Andy: I haven’t had enough of it yet. [LAUGHTER]

Rebecca: Reserve judgment?

Andy: I don’t give these awards out lightly.

Rebecca: And a nice lineup of mugs on your desk too.

Andy: Yes, many, too many. So this is just a way I avoid doing dishes. [LAUGHTER]

John: And Sally?

Sally: I’m drinking coffee but I’m on California time, so I’m excused.

Rebecca: And I’m drinking Spice of Life today, a white tea, John.

John: Pretty good.

Rebecca: Unusual, right?

John: And I’m drinking Oolong tea…

Rebecca: You’re drinking nothing cause you forgot the cup of tea. [LAUGHTER]

John: …if I remember where I put it. I think I may have left it in the office before I came over here. But I did make a cup of Oolong tea, and I did have a sip of it before, and I will have it right after this.

Rebecca: I intended to drink tea. [LAUGHTER]

John: We invited you here to talk about your forthcoming article on improving college instruction through incentives. Could you start by giving us a general overview of this study?

Andy: In our study, we partnered with a large community college in Indiana called Ivy Tech. What Ivy Tech wanted to do was incentivize instructors based on student performance. At the same time, they were rolling out a new set of large, end-of-semester, comprehensive and, importantly, objective exams. And so we were able to partner with them to use those exams to incentivize instructors based on the outcomes of students. So, that’s kind of the high-level overview of what we were doing. I know we’ll get into more detail in a bit.

Rebecca: Can you talk a little bit about what motivated the study in the first place?

Andy: Yeah, absolutely. So, community colleges are obviously really important. They’re thought of as a pathway to the middle class. At the same time, the rates of success at the community college level have been relatively low. And so if we think of community colleges as a particularly good tool for upward mobility, then it needs to be the case that they achieve better outcomes. With the current low rates of success, students can also spend long periods accruing debt without receiving the benefit of the higher incomes that come with a college education. So, there’s a whole host of factors coming into play to make these both important and potentially underachieving tools for upward mobility. The other side of the equation is that the faculty at community colleges are predominantly, or at least to a large percentage, adjunct faculty with really low pay, in what could be seen as an unsustainable business model where you’re relying on people to regularly teach these classes on short-term, non-guaranteed contracts. So, we wanted to address both sides: the student achievement side as well as the personnel side of the community college setting.

John: And in terms of student success, specifically, I think you’re referring to the proportion of students that move through to a four-year degree program as being lower than what students intended. Is that the primary metric?

Andy: Yes, that’s one of the primary metrics. You can think of community colleges as having two goals: one being graduating students with associate degrees and another being transferring students to four-year programs. Now, Sally will know the exact number, but for a large percentage of students attending community colleges, I forget what the number is, the ultimate goal is to eventually transfer and graduate from a four-year college with a bachelor’s degree. So, there are kind of two ultimate goals. In the process of achieving those goals, there are also gains from simply taking additional classes or receiving certification in certain skills, and that’s something that a lot of people go to community college to do. But our primary long-term concerns are graduation rates and transfer rates.

Sally: Yeah, I think it’s really fascinating. Most of my work up until now has been at the K-12 level. And if you look at education economists, there’s a lot of focus on the K-12 level: looking at teacher quality and asking how we can improve it. When we came to the college level, there’s been work showing how important it is who your instructor is. Instructor quality matters a lot. But we couldn’t find any work looking at how we can improve instructor quality at the college level. I think it’s really interesting because community colleges are getting a lot of attention from policymakers: they’re low cost, and they expand access to underrepresented populations that normally don’t have as much access to college: minority students, students who are first-generation college goers, students who are working and so can’t necessarily travel to a college. And so we think that community colleges provide amazing opportunities to students but, as Andy was saying, they really struggle with success rates. About 80% of students entering a community college say they intend to transfer to a four-year school, and fewer than 30% end up doing so. Fewer than 40% of students graduate with any kind of degree within six years. And these colleges, and we see this working with Ivy Tech, are incredibly dedicated. The administrators and the teachers there are incredibly dedicated, but they’re working with students who are struggling, and so there’s a lot of room for improvement. And what we found that’s interesting, I think, at community colleges, is that there’s actually more room to think about how to structure employment contracts than there is at the K-12 level, because often the instructors aren’t unionized and, as Andy was saying, they work under these short-term, flexible contracts. So there’s a lot of flexibility. And really, people haven’t thought much about how to structure these contracts in a way that can improve performance and motivate both instructors and students.

John: It’s a fascinating study. For those of our listeners who aren’t familiar with field experiments, could you tell us a little bit about what a field experiment is?

Andy: Yeah, absolutely. So a field experiment is, in our case, a test of policy. And the way it’s experimentally designed is through what would be known as a randomized controlled trial, meaning that you take a sample of people from a population and you split that sample into a treatment and a control group, and you do this randomly… and that’s the really important part. Because if you test a policy with an assignment that’s anything but random, then you can’t guarantee that these two groups are otherwise equal. But in our case, we’re going to randomly assign people to be in the treatment group or the control group. So, the treatment group will receive the policy, the control group will continue in the current status quo. And then what we will do is look at outcomes and how they differ between the two groups. Now, since the assignment to the two groups is random, again, there’s no mechanical correlation between treatment assignment and any of the characteristics of the groups themselves. Then we can know that any differences subsequent to the assignment are results of the treatment itself and not any sort of spurious correlations or selection biases.
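
For readers who find a sketch helpful, here is a minimal illustration of the random assignment Andy describes; the instructor list, seed, and outcome measure are hypothetical stand-ins, not the study’s actual data or code.

```python
import random

# Hypothetical illustration only: the instructor IDs, seed, and outcome measure
# below are made-up stand-ins, not data or code from the study.
instructors = [f"instructor_{i}" for i in range(1, 101)]

random.seed(7)               # fixed seed so the assignment is reproducible
random.shuffle(instructors)  # random order breaks any link to instructor characteristics

half = len(instructors) // 2
treatment = instructors[:half]  # offered the incentive contract
control = instructors[half:]    # continue under the status quo

def group_mean(outcomes, group):
    """Average end-of-semester outcome (e.g., exam pass rate) for one group."""
    return sum(outcomes[i] for i in group) / len(group)

# Once outcomes are observed, the estimated treatment effect is the difference
# in means: group_mean(outcomes, treatment) - group_mean(outcomes, control)
```

Because the split is random, any systematic difference in those group means can be attributed to the policy itself rather than to pre-existing differences between the groups.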

Sally: Yes, I think listeners are probably familiar with this kind of experiment from testing a drug or a vaccine, those kinds of clinical trials. More and more, economists have brought those models in for testing policies. And I think these experiments gained a lot of attention because of the recent Nobel Prize, which highlighted how powerful they can be for evaluating policies. Their use is growing, and it’s really thanks to partners like Ivy Tech that are willing to let us come in and test things in this way. Because, although people are very comfortable with the idea of testing a drug in a clinical trial, sometimes there’s discomfort with testing policies in this randomized way. And so we’re really grateful when we have partners who are willing to let us come in, try these new policies, and implement them in this randomized way where some instructors receive incentives and some won’t.

John: And in a sense, we’re always testing things; it’s just that we don’t always measure the effects. When you try something new in your class, you are doing an experiment. But unless you have a control group to compare it to, you can’t really assess whether the gain is due to that particular intervention or something else that was happening.

Sally: That’s exactly right, and we really try to emphasize exactly that to people: you’re always trying things, rolling out new policies, or stopping one thing and doing it differently. And if you’re going to be making these changes, do it in a way where you can learn from them, instead of just trying something and only afterwards stepping back and trying to understand whether it worked or not. How do you know whether something is working unless you can compare it to a proper control group?

Andy: And just to emphasize the importance of this methodology, there’s a lot of policy that gets rolled out based on bad data and bad evidence. And so if you’re using a poorly designed experiment, or simply looking at correlational data and rolling out policy, what you could be doing might not be effective, it might be actively detrimental to students. But once you have this clear causal evidence, we can be really confident in the policies we roll out and understand the cost-benefit analysis of the policies prior to implementation.

Rebecca: Can you talk a little bit about the policy that you were testing in this particular experiment?

Andy: Yeah, so as we talked about, we wanted to roll out incentives for instructors based on student performance. And we based these incentives on objective, comprehensive exams for a variety of courses in a variety of departments. The exams were designed outside of the classroom, in the sense that they were designed by deans and department heads and represented the types of material that they wanted the students to master by the end of the semester. So, those formed the basis of the incentives that we would be giving to instructors. Now, we didn’t just want to offer incentives based on outcomes. We wanted these to be potentially as powerful as possible. So, we leveraged an approach that Sally has researched in the past in a paper with Roland Fryer, John List, and Steve Levitt, where they looked at loss contracts.

Our incentives were actually such that every instructor would receive $50 for every student who passed the exam, with passing defined as receiving a 70% or higher. So, we framed these as losses. We delivered incentives at the beginning of the semester, as if half of the students in an instructor’s course had passed the exam. Now, this established it as sort of a target, but it also allowed us to leverage this idea of loss aversion: that instructors would value keeping money potentially more than they value gaining an equivalent amount of money. At the end of the semester, the students would take this exam, we would have these objective evaluations of how many students passed, and then we would calculate the final payments. If the final payment exceeded the initial payment, instructors would receive an additional payment. If the final payment was less than the initial payment, we would claw back some of that payment. And this was all explained at the outset of the experiment. And again, this loss framing leverages a long line of research in behavioral economics about how much more motivating it can be to face potential losses than equivalent gains.

Sally: Yeah, so just to give an example, if you have 20 students, and you get $50 per student who passes, half of your students passing, that would be $500. So we would send you a check for $500 at the beginning of the year, the beginning of the semester. At the end of the semester, if fewer than 10 of your students pass the exam, say only eight students pass the exam, you have to write us a check back for $100. If more than 10 of your students pass the exam, say 12 of your students pass the exam, then we send you a check for an additional hundred dollars. And we found in previous work that having this money in your bank account and knowing that you potentially could lose it if your students don’t pass the exam can be very motivating, compared with rewards that you only receive at the end of the semester.
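
As a quick illustration of the settlement arithmetic Sally just walked through, here is a small sketch; the function name and parameters are ours for illustration, not anything from the study.

```python
def settle_incentive(enrolled, passed, per_student=50, advance_share=0.5):
    """Sketch of the loss-framed bonus settlement; names and defaults are
    illustrative assumptions, not the study's code.

    Instructors receive an upfront check as if `advance_share` of enrolled
    students will pass (70% or higher on the exam). At the end of the semester
    the bonus is trued up: a positive return value means an additional check,
    a negative value means part of the advance is clawed back.
    """
    upfront = per_student * enrolled * advance_share  # paid at the start of the semester
    earned = per_student * passed                     # owed based on actual passes
    return earned - upfront

# Sally's example: 20 students enrolled, so a $500 advance.
print(settle_incentive(20, 8))   # -100.0 -> instructor writes back a $100 check
print(settle_incentive(20, 12))  #  100.0 -> instructor receives an additional $100
```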

Andy: Yeah. And one point about the logistics real quick is that these initial targets were based on enrollment as of what they call the census date. It’s not the drop deadline in the sense that you can’t drop afterwards, but it’s the deadline at which point dropping a course is no longer costless. All the students at this point in the course are enrolled sort of formally, and instructors will receive the upfront incentives based on that number of students. So, there’s multiple margins at which the instructors can influence student outcomes.

John: One thing that’s probably worth noting is that one advantage of doing it in a community college is that it’s much easier to have that standardized testing. I know in a lot of four-year colleges, faculty would object to having to assign an externally designed exam at the end of the term, while in community colleges that type of standardization is much more common, which makes it a bit easier to design a study like this, I would think.

Sally: Yeah, that may be the case. Interestingly, even for accreditation, for example, you often need to show that the test has certain questions on it. I know in large classes with many sections, they often write the exam together. The goal at Ivy Tech was to create this bank of questions that every year’s tests would be drawn from, and I think moving classes over to that model is interesting. And there’s more openness to it than I thought. For example, when we started this study, I thought, “Oh, the only courses we’re going to get in this study are going to be math and maybe some science courses.” And what’s really interesting to me about this study is that, unlike at the K-12 level, where it’s primarily focused on math and reading, we have a really wide range of courses: anatomy and physiology, art history, nursing, psychology, criminology, sociology. And so what it showed me was that you can really get a wide range of courses into this kind of framework. It doesn’t cover every element of the course. But, for example, in the English courses, one thing they were moving toward was evaluating the essays in a more objective way, where two readers would both rate the essays and compare ratings. And as colleges move toward those models, I think this kind of framework will be more and more implementable.

John: It’s certainly good for assessment, and it’s certainly good for evaluating the effectiveness of innovations in instruction. There’s a lot to be said for it. I’m just thinking, at my college I know in many departments there’d be some objections to this. We used to have a standardized common final in the economics department where I teach and people objected to that for a long time, and we eventually moved away from it, but we are talking about doing something similar with at least some subset of questions that would be standard, for that sort of purpose.

Sally: Right. And I think always a concern about these kinds of studies is if the incentive is based on the objective part of the exam that can be tested and assessed in that way, does it take away from the other parts of the course that are more qualitative or more specific to each instructor? And so one thing we were really careful about in this study was to look at not just performance on the test, but how did students do in the class overall, how did they do on the other courses they were taking at the same time? How did they do in future coursework? And I think that’s really important that it’s not just all about teaching to this one assessment that’s going to be used for the incentive.

John: Given the strong findings on loss aversion in terms of how people find losses much more painful than gains of equivalent value, how did faculty react to that incentive structure? I believe you surveyed them on that early on, and then again later.

Andy: Yes, at the outset, at baseline, the faculty did not like the idea of these incentives. This is evidence-based: we have survey information showing that people were willing to sacrifice a rather large amount of money to have these contracts converted into gain-based contracts that wouldn’t be paid out until the end of the semester. Anecdotally, this fits with my experience when I went to explain these contracts. There was quite a bit of pushback asking why these were framed in this way, and some people potentially wanted to approach them differently. Interestingly, this was very heterogeneous across departments. The accountants were like, “Okay, well, I know what to do with this,” [LAUGHTER] and put it away, and the psychologists were particularly upset because they knew exactly what we were doing. But, the data show that with experience, our treatment group, on average, has no preference between a loss contract and a gain contract, meaning that a large amount of this distrust of the contract could be attributable to just a lack of experience with this style of contract, and that as instructors gained more experience, they also gained a comfort level with the contracts.

John: I still wouldn’t rule out loss aversion as being a factor, but it is interesting that it gets reduced after they’ve experienced it.

Andy: Oh, absolutely. So, that’s not to say that loss aversion isn’t still a factor. But, as you gain experience with these contracts, maybe you start to appreciate the motivating qualities of loss aversion. So, maybe you understand that although these contracts cause you to work harder, or cause you to exert more effort around a certain goal, that by increasing that effort, you’re actually achieving greater outcomes for yourself. And if that’s the case, then they’re still motivating you through loss aversion, but you may not be as averse to the contracts as you were ex ante.

Sally: Yeah, so it may be that people are using them as a type of commitment contract where they know that yes, it will be painful while I’m in the contract, but it’s a way to motivate me to work harder, and I’ll walk home with more money than I would otherwise.

John: Just a couple of months ago, we did a podcast on commitment devices with Dean Karlan…

Sally: Oh nice.

John: …and we talked a little bit about that, and StickK.com, the site he created for that. Now, we’ve talked a little bit about the incentives for faculty, but you also introduce an incentive for students. Could you talk a little bit about that as well?

Andy: Yeah. So, the student side was only in the spring semester. In the fall semester, we had a pure control group and instructor incentives only. As we moved to the spring, we then cross-randomized those two groups with student incentives. The students were incentivized with the following possibility: if they passed the exam, that is, received a 70% or higher, they would get a voucher for free tuition for a summer course, which could be worth up to about $400 of tuition. So, now students are incentivized alongside the faculty. And we wanted to test, first, whether student incentives were effective and, second, whether they made the instructor incentives even more effective.

Sally: Yes, we were interested in whether there are complementarities between student incentives and instructor incentives. We knew from prior work that offering student incentives alone has, at best, modest effects. But we thought that if we put them in combination with instructor incentives, you could imagine the instructor saying to the students, “Look, guys, you have something at stake here too…” and it could create this positive cycle.

Rebecca: So can you tell us a little bit about the results?

Andy: That’s on page 22. [LAUGHTER] We found that the instructor incentives were really effective. They increased student performance by about 0.2 standard deviations on those exams, which is a really nice effect in this literature. What’s also exciting is that, even if you don’t believe our tests or don’t like our tests, the incentives also reduced course dropout by 3.7 percentage points, which is about a 17% decline in the course dropout rate. They raised grades in the course by over a tenth of a standard deviation, and even if you take out the effect of the exam itself, the course grades still go up by about a tenth of a standard deviation. And these positive results spill over into other courses: students complete other courses at higher rates, they accumulate more credits, and they even go on to transfer at higher rates. So, that’s the instructor incentives branch of the study. When we look at the student incentives by themselves, we see essentially no effects on any key outcomes that we care about. When we look at them in combination, they actually don’t improve the impact of instructor incentives. If anything, we see a pretty small negative effect that isn’t statistically significant. There simply doesn’t seem to be any impact of the student incentives. Now, this could be attributable to our specific student incentives. But you’d have to believe that they have essentially no value or very limited value to say that the result is just an artifact of incentivizing students in this particular way.

John: When you first were talking about it, one of the things that struck me as… I think it was W.C. Fields who was talking about a contest where he said the first prize was a week in Philadelphia. Second prize was two weeks in Philadelphia. [LAUGHTER]

Sally: So, Andy and I are doing a separate study on summer school. And we do find that students do not want to attend school in the summer. But, interestingly, if we can get them to attend school in the summer, it has a really big impact on helping them graduate sooner. So, we’re really fascinated with understanding how we can address this aversion to summer school. But that may be for another podcast. We agree that the incentive for students may not have been very motivating. Just to return to the results about the instructor incentives, I think there are some really interesting results there. First, something that’s unique to the college setting that you don’t find in the K-12 setting is this really large problem of students enrolling in a course, paying for the course, and then not completing the course. About a quarter of students fail to complete courses that they’ve enrolled in and paid for, and this is a big struggle at community colleges. So, just increasing these rates of persistence in the course, we think, has a really large impact. And what seems to be happening is that instructor incentives get students to keep coming to their course, and those students then go to their other classes as well. So it has this really positive reinforcement effect on students completing all of the courses they’re taking that semester. I think another really exciting result is that a year after our program ends, when we’ve stopped giving anybody incentives, you see these really large impacts on transfers to four-year schools… about a 20% increase in the rate of transferring into a four-year school, which we think is really exciting, because, as we talked about, the primary goal of community college is to get these students to transfer to four-year schools, and they really struggle with that. So we see that this could have a really large impact.

John: And education is costly. And if we get more people finishing, the private and social returns both go up significantly. And the cost of doing this is relatively low. It’s substantially less costly than the student intervention.

Sally: Yeah, it’s incredibly low, about $25 per student. One thing that’s interesting, again, about community colleges is that, because adjunct faculty are not paid very well, you can offer relatively cheap incentives that represent a significant bonus. For these adjunct instructors, the average bonus represented a 20% increase on their baseline salary. Our adjuncts are making about $1,700 for a 16-week course. So, you can get a lot of bang for your buck with adjunct instructors, and we see the largest impact among adjunct instructors. Those are the instructors that really responded to the incentives. And adjunct instructors are increasingly becoming the model for schools, not just community colleges, but four-year schools as well. They represent about 50 to 80% of instructors at four-year and two-year schools, respectively. And that’s on the rise, so we expect it to increase in the future.

John: And that’s another topic we actually address in a podcast that was released on December 18.

Andy: So, I think the adjunct effect is also one that’s worth emphasizing, just because the model of using adjunct faculty, or increasingly using adjunct faculty, is unsustainable at the current pay rates. These contracts are more flexible, and adjunct instructors are more used to working on temporary contracts. If it turns out that you can’t continue to pay people such small amounts for so much work, then how do you design contracts in the future that maximize student outcomes? If we’re in a world where we know we have to redesign these contracts, what we wanted to be able to do with this study is say, “This is a way you can redesign the contracts and achieve the outcomes that you hope to achieve.”

John: That works well when the test is administered or designed externally. There would be some incentive issues, though, if the instructors had more control over the test or the assessment of how well their students did, I would think.

Andy: Yeah, absolutely. And that was at the front of our minds while we were designing the study: are we simply motivating people to either teach to the test or to lie to us outright? Based on the way the exams were designed, they are both objective and, for the most part, externally graded. It’s still possible, for example, for a teacher to just erase answers and write in the correct answer if they wanted. But there’s a certain point at which you have to start trusting your subjects not to attempt to deceive you. And so we kept that in mind as we were thinking about how to design the study.

Rebecca: Did you have any feedback from faculty at the end of the study, when they discovered that your incentive worked, for example?

Andy: So, we have been in touch with our partner in the administration; we haven’t been in touch with the faculty themselves about our working paper or, now, the forthcoming paper. So, we hadn’t gotten feedback at that point. We did get feedback in the process of the study, that is, at the end of the fall semester and at the end of the spring semester. And just like the preferences for these contracts, the feedback was, of course, not universally positive, but for the most part, the majority of people appreciated the extra money. And I guess this is something that we haven’t emphasized yet, but we didn’t really change anyone’s contract; they were still operating under their existing contracts, and these incentives served as a bonus on top of those contracts. So, there was very little room to think of these as a really detrimental change to your contract, because the worst-case scenario is that you were under the exact same contract as you were previously.

John: If everybody failed, or if everybody came in below the threshold.

Andy: If literally zero percent of your students were able to pass this exam, you were in the same world you were previously.

Sally: We had high rates of sign up in the fall, and then even in the spring semester, there were people in the fall who hadn’t signed up that chose to sign up when they had a chance again, and all but one instructor continued the study from the fall to the spring. So, I think that instructors did like participating and we generally got positive feedback.

John: So, you got really strong results for the incentives for instructors with larger results for the lower-paid instructors… for adjuncts. Was there any evidence of the mechanism by which this affected student outcomes?

Andy: So, we look into mechanisms in two ways. One, we look at self-reports of time use. And we really don’t see any significant differences between the treatment and control groups, so nothing that would clearly identify a change in behavior. Now, we have one caveat to this: when we put the time-use survey out, we limited each activity to 16 hours, not thinking about how many of our instructors might spend more than 16 hours on a given activity. And that was made pretty obvious with the outside-work option. So it is possible that the responses are top-coded there and we’re unable to differentiate between the two groups. We also look at student evaluations, and we don’t see any significant differences between the way students evaluate instructors in the treatment versus the control group. So, we don’t really see a specific mechanism that’s driving these differences in student outcomes. And if we really wanted to try to isolate these things, we would need better or more objective data about instructor practices, or a more fine-grained approach to looking at time use, I think.

John: That could be an interesting follow-up study.

Sally: Yeah, I think now that we’ve shown that these incentives work and can be very powerful, getting inside the black box of the mechanisms is our next step. And we’re currently working with an online university where everything instructors do and everything students do is passively recorded because they’re interacting online. And we think that will give us more fine-grained data. If you think about it… If I asked you last week, “How many hours did you spend on email? How many hours did you spend prepping your course?” It’s really hard to recall that without a lot of noise in there. And I think the other thing we discovered after presenting the results, talking to instructors, talking to administrators, talking to other people who work in this area, is that a lot of it might not be captured by time spent. Some of it might be… you learn the names of the students in your class… when you saw a student in your class who was on their phone, instead of letting them be on their phone, you said, “Please put your phone away, please close your laptop.” And so it might be much more subtle practices that we need to either observe classrooms or do focus groups or really get more qualitative data. And that’s something we’re really interested in doing.

John: Because it could be motivational: instructors who know that they’re going to get paid more might put a little more effort into those things that may not be captured by those measures. One hypothesis I was considering is that the existence of the incentives might also encourage people to develop a growth mindset. And there’s a lot of evidence that faculty who have a growth mindset tend to have students who do better, or at least have narrower performance gaps.

Sally: That would be really interesting to evaluate, I think. We’re already surveying instructors at baseline and throughout, and so we could see whether the characteristics of the instructors, or their attitudes, change. We do ask them their attitudes about teaching and their view of students, for instance, questions like “most of my students’ achievement is determined by their background” or “I’m able, with enough effort, to change how my students achieve.” And so we can look more closely at those questions. We use them mainly as baseline questions to characterize teachers’ attitudes. I don’t think we’ve looked to see whether their attitudes change, so that might be an interesting approach; we should take a look at those data.

Andy: One other mechanism that’s opened up by our incentives is that what we’re doing is essentially giving people a big influx of cash at the beginning of the semester. And so this could also just open up resources or capacity constraints that they had without these incentives. So for example, you could imagine someone who’s also working part time, who now gets a check at the beginning of the semester based on all of these potential student gains and doesn’t have to spend as much time working in their other job. Things like that could be potential mechanisms and could also explain why adjunct faculty have this really large differential effect. But again, we don’t have that hard data. And so it’s something that’s really interesting to us. But, unfortunately, not cleanly identified by our data.

Sally: One thing that we received is an unsolicited text message exchange between an instructor and their student, which I thought was interesting, because my students don’t have my cell phone number. But, things like that, giving out your number, exchanging text messages, the sort of individual support that I think, especially for community college students who may be less connected to campus, less connected to the community, could be really important. And so we want to think more about that sort of sense of connection to the community, to your instructor, to your fellow students.

Rebecca: I’m really excited to find out what your next round of studies reveals, because you have interesting directions that you can go in right now. And then really valuable information that you’ve already discovered.

Sally: Yeah, I think another interesting direction that we’re very interested in… Andy’s talked about whether this model is sustainable, especially as schools move more and more over to this adjunct model. So, another thing we want to understand is, if a school offers these kinds of incentives, what kinds of people do you attract? Are you better able to retain your high-quality instructors? Do you recruit higher-quality instructors? So, that’s another question we’d really like to answer in future studies.

John: Because you’re offering higher pay to the faculty that are more effective, which could have an interesting self-selection effect on the faculty composition.

Sally: Exactly.

Andy: Yeah. And if anything, our results suggest that it takes a little bit of experience with these contracts to really appreciate them. So, moving to a model where you have these types of contracts, there might be a transition period where it was challenging before it became something that people understood as beneficial to themselves.

Rebecca: And not just to themselves, but to the bigger educational community. So, we always wrap up by asking: what’s next?

Andy: I can talk about a project Sally and I are working on right now. As we talked about earlier, summer enrollment was seen as a potential mechanism to drive student success. So we did a really simple experiment where we just randomly assigned people to receive a free summer course and then tracked their outcomes for the two years subsequent to that summer course. We’re wrapping up a working paper on that. And it looks like summer has a really nice long-term effect that would be kind of hidden in the short-term data, because you don’t see impacts on retention between spring and fall. But you do see impacts on credit accumulation in the short run, and then graduation and transfers over these shorter windows as well.

Sally: So, I think as behavioral economists, something that Andy and I are really interested in is the intersection between preferences for contracts, preferences to attend in the summer, and the impact of those kinds of contracts on your future outcome. For example, we find that instructors don’t really like these loss contracts, but they perform really well under them. We find that students don’t really want a summer scholarship, but it has a really big impact on their future outcomes. And so trying to understand this intersection of your preferences for the here and now, and how these things may or may not translate into your future outcomes, is something that I think will be really interesting for future research.

John: This is a topic we keep coming back to in other contexts: in terms of student metacognition, the approaches that we know are most effective for learning are the things that students tend to value the least and tend to perceive as being less important. So this is a pretty general problem, I think.

Andy: And isn’t there data showing how students give worse evaluations to teachers that cause greater amounts of learning?

John: There was that Harvard study a few months ago, in a physics program there, where they found that students believed active learning to be less effective for their learning. And yet the students who were exposed to active learning techniques ended up with larger learning gains. And that was also a randomized controlled trial.

Andy: Yeah.

Rebecca: People just don’t know what’s good for them.

Sally: But it’s hard, because I was trained at the University of Chicago. I am a behavioral economist, but I’m also a University of Chicago economist, and I believe in respecting people’s preferences and their choices. And so we have to be very careful about how we take these complex findings and think about how to translate them into policy.

John: In terms of gentle nudges that work well.

Rebecca: Well thank you so much for joining us, it’s been really interesting.

John: It’s always better when there’s economists on.

Rebecca: I’m always outnumbered.

John: This has been fascinating, thank you.

Andy: Thank you.

Sally: Thank you so much for having us.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

[MUSIC]

113. Podcasting for Professional Development

This is a live recording of a session in which we discussed podcasting for professional development on November 21, 2019 at the Online Learning Consortium’s Accelerate Conference. This episode provides a behind-the-scenes look at the Tea for Teaching podcast and an introduction to how to start your own podcast.

Show Notes

Transcript

Rebecca: Today we’re recording live from Disney World at the OLC Accelerate Conference. Today’s episode is a behind the scenes look at the Tea for Teaching podcast and an introduction to how to start your own podcast.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together, we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

[MUSIC]

John: Today’s teas are:

Rebecca: Well, I was supposed to have my favorite tea, but my co-host forgot to bring my tea. So I have Awake tea.

John: I specifically said I would be providing the tea. I thought she would be packing it. In the envelope you’ve received, we would have had two teas, but there was a communications gap. But you do have one tea which is one of my favorites. It’s ginger peach black tea from Tea Republic, which is what I’m drinking today.

Rebecca: Like our podcast, we want this session to be conversational. So we encourage you to ask questions throughout the session, rather than leaving them for the end. Ask for clarifications, ask for insider knowledge, or share your own perspective. Judie Littlejohn, who is wearing the Minnie Mouse ears, is assisting us today and has a microphone available for you to ask questions. We ask that, before you ask a question, you state your name; she’ll also collect your written name so we don’t spell it incorrectly in our transcript. Don’t worry, we’ll edit the episode so that we all sound great, because we do heavy editing. So, please help us make this session and this podcast episode really useful by participating throughout. And we have a link at the very end to a digital resource with all the things that we’re going to talk about in much more detail, if you want to visit that later.

John: So, we thought we’d talk a little bit about how we got started. We’ve been running a teaching center at Oswego for a while. We’ve been working together for, I think, five years or so. A couple of years ago, we both came to the idea that a podcast might be rather effective. And we’d both been listening to podcasts for a number of reasons. I travel back and forth every summer to Duke and I do a lot of things in SUNY, so I’m driving across the state quite a bit. And podcasts were a convenient way of keeping myself entertained, but also doing some professional development work while driving.

Rebecca: And the summer before we actually decided that we were going to do this podcast, I had had a baby and I was desperately looking for intellectual stimulation. So, I spent many hours listening to every kind of podcast possible: I listened to stories, I listened to research, I listened to teaching podcasts. I listened to Teaching in Higher Ed, Design Education Today, and many, many others. That was all I was doing, day in and day out, because I couldn’t do anything else with two hands full.

John: At our teaching center, we normally offer about 300 workshops per year. But we noted that a lot of faculty weren’t able to attend because of time conflicts in their schedules, because they were adjuncts working at multiple institutions, or they were commuting over large distances. While we record these workshops as videos, busy faculty often would find it difficult to sit down at a computer and watch a recording of the workshops.

Rebecca: So, when we came back in the fall of 2017, we both were trying to explore ways to address that issue. And we said, well, what about a podcast? And we thought we’d experiment. So, this was all meant to be a small little experiment. The small little experiment started with needing a brand. I’m a designer, a graphic designer… So, everything needs a brand. You got to start there. That’s the only way things can get done. It was kind of challenging to come up with a name.

John: We did actually ask for suggestions from our faculty. And they came up with maybe six or seven names, none of which we both liked.

Rebecca: We had these roundtable sessions on a regular basis that were really popular, called Tea for Teaching, and at one point one of our colleagues said, “Well, why don’t you just use that?” And we decided to do that. It was a format that I had brought with me from another institution, so it’s a name that has traveled with me a bit. But, of course, now that means we’ve had to rebrand our roundtable discussions on campus.

John: This picture up here, and we’ll include a link to that in the show notes [included in slide show in the show notes file], shows a table in our conference room that we used when we did the Tea for Teaching sessions. And we’ve got probably a couple hundred different types of tea there.

Rebecca: Just a small selection, in case you’re not sure what you might want. For some, there’s too many choices. You spend your whole time trying to decide what it is that you’re going to drink during the discussion. We checked immediately and found out that teaforteaching.com was available… we got it… and then of course, we failed to check all of our social media, and it was not available. So, we use our personal Twitter handles and teaforteaching.com. So, this is a memorable lesson: that you need to make sure that you check all platforms for the name that you’re choosing for something ahead of time. But, of course, we were just doing a little experiment, so it wasn’t going to be a big deal. We decided from the start that we’re going to use an interview format and that meant that we needed to have guests. So, we started initially by reaching out to faculty that were on our campus that we knew that we’re doing interesting things. And, specifically, we started with our teaching award winner, and that was Casey Raymond.

John: He recorded a couple of podcasts with us. But our very second guest on the show was Judie Littlejohn, who we knew, but she drove to campus. We were a little nervous about doing something online at first… we were just getting started. So, she visited us and was our second guest. And then our first guest that we didn’t know came three months later: Doug McKee, who’s a host of the Teach Better podcast. He’s also an economist at Cornell, and I had followed him on Twitter and saw him post something about the Active Learning Initiative. So he was on episode 12, and Judie was episode 2.

Rebecca: I think we have another one of our guests in the audience today.

John: And we also have Michelle Miller, who’s been on for four podcasts as of this week. The most recent one just came out on Wednesday, which was on Neuromyths and Evidence-Based Practices, actually an OLC-sponsored study that originated at OLC a few years ago.

Rebecca: So we’ve had a number of guests that we’ve selected from articles and books, in the Chronicle or what have you, or from tweets that we found interesting, and then we pursued them… and as our guest list has evolved, we’ve been really excited that we’ve been able to highlight our local faculty in the mix. So, we have local, regional, national, and international guests. And it’s really nice because we’re able to elevate our local faculty, which was important to us from the beginning. This is also a moment just to remind everyone that this is supposed to be interactive, and no one has asked a question yet.

John: If anyone has any questions at any time, just raise your hand and Judie will get the microphone as close as she can. It’s only a 50 foot cable. If you’re further than that, you can come up to the microphone.

Rebecca: Make sure you ask questions related to the topics so that we have a nice dialogue during the actual episode that’s released.

STEVEN BORAWSKI: Steven Borawski. I’m from Tiffin University. I’ll be the first brave soul, I guess. I just got interested in podcasting… been listening forever. And one of my kids wanted to make one. And so I’m kind of curious, when you started to realize you had something more than just an experiment… when did it get kind of serious for you guys?

John: Within the first month or so, when the number of downloads went from being in the dozens to being in the hundreds. We were kind of surprised by that. And by the end of the first month, we had downloads in, I think, about 35 or 40 countries. And that was not anything we anticipated. We expected originally that it would be mostly people within our institution or within the SUNY system, because we did have people from other SUNY campuses on at first as well.

Rebecca: I think it was also a moment of success when we had some faculty who hadn’t come to any professional development workshops before come up to us and say that they had listened to an episode and found it really helpful. As soon as we had one of those interactions, it’s like, “Big win… we need to keep doing this.” And we both had those kinds of experiences multiple times over. So, it’s been really rewarding in that respect, because it was really for our own local campus that we did this. It wasn’t to have a bigger audience, although we have a bigger audience than that.

John: And we’re thrilled by that. And that makes it easier to get new guests. And we didn’t want to invite too many guests who were nationally known until we had a reasonable size audience. And once we started getting some nationally and internationally known guests, we felt much more comfortable asking people. But, one of the things that’s really amazed us is how, when we’ve asked people, they nearly always have said yes.

Rebecca: Which is great. [LAUGHTER] From the start, we mentioned that this was going to be an experiment. So, our initial recording studio was just our office… there was just this little tiny table in the corner of our office… We could close the office door. We put a little sign on the outside that says recording in progress. There was a big window and people could kind of see in and see what was happening. And early on, we were doing a lot of our recordings in the morning, coincidentally I think; it wasn’t intentional. So we didn’t realize that our office is on a major thoroughfare, apparently. It became really obvious in our recording with Robin DeRosa, which we recorded in the early evening after she gave a workshop on our campus. So, we heard noises like this: [sound of toilet flushing]. The women’s bathroom is located behind our office. [Blender sound] Our office is located adjacent to the cafe. [LAUGHTER]

John: …which is a Starbucks with a grinder and a blender and other noisy things there.

Rebecca: [Sound of a noisy cart rolling past] …that apparently receives deliveries at the exact same time we were recording. So luckily, Robin has a great sense of humor. [LAUGHTER] Because we had to stop every five seconds to allow for all of those noises to occur. So we weren’t getting those constantly in the background. And we were laughing pretty hard by the end because it was getting quite ridiculous.

John: There was one time where Rebecca started a sentence about three or four times and at no point did she get the whole sentence out. And it took me probably an hour to rebuild it from the different fragments of sentences into something that sounded like a complete sentence. And that podcast in particular took about an hour and a half to record and became about a 38-minute podcast once we removed all those second starts and other noises. So, that was the problem that we had. One of the first things we did was make sure that the microphones we use were dynamic mics rather than condenser mics. They’re not powered, and they don’t pick up noises as well from farther away; they’re based entirely on proximity, on the pressure of the sound wave. So, using a dynamic mic is a really good thing to do if you’re going to set up a podcast and record in the sort of environment we normally have. There are other mics that work really well in a studio and capture sound much more accurately, but we don’t really want all that sound to be captured from our office.

Rebecca: So, we have a small upgrade in our location. And I mean small. We’re in a borrowed space for our recordings, which is an old recording booth for an actual TV station. So, it’s just a teeny, tiny little closet, essentially, where we have strung up all kinds of fabric and things on the walls so that it absorbs some sound.

And there are a couple of things that we do to make people a little more comfortable. We usually start with a little informal chatter. And literally, it’s just that: a little informal conversation to get people to feel a little more comfortable. Most of our guests have never been recorded before, so they’re pretty nervous. And we have now noticed that there are all kinds of nervous tics that people have. Our favorite one is the rubbing of the pants. [Sound of hand rubbing against fabric] So, it’s like this on your leg constantly while you’re talking, which is really loud when you’re recording. We try to remind folks about some of those nervous habits and just get them to feel comfortable.

John: And the chairs that we borrowed for this room squeak whenever people turn or fidget, and when people are really nervous, they turn and fidget a lot. So, we do a fair amount of work on the editing there. [LAUGHTER]

JUDIE: We have a question.

CLIFFORD STUMME: My name is Cliff. I do a little bit of podcasting and online content creation myself. And usually the success metric for that is how much ad revenue am I creating… how much are sponsorships paying? When you guys are working on this, the first thing that comes into my mind is there’s got to be a lot of professional development or career benefits that go along with it. And maybe this is something you’re going to be talking about later. But, I’d really like to know what kinds of personal benefits you’ve seen from it, whether maybe opportunities to speak, or whether you guys just do it for the love of helping the people who listen.

John: We started doing this primarily as an alternative to some of our workshops, although we haven’t really cut back on workshops that much. And, mostly, it’s just been a lot of fun. Normally, when we do workshops, we have maybe 10 or 15 people there and we talk about ways of implementing various strategies, so we get to hear little bits and snippets of what people are doing. When we’re dealing with a podcast, we sit down and record with them, typically for an hour or so. Sometimes it’s a little bit less, but we get to explore what they’re doing in much more depth.

Rebecca: I think it’s a really great opportunity to get to know so many really great researchers and teachers, both on our campus and nationally. And it’s been a really great opportunity to hear what people are doing. And, I think one of the benefits but also maybe one of the problems with doing this podcast is we have all these really great ideas of things that we want to do in our classes and no time to do it. Because, every time we interview someone, we think, “Oh, wow, let’s do that too.” And I think we’re in a constant cycle of redevelopment, which is good, but at the same time, I get like maybe a little too excited about all the cool stuff we hear about.

John: Yeah, and I do the same. I had students write a textbook last time based on hearing about open pedagogy, and quite a few other projects like that.

KIM BENOWSKI: Hi. I hope you can hear me; I hate talking on a microphone. So, I probably would not be the one podcasting, but I work with a media team and whatnot at my university. I’m Kim Benowski from Cornell, and a lot of the media work that I’ve done is with faculty… and there are many faculty that want to come prepared. So, they often want to pre-write a script; they want all the questions and such. I’m wondering how you deal with that, because in my experience, when we’re making videos, the unscripted is often so much better, more authentic and genuine, especially for a podcast. I was wondering how you handle that and if there are certain things that you do to coach your faculty in advance, like, “When you come, expect X, Y, and Z.” I ask them not to prepare… if you want to bring bullet points, that’s great… but how do you approach this for the podcast?

John: That’s a really good question. What we do, basically, is we share a Google Doc with them with questions that we’d like to address and we leave it editable so they can modify that if there are things they’d like to emphasize that. We tell them we we want to keep it conversational. Many times people bring notes and sometimes they start reading from the script. And it doesn’t sound quite as good. So, we discourage that. And if they start reading from a script, what do you normally do?

Rebecca: Then I ask a really like, bizarre question that’s not on the script. That’s my job.

John: There have also been a few times when we said, "That sounded like you were just reading from the script. Let's redo that." One of the things we tend to do to put people at ease, though, is we tell them that because this isn't live, we edit it thoroughly. And if there's something you said that didn't sound good, just say it again, just start over. And we've had podcasts that were an hour and a half or an hour and 40 minutes edited down to 38 minutes with the start-overs removed. And we're not perfect in terms of our presentations, and for many of our guests… it's the first time they've done this type of thing. So there's lots of "ums," there's lots of breathing noises. There's lots of other things. There are people who will say "like," "you know," "sort of," "kinda like" all the time, and we just simply remove all that before it goes out.

Rebecca: Yeah, and in case you haven't noticed, we're not very polished. But, when you listen to the episode that will come out, it'll sound way more polished.

John: …and shorter.

Rebecca: John’s really good at doing that. But also, if you don’t like talking and being recorded, neither do I. I’m actually quite introverted and really hate this. But, it’s possible you can do it.

TRACY MENDOLIA-MOORE: Tracy with Western University. My question is: "How much time are you investing after the podcast… in the editing? Like, on average… I'm sure there's more or less, but on average, how much time are you investing in that?"

Rebecca: Too much.

John: Too much. On average, it’s probably about 20 hours a podcast.

JUDIE: Would you repeat that, please?

John: On average, it’s probably about 20 hours a podcast.

Rebecca: But, that’s because John is like obsessive. The average person would never edit it to that extreme.

John: But that also includes generating the transcript and cleaning up the transcript as well.

Rebecca: While we’re getting over to the next question, do you want to talk a little bit about our setup and how to deal with some of the noise?

John: If there’s basic noises like a room hum or static, there’s noise filters.

Rebecca: What about people who pop their Ps all the time, John, like your co-host?

John: Yeah… So, if you look at the microphone there, you'll notice that little thing at the top. That's a pop filter, so that when people…

Rebecca: …pop their Ps…

John: …like that directly into the microphone, that cuts it down a little bit. And the rest is just cutting out a little bit of the initial tone and dampening it down and softening it a bit.

Rebecca: And if you want to annoy your co-hosts, you make sure that you have lots of annoying sentences that have a lot of pops in them.

John: And another thing we were having problems with at times is when the microphones were on a table like this, people would tap the table or bump the table or drop things on it. So, we have shock mounts on the microphones, so that they're all suspended, basically, in elastic.
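
The noise handling described here happens inside an audio editor, but the same basic ideas can be scripted. As a rough illustration (not the hosts' actual workflow), here is a minimal Python sketch using the pydub library; the filenames and the 80 Hz cutoff are assumptions for the example:

```python
from pydub import AudioSegment
from pydub.effects import normalize

# Load one speaker's raw track (filename is hypothetical).
raw = AudioSegment.from_file("guest_track.wav")

# A gentle high-pass filter around 80 Hz trims low-frequency room rumble
# and softens thumps from a bumped table, similar in spirit to a basic
# rumble/noise filter in an audio editor.
cleaned = raw.high_pass_filter(80)

# Bring the track up to a consistent overall level before detailed editing.
cleaned = normalize(cleaned)

cleaned.export("guest_track_cleaned.wav", format="wav")
```

Pop removal and de-reverb are usually still easier to do by hand in a full editor.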

TAYLOR KENDRICK: Hi, this is Taylor Kendrick from Samford University. Thank you all for hosting. I was curious about when your very first podcast went out. You said you had a very good response. What were you doing to get the word out… "Hey, we're here"… so someone would listen?

John: We shared it on our local campus email, we’ve got about 1200 people on our email list and we also shared it in a SUNY-wide Facebook Workplace group… so that all of SUNY has access to that. And it also got shared by some people in SUNY, who put it out in news releases, and so on. And from there, it just sort of spread. We’ve posted on Twitter, and we have a Facebook group. And so we shared it on social media, and it just gradually kept getting bigger and bigger.

Rebecca: I mean, it started off a little slow, but it has grown pretty rapidly since then. So, we talked a little bit about guests who have never been recorded before and don't always know how to have their space set up. So, John, do you want to talk a little bit about some of the things that we do for that?

John: Sure. We do send out suggestions to people to use the best mic they have available and to try to make sure they have a solid network connection. We remind people not to be rustling papers when they’re talking, or if they’re using the laptop microphone, which we discourage… but if they’re using their laptop microphone, we ask them not to be typing or scrolling on the laptop when they’re doing it because then you get this dragging sound and so… And some guests, we had to remind 10 or 12 times to do that, because they put some notes up on the computer, and they were scrolling with a touchpad…

Rebecca: …you mean I should do that right now?

John: And that would be an example. But basically, there are other issues. We had a podcast not too long ago where we had someone who was outside, and we had someone else who was in a new apartment. With the person outside, we were getting wind gusts coming in and a bird behind them. And the person who had just moved into a new apartment ended up having bare walls and a bare floor, and it was like an echo chamber. So, it was an interesting challenge to clean all of that up. There are some nice de-reverb filters you can use to do some of that.

Rebecca: So, we try to remind guests, especially if they're remote, to find a space that maybe has carpeting or some other absorbing materials around to make the space sound a little bit better. And then also to preferably have a microphone that's not attached to their headphones, so that we don't end up picking up sound carrying through their headphones.

PIERRE BORQUE: I have three questions. I'll ask them one at a time… I'll ask one, you can answer it, and then you can answer the second one. My name is Pierre Borque. I'm from the École de Technologie Supérieure of the University of Quebec provincial system in Montreal. So, my first question is: Do you have any idea of how large your audience is, and how do you know that?

John: We get download statistics, and we're generally getting about 3000 downloads a month right now… or a little over. I think the last month it was 3300 or so. So, it's grown quite a bit.

Rebecca: We also have pretty good traffic on the website, too, but I don’t have the latest stats on that.

PIERRE: My second question is, how many podcasts have you done?

John: We just released our 108th, which is Michelle’s podcast on Neuromyths…

PIERRE: So, how do you generate new content? Are you sort of… the same subjects keep sort of coming back? What’s your strategy for generating new and interesting content?

Rebecca: We find people that we want to talk to. [LAUGHTER]

John: And we also look at Twitter to see what people are posting about. When new books come out, we look at that… we look at reviews… we look at The Chronicle to see interesting studies that people have done or interesting books that are being posted or talked about or interesting issues. We also look at Inside Higher Ed, and we're getting more word of mouth, where people are recommending people as possible guests to us as well.

Rebecca: I’m pretty sure our attendance at this conference was a scouting adventure. [LAUGHTER]

PIERRE: My third question is about administrators, of which I used to be one up until very recently. Have you identified to your administrators any impact or any benefits to your own institution of hosting this podcast? Has it helped students or faculty? Are there specific benefits that you've cited to your own administrators from hosting this podcast? I would like to see some examples.

John: We hear from lots of people about changes they've made in their classes, and they sometimes talk about how it's impacted their teaching. The evidence on that, in terms of the feedback cycle, is not as complete as we'd like. But that's true with most of the workshops that we've been doing. I think the main thing is we're reaching faculty who we otherwise hadn't been reaching. And that has also oftentimes made them come in for other workshops when they can.

Rebecca: It’s a little challenging to breakdown that specific data from the kind of stats that we can get from the each episode because it just kind of regional data. So it doesn’t tell you: “This is a person from SUNY-Oswego.” But, we’re able to make some guesses about where they’re coming from.

John: We’ve had at least two or three people said that they became interested in doing open pedagogy project because of the podcast we did with Robin DeRosa, and they’re doing them this semester. Actually, two or three people mentioned that specifically, but we have now nine new faculty doing open pedagogy project as part of a SUNY-wide grant. But, I think that podcast inspired at least some of them to consider doing that. And I know Michelle’s podcast on retrieval practice has induced more people to consider doing more work with retrieval practice in various forms. And people do come up to us and tell us about that or send us emails about that. And we do see it in other workshops, where they’re talking about how they’ve implemented some of these things.

Rebecca: And I think our administration really values it. I know that our Provost as well as our Diversity Officer have mentioned it to faculty that they’re considering hiring and those that are newly hired. We even, at new faculty orientation, had quite a few faculty come up to us. It was like, “Oh, we’re so glad to finally meet you.” Because they had been listening, which is a really bizarre experience, right? [LAUGHTER] Wait, you’ve been listening? What do you mean? Who are you? [LAUGHTER]

John: And it’s sometimes really strange at a conference when people come up and start talking to you about something. And then it will be obvious that they’re talking about a podcast episode. They listen, and they feel they know us because they’ve been listening to us for 100 hours or more.

Rebecca: …and our voices are familiar, yeah.

MICHELLE BAKER: Hi, my name is Michelle. I’m from Penn State University. And I’m wondering, on the technical side of things, what software do you use for all of the editing that you’ve been talking about? And my follow up question to that is you’ve talked a lot about reducing sound. I’m wondering do you also add sound effects and music and if you do, where do you find that?

John: In terms of adding it, that's an easy one. We licensed some music from one of the sites that provides these sources. And that's recorded and just fixed. We just do that for the intro and outro. In terms of editing, we use Adobe Audition for most of our editing, and we do have a campus license for that. In terms of other things related to software: when we have remote guests, it's a little more challenging, which is why we postponed that a little bit when we first started, because our network was not always stable, and our guests often don't have stable networks. So, we do end up with some drops of data, or sometimes people disconnecting, or the quality of the voice just fades away. What we used to do, up until about three or four weeks ago, was use Zoom for Voice over IP. And on our local side, we were using Audio Hijack, so that our voices would be recorded in our local mixer directly from our microphones, but we'd take the incoming voice from our guests as a separate channel, so we'd be able to edit our guests and us separately. We normally sounded pretty clean (unless there were carts going by or toilets flushing), but the guests' audio varied a bit depending on network speed and noise and other issues. We just recently moved to SquadCast. And the first time we used it was with a podcast with Kristen Betts and Michelle. What that does, basically…

Rebecca: Thanks for being a guinea pig.

John: …it’s a double-ender recording session. It’s a web based app. It records each end of the podcast separately to the local computer and streams it in the background. So, you get the highest quality audio from each end of the podcast. And you can have up to, I believe four different sites connecting at once. We used it with 3 in that one. So, Kristin Betts was on one channel, Michelle was on another, and we were on a third,

Rebecca: It starts getting a little complicated when you have more than four people. It's hard to follow who's talking. We've done a couple where there's a few more guests than that, and it's really challenging to edit. It's challenging to listen to. So, I think that's kind of the max for us. As we mentioned earlier, we started off as an experiment. We're well over 100 episodes now. So, clearly, it's not an experiment anymore. It's a thing we do. [LAUGHTER] And probably no one's more surprised about this than I am. And John just keeps saying, "It's growing, it's growing, we've got to keep doing it."

John: And we have had pretty steady growth. Each month it's gone up. We've been over 3000 for the last four months, and we're certainly on track for that again. We have listeners in every state. I remember there was that last state… it took about five or six months to get to, but we finally got it. I think it was… Arkansas.

Rebecca: It was. Yeah.

John: Arkansas was the last state. And when we finally got that person, it was great. And now we see a teaching center there actually has this on a website. We got notification of that recently. So we're now getting a reasonable number of downloads from every state, and we have listeners in over 100 countries.

Rebecca: Yep. And I think a lot of the evidence that we mentioned earlier is really anecdotal, and when faculty reach out to us or send us messages, we try to stash all of those stories and things so that we have something we can report back on. We're also really proud that, from the very, very beginning, our podcast has been accessible… meaning that we made sure that the website itself is accessible, but also that we have had transcripts since the very first episode. We think that's really important, and we've maintained that, and we continually improve the site and do things to increase the accessibility. Originally, we were using YouTube for those transcripts, and then a lot of human editing from there. But now we're using otter.ai, which actually comes with some capitalization and punctuation. [LAUGHTER]

John: Yeah. YouTube was really good in terms of its accuracy, but you just got a stream of words: it didn't identify the speakers, it didn't put in punctuation or capitalization, and it was a real pain just adding those things. Otter.ai is slightly less accurate, but it puts in all the punctuation and the capitalization… and it identifies speakers. It recognizes my voice; Rebecca still has to be trained again so that it will recognize her. But our guests come in as an unidentified speaker 1, unidentified speaker 2, and it's really easy to clean that up. It makes it much easier, and it's probably cut 30% off of the transcript editing time.

Rebecca: And I think otter.ai is free for 50 hours a month.

John: 50 hours a month… per Gmail account. And if you have multiple Gmail accounts, that makes it pretty large.
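
As a small, hypothetical example of the speaker-label cleanup John mentions, the generic tags in an automated transcript can be swapped for the episode's guest names with a few lines of text processing; the labels, names, and filename below are all made up, and this is not otter.ai's API:

```python
import re

# Map the transcript's generic speaker tags to this episode's guests
# (labels and names are assumed for the example).
speaker_map = {
    "Unidentified Speaker 1": "Guest A",
    "Unidentified Speaker 2": "Guest B",
}

def relabel(transcript: str) -> str:
    pattern = re.compile("|".join(re.escape(label) for label in speaker_map))
    return pattern.sub(lambda match: speaker_map[match.group(0)], transcript)

with open("raw_transcript.txt") as f:   # hypothetical exported transcript
    print(relabel(f.read()))
```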

LUVON HUDSON: Hi. Luvon Hudson from Central Piedmont Community College. And my question is simply… I don't know if you can recommend, or if you have advice around, maybe a sweet spot, if there is such a thing, for the length of a podcast. A lot of my faculty don't really like to sit long. So, I don't know if that translates into the same thing for podcasts as well. I've heard you say 38 minutes twice. I don't know if that's maybe your sweet spot, because I do know that transcription and things like that kind of add to that backend work. So, do you recommend that, or is that even a factor? Is it more just around the content and the quality of what you're talking about?

John: It varies a bit with that, but we generally schedule hour-long recording sessions. Sometimes, they go a little bit longer, but we try to keep the actual recording to an hour, including some conversation in the beginning, some setup and so forth. Most of our episodes are between 30 and 40 minutes. We do have longer ones, but the longer ones in general had a lot of really rich content that we just couldn’t cut or we wouldn’t want to cut. I don’t know what the optimal length is. And that’s one of the questions, actually, in the survey. We’re curious. But I know, I tend to prefer not to listen to podcasts that go over an hour. For me, most of the podcasts I enjoy the most are between 20 and 50 minutes, because that’s nice for a reasonable commute.

Rebecca: I’d say we have a lot of faculty that commute and they come from two different cities into Oswego and the shorter one is like a 30 or 40 minute commute. So trying to keep to that one I think is key for our local audience.

John: I should say that one thing that I do is I listened to all of my podcasts initially at 1.5 times speed, and now I listen to them at double speed. And it's really a little disconcerting when we talk to someone who I hear on other podcasts, and all of a sudden they're really slow speakers, because I've gotten used to hearing them at double speed.
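
For listeners who want to do the same thing with a downloaded episode, ffmpeg's atempo filter speeds up audio without raising the pitch. A minimal sketch (the filenames are placeholders, and ffmpeg is assumed to be installed):

```python
import subprocess

# "atempo=2.0" doubles playback speed without shifting pitch.
subprocess.run(
    ["ffmpeg", "-i", "episode.mp3", "-filter:a", "atempo=2.0", "episode_2x.mp3"],
    check=True,
)
```

Most podcast apps have a built-in speed control, so this is only needed for local files.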

MARIE BAMAS: Hi, I'm Maria from Middle Tennessee State University. When you guys started really getting into this and refining it and making what you had started out with better, in terms of the software or the hardware and your content, did you go to other conferences? Did you do most of your research on the interwebs? I mean, how did you refine it and get all the information and make it as good as it could be… like what it is today?

John: That’s a good question. All the above, except we haven’t really had that much formal training on this. Mostly, if I’m noticing a noise problem when I’m editing, I just do a search on the web, or I look at the help in Adobe Audition, or I’ll look at some of the LinkedIn learning discussions of how to do these things. There are so many YouTube videos on removing pops and clicks and other things. And there’s YouTube videos on pretty much every type of thing and I use those a lot when I’m trying to deal with a different problem that I haven’t dealt with before. Adobe Audition keeps getting better. One of the things that happens is, we mentioned the first podcast that were relatively short. The very first one, I think it probably took maybe an hour to edit because I wasn’t hearing a lot of the noise there. One of the things that happens is, the more you edit these things, the more you notice. The noise in the office we had lived with for years. So, it was just background noise that we didn’t notice. But when you start hearing it from headphones, and you start using better headphones, you can hear that noise much more clearly. And so you just become more adept at observing things and cleaning them up.

Rebecca: I think in terms of content, we got some yeses from people that we were surprised said yes. So, then we just started asking more people that seemed like a stretch, and then we kept getting yeses. So now it’s like nobody’s a stretch, we’ll just ask. Sometimes we get ignored. Sometimes we get yeses. Very rarely do we get nos. So it’s been really great.

TAYLOR: Hi, this is Taylor again from Samford University. Going back to the issue of what your shows are about and your content: was your original idea to do PD (professional development) specifically for your university, or on particular subjects, or was it always "These are people we want to talk to"? Because I've thought about doing PD specifically for my university, organized by topic versus by guest.

Rebecca: Yeah, it has always been specifically about teaching and learning. We run the Teaching and Learning Center on our campus, and this was meant so that, for some faculty, instead of going to workshops and things, this would be a supplement or an alternative… a more accessible, alternative format for folks who might need it. I think it's always been focused on being professional development specifically for faculty, although I think we have a mix of faculty, staff, and administrators who listen.

John: But, it’s primarily interviews with people who are very skilled in the specific thing that we’re talking about. So, we find people who are doing interesting things, interesting applications, or interesting techniques, and then we interview them. So, it is professional development, but it’s generally professional development using experts on that particular topic.

Rebecca: And I think we try to find a mix between people who are doing research on particular topics and faculty who are implementing things in their classes, so that we have examples as well as research to back some of those things up.

JUDIE: Do we have time for one more?

Rebecca: Yeah, I think we have time for one more.

LUVON: Typically at our school, we normally have to go through our communications and marketing department. Do you have any issues having to do that?

John: We didn’t tell anybody. We just started doing it. [LAUGHTER] And by the time we had a national and international audience, they were actually pretty pleased with it. So, I don’t think our Dean or Provost discovered it until we had been doing it for a few months. And they started hearing about it from other people.It’s gotten some good favorable reviews from the administration, but I think we found it easier just to do it without going through those channels.

Rebecca: Well, and then, actually, our communications office did a feature story on our hundredth episode. So I think we’ve got buy in now.

John: Yeah.

Rebecca: After 100 It’s a thing.

John: Yeah. So we did it. It worked and then we got the buy in.

Rebecca: Always. [LAUGHTER]

John: We may edit that part out. [LAUGHTER] But we did have our Provost on the podcast.

In that document that we shared with you, we have details on setting up your own podcast in terms of microphones and hardware… low-budget ways of doing this and more expensive ways of doing this. So, there are a lot of resources there. And if there's anything we can help you with, just send us an email and we'd be happy to give you some assistance.

Rebecca: And so we always wrap up by asking, what’s next? John, what’s next?

John: I’m going to DIsney World… I’m going to continue with the conference, go back and work with my students for the rest of the semester.

Rebecca: And I’m going to be on sabbatical in the spring. That’s what I’m going to do… and so look forward to some recordings with some guest hosts while I’m away. I’ll still be recording some, but we’re hoping that some of our previous guests will come in and guest host while I am away.

[MUSIC]

[APPLAUSE]

John: Thank you.

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

112. The Gig Academy

Over the last several decades, the proportion of classes taught by tenure-track faculty has decreased while student support services are increasingly being outsourced to third parties. In this episode, Tom DiPaola and Daniel T. Scott join us to discuss the impact of these shifts on students. Tom and Daniel are (with Adrianna Kezar) co-authors of The Gig Academy, Research Assistants at the Pullias Center for Higher Education, and Fellows at the Urban Education Policy PhD program at the USC Rossier School of Education.

Show Notes

Transcript

Rebecca: Over the last several decades, the proportion of classes taught by tenure-track faculty has decreased while student support services are increasingly being outsourced to third parties. In this episode, we examine the impact of these shifts on students.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together, we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

[MUSIC]

John: Our guests today are Tom DiPaola and Daniel T. Scott, two of the authors, with Adrianna Kezar, of The Gig Academy. Tom and Daniel are Research Assistants at the Pullias Center for Higher Education and Fellows at the Urban Education Policy PhD program at the USC Rossier School of Education. Welcome.

Tom: Thanks for having us.

Dan: Yeah. Thank you.

John: We’re glad you could join us.

Our teas today are:

Tom: Just for this interview, I busted out my favorite Jasmine pearl tea. So, I’m enjoying a nice cup of it while we chat.

Dan: And I have a coffee. So, a caffeinated cousin. [LAUGHTER]

Rebecca: I’m drinking English afternoon.

John: And I’m drinking black raspberry green tea. We invited you here to talk about The Gig Academy. Actually, before we talk about that… You’re both graduates of a SUNY school, aren’t you?

Tom: Yes, indeed.

Dan: Yeah.

Tom: SUNY Purchase… proud alums.

John: Did you know each other before you moved to USC?

Tom: We did, as a matter of fact. We both studied philosophy with some of the same folks at Purchase. So, we did know each other, though we've grown considerably closer in recent years.

Dan: Yes, We’ve really been following kind of parallel career paths.

John: And writing a book together would extend that a little bit further.

Dan: …a dream.

Tom: Indeed. [LAUGHTER]

John: In The Gig Academy, you talk about the replacement of long-term employment relationships with contingent labor. Could you tell us a little bit first about your own experience in academic labor markets?

Tom: We’re sort of in an interesting position. I am in my final year of the PhD. And so I’m at this point where all of these things that I’ve been reading and thinking about and hearing horror stories in the media and through organizing for years is finally the world I have to dive headlong into. And so trying to approach the job search in a measured way that was mindful of self-care needs and of the things that I know are controllable and not controllable. But, before this, I worked at Bronx Community College, actually, and I sort of lucked into that work. I didn’t go through a crazy competitive process to do it. And then coming at it from a faculty angle now is considerably more intimidating, but hoping for the best. Everyone I know is just sending out everything… everywhere they can… hundreds of applications… while they’re trying to complete their dissertations, which is fun.

Dan: Yeah, and I’m one year behind Tom, in terms of proximity to the labor market. So I’m feeling a bit less of that pressure at the moment, but getting ready. And on my path to this position here, between completing the BA at Purchase and joining Rossier I had a few different staff support type roles in different higher ed institutions. I worked at Purchase at Borough of Manhattan Community College and in those roles I was supporting student support programs. And so, in some ways, I was a part of this growth in support staff that we discussed a little bit in the book. Some of those positions were contract positions. And so that I felt that contingency as: “I know my contract is coming up and I need to line up another job.” But others were more permanent. So, I got to experience a good range of staff role proximities to contingency.

John: Could you tell us a little bit about the scale of the shifts in the academic labor market?

Tom: Sure. And I think it's important to note right out of the gate that one of the ways that we tried to approach this subject differently in this book is by trying to consciously move past the discussions exclusively about academic labor. And I think that we're at a time where the amount of adjunct exploitation is sufficient that it's becoming a household issue, where even non-academics know that this is a problem. And the conversation does typically tend to revolve around adjuncts and other forms of contingent faculty and PhDs who are out of work. But it's actually a much bigger thing. And that's sort of what we're trying to argue here… that this is part of a much larger restructuring project, both of the university and of society at large. And that's partially why we thought the term "Gig Academy" was apt, because it's talking about the entire post-secondary structure, and it's trying to link it to these other larger cultural shifts around how we value contingent work in society. So, it's important to note that, while I'm sure we'll spend a lot of time talking about academic labor, and that may be sort of what the audience is most interested in hearing discussions about, it's important to remember that it's everyone: the overwhelming majority of all non-managerial labor in higher ed is contingent, temporary, insecure, poverty-waged. And the reason it's important is because when it comes to talking about solutions and things to do about it, we have to look for all of the channels of solidarity that we have. And so that necessarily includes going outside of simply the precariat who are instructional labor. And we need to think much more broadly about what that kind of organizing could look like, because it's a question of power and what's happened to power. That's the initial comment I would make. Dan, do you want to jump in?

Dan: Yeah, we know one of the most cited statistics is that nearly three quarters of all instructional staff are now contingent labor, but the shift towards contingency has been occurring among all other forms of roles as well. For example, it's reported that 32% of office and administrative staff are now part time. And this movement towards part-time status and contingency happens to everyone, because the compensation for every role can be cheapened through reducing benefits, shifting to part-time status, reducing hours, and also combining multiple positions into one. And then the biggest problem, and the reason why we have these two kind of disparate statistics and a few other numbers throughout the book, is that there's not good data that has been collected about work in academia, whether it pertains to contingency, employment outcomes for PhDs, or the working conditions of other staff.

Tom: Yeah, there’s only so much information you can glean from the Bureau of Labor Statistics. And these are not sexy subjects of research. And folks aren’t necessarily interested in institutional research offices of aggregating this data because it could reflect poorly on them. In fact, one of the similarities you notice is in the way that this kind of mass casualization allows for a selective reporting of diversity statistics. So, institutions can give the race and gender breakdowns of their faculty in aggregate, and it looks like it’s a much more diverse workforce than it is in reality. And this is the same sort of thing that companies like Amazon do, who overwhelmingly have low wage workers of color and warehouses and overwhelmingly white male, highly paid tech workers working on the platforms, and they just combine those together to make it seem that these jobs are more equitably distributed than they are. And that’s part of how this consolidation of power over time has played out. And it’s part of this larger project of neoliberalism. And some folks are hesitant to talk about neoliberalism for understandable reasons, because it’s a word that is thrown around a lot casually and used with some, if not imprecision, at least without proper contextualization. And it’s a word that needs contextualization to talk about because it could mean anything from some cultural quality that you’re describing to mode of power or an ideological tendency or an organizational structure or chains of authority or a sense of identity… you could be talking about personal identity in terms of neoliberal tendencies. And so it’s really important to always specify upfront, when you want to invoke these concepts, what it is you’re actually talking about. So, here we’re looking at the political economy side of it, and how this interfaces with the history of higher ed, because it was in the 70s and 80s that we saw the rise of Reaganism and Thatcherism and we saw this broad disinvestment in the public sphere.

This is, in many ways, a well-trodden story that plenty of academics know well. But it's worth recapitulating, because this public disinvestment was happening on a large scale, and unions were ill-equipped to contest it because of the way that they had, in their own history, become somewhat exclusionary and focused on backdoor negotiations as opposed to rank-and-file strategies that actually mobilize the base and democratize the process. So, all of these things converged with new opportunities for universities to seek revenue through market mechanisms and other things that get broadly roped under corporatization. After the Bayh-Dole Act of 1980, which Dan could probably say a little more about, the consequence was that a lot of resources got shifted into higher-return-producing ventures around intellectual property, because they could capitalize on those things in ways they couldn't before, even when public money is used to produce these things. And so it changed a lot of the incentives, and institutions wanted the highest return for the time and energy that was being put in within any domain. So, for faculty (for star faculty, especially), the return to the institution for them teaching an intro-level class of 30 students is comparable to the return that they get when an adjunct does that, and an adjunct costs $3,000 a semester to teach that course and a tenured faculty member costs $150,000 a year and lots of benefits and other things… and so in redirecting a lot of the efforts, and as part of the scientific management revolution, where the point was to optimize production, we saw a lot of these shifts, and it had consequences for the power structures that increasingly guided these institutions. And so the important thing about calling this the "Gig Academy" is that even though 20 years ago Slaughter and Rhoades had their landmark work, Academic Capitalism and the New Economy, and that sort of introduced all of this new thinking about how neoliberal restructuring has changed higher ed, they were mostly focused on this kind of external profit-seeking, venture-seeking, financialization, the restructuring of some of the research, and those sorts of things. And they do talk about casualization, of course, and they do talk about the workers. But this has been 20 years of this being the norm. It's no longer the ascendant regime. It's actually the dominant mode through which all of this is done. And so we're in a dangerous new state of emergence around what this means, because it's changed at such a broad structural level that it warrants a new term to account for the ways that the relations of academic production have been comprehensively restructured.

John: You mentioned that this is not just in the professoriate, but it’s more general. And in the book, you talk a little bit about outsourcing of many activities that used to be done by full-time employees of the college to other businesses outside. Do you want to address that just a little bit?

Tom: On the point about outsourcing: David Weil has a good book about this phenomenon generally, called The Fissured Workplace. And part of how this operates is you take auxiliary functions, non-essential functions, and you find that the easiest way to optimize and render these things cost effective is to outsource to third-party contractors. And this has a number of benefits from the perspective of the institution's executive administration. For instance, with maintenance staff or food-service staff, housing staff, those sorts of workforces can be administered through third-party contractors (Cisco, Aramark, etc.) who manage their own hiring, who make their own schedules, who have their own internal protocols, etc. So, you're taking, in some cases, workers who before enjoyed the substantial benefits of being university employees for a long time. And there are lots of stories about institutions that have these historical ties to the community. For example, where we go, USC has always taken pride in this, and lots of local families have histories working for USC in non-academic capacities. And that was a community-sustaining way of having work and income and also participating in something that's larger, in some ways, than just a service job, because you're still part of this institution. You can get tuition remission and you can have access to the health care that other workers have and so forth. And so they've shed a lot of these things. Most institutions have shed those arrangements in favor of these blanket arrangements with third-party contractors who can just bring in an endless procession of part-time contingent workers to do that work. And there's very little risk of cross organizing in the way that they might fear. If you look back, for example, at the Justice for Janitors movement about 30 years ago, in the early 90s (and that actually happened to involve USC), it was the maintenance workers trying to unionize while the institution was trying to outsource that work in general. And what happened was, faculty and students and local community organizations and immigrant rights groups all came together, along with the unions and the maintenance workers themselves, and so it was this broad-based effort to resist part of that outsourcing. And ultimately, they were outsourced and they unionized, but even when they unionized, that power that they claimed couldn't be directed easily at the university, because now they negotiated with the labor contractor. A university that does this has washed its hands of that. It no longer has to concern itself with that. It no longer has to concern itself with whether these workers have access to health care or more than a poverty wage and so forth. They're not part of those immediate considerations. And so they do that as much as possible in order to fragment the campus, to make sure that power over these workforces is as centrally administered as possible in order to control cost and control risk. And so, yeah, the outsourcing of staff… we've even seen this with administrative staff… so, it's not just the service work that we traditionally think of around food and housing and maintenance and so forth. Even once you get into administrative staff and other knowledge work, you find these same things. So, it's really like looking at these patterns across every stratum of the workforce.

John: From the standpoint of students, one advantage of this is it keeps costs lower, but you also talk a lot about the costs of this to students. What are some of the negative impacts to students of having this contingent labor force in higher ed?

Dan: Yeah, so the increasing levels of contingency that staff, faculty, custodial staff, professional staff, all types of staff, basically, experience mean that they have lower bandwidth mentally… fewer resources to offer when engaging with students. From the faculty perspective, you can't necessarily hang around after class for a half hour talking to various students about their interests beyond what happened during class discussion if you have to run to go catch your other job at the other university, because the current one where you're working doesn't offer you a full-time role, or doesn't offer a role with enough pay, so that you have to work multiple full-time roles. And then, connected to what we were just talking about in terms of the outsourcing dimension, universities are outsourcing advising staff and other staff that perform interactive and supportive, engaging roles with students as well. And so with that comes an increasing formalization of those interactions. So that, instead of me being the academic advisor that you come to, and we're talking about your personal life, and maybe I'm sharing about mine a little bit, and there's this kind of interpersonal connection with a permanent staff member located physically at the university, instead, you're dealing with someone who might be working remotely to provide advising services and is basically just trying to make sure that they cross all their t's and dot all their i's to satisfy the requirements of their particular engagement with you so they can move on to the other several hundred students that they're responsible for in their caseload. And so, generally speaking, then, the increasing move towards contingency and outsourcing means that staff are less connected to the university and therefore less connected to students, and then they're also just dealing with priorities beyond making students feel like they really belong and connecting them in this deep sense.

Tom: And because this is in the context of a broader devaluation of teaching, it has consequences for the quality of instruction that often get pinned on adjuncts or other contingent faculty as lacking care when, in reality, the incentives are such that it's almost impossible to avoid certain things being diluted. For example, there's a lot of talk about academic freedom these days and what's happening to academic freedom, and people are scared of teaching X, Y, or Z because they're sounding the warning alarms about cancel culture or whatever the case may be. But, for contingent faculty, the concern is mainly getting rehired. And to the extent that you are part of a lecturer pool where you're interchangeable with a lot of other people, the folks in charge of hiring you semester to semester are likely to consult and put an overemphasis on things like student feedback and also passing rates, these simple kinds of metrics that they can look to to decide whether someone's worth rehiring, and those can be gamed, obviously. You can lower your grading standards and the complexity of your assignments. And you can avoid controversial topics that would benefit students to talk about… and that you want to talk about. But, you have these other concerns that understandably take precedence, and that's on top of all the burnout and general overwork and underpay. So, you're getting paid $3,000 for this semester, whether or not you come up with a really thoughtful critical pedagogical approach, or if you just use the cookie-cutter syllabus that they give you when they bring you on for that course. So, there's not a lot of opportunity to perform well and for faculty to self-actualize in that way, because those incentives are so misaligned. So, the learning suffers in that way, too. It's not just… although it's a huge piece of it… that you're losing personal connections to others and to people you learn from, and we know through educational psych studies that this is important. These relationships are how we learn. Learning, absent some kind of community of that learning, is usually much more difficult, which is at least partly why it's been tough to shift to a MOOC model of administering higher ed through these massive open online courses, where you can get generic competencies in things. And that's in part because your human brain needs these social connections in order to effectively learn… because you don't just learn and it's done. It's an ongoing process, obviously. And we have relationships to sustain those. I mean, I'm still close with my college advisors, and I went to a SUNY school. I feel like I was spoiled by that, by being right sort of in the middle of the institutional tiers, where the faculty weren't under publish-or-perish pressure and pedagogy and good teaching were valued by the institution enough that you could have these really meaningful experiences and form meaningful bonds, and you got advised by your actual professors. And it was very easy to develop a romanticized view of the academy when I was 18, based on what I knew of my philosophy and literature professors at SUNY Purchase. And then I saw both extremes once I started working at community colleges that were under-resourced, and I saw how much people struggled to make things work.
And even when they cared very deeply about what they were doing, and the students and everything, they were being spread so thin… and then at the other end, the elite upper crust of private research institutions, where teaching is not valued as much because it doesn't ultimately bring new returns to the university if you're a great teacher; it does if you get that extra grant or you publish that extra article and so on and so forth. So, the odd thing is, in a lot of cases, the higher up in the prestige of the university, the more likely you are to encounter questionable pedagogy because of these misaligned incentives, which isn't to say there aren't great professors everywhere. It's just that these are structural limitations.

Rebecca: We’ve talked a little bit about academic rigor potentially being influenced by contingent faculty because of incentives. But, it strikes me that we had a recent episode with Julie Martin, Episode 104, about social capital and how social capital is really important for first-generation students. So, can you talk a little bit about how contingency across staff and faculty impacts this group or this population of students more so than some other groups of students.

Tom: We could easily make a case around how, without these connections through the institution, students are worse off. They don't have sources of support and advice and connections. Students who are advised through this kind of highly efficient, Tayloresque process, where every activity has been unbundled from everything else… they don't have that network to resort to, and they can't get that advice. And for first-gen students whose parents did not go to college, obviously they're missing out on a lot of informal guidance that other students get, and so these institutional relationships can be really important substitutes for that. Whether or not you have social capital is a reflection of your class status or where you fall in the stratum. But, as a place of intervention, it doesn't actually help if the overarching economic and political structures are the same. There are a lot of well-intentioned interventions in higher ed designed to increase the amount of social capital that students have. And there's a lot of private funding behind these initiatives, for good reason, because it doesn't threaten the larger power and economic structures. Providing social capital can be helpful at a small level, but at a structural level, it's impossible to move the needle simply by trying to supplement social or cultural capital.

Rebecca: I think you misinterpreted what I meant, because I wasn’t implying that we should have interventions but rather that when the structures are taken down where faculty aren’t playing the role of an advisor or have this ability to be integrated into the structure more, in a role other than just teaching their class.

Tom: Oh, absolutely.

Rebecca: So… [LAUGHTER]

Tom: Sorry. No… I wasn’t… I was totally in agreement with you. And then I wanted to go the step further, because I think some folks could listen to that and say, “Well, the answer to that is to just create a separate platform where we put at-risk students in touch with people who are going to increase their social capital.”

John: The focus of that research that Julie Martin had done was basically on the degree of connections that students have with their fellow students, with faculty, and with the college in general. And what her research was basically showing is that first-generation students come in with much weaker knowledge of how colleges function, and much of that information is picked up through interactions that they may not have, and it raises their probability of dropping out, failing, or withdrawing from college. And so, basically, I think what Rebecca was arguing is that the impact of having more contingent staffing of colleges is likely to have a differential impact on first-generation students, who are less likely to be successful in completing the degree. And that's, I think, where this impact could be fairly substantial. Because what it means, basically, is that the people who potentially have the most to gain in terms of higher future income and careers are going to be placed most at risk, while those who come in from wealthier family backgrounds are more likely to be successful, because they come in with more of that knowledge from their past experiences.

Tom: Yeah, exactly. I totally agree. And I think that's basically what we argue in the book. I didn't mean to seem like I was pushing back on that argument. It's an important one, and it's definitely true. And I think it goes even further to some extent, because the increased likelihood that someone drops out for these reasons also falls disproportionately on students of color. And when that happens, those students are also more likely to have debt that they are then at risk of defaulting on, particularly if they didn't finish the degree. And we know there are lots of studies of the student debt crisis that show how this disproportionately falls on students of color, and students of color who weren't ultimately able to complete their degrees or who got roped into predatory for-profit schemes.

Dan: I feel like the takeaway is that whenever you reduce institutional supports for the most marginalized groups of students, whether it's first-generation students, which we both are, by the way (shout out to first-gen students), working-class students, racially minoritized students… yes, the most marginalized groups always end up suffering with the removal of these formal supports, because folks with capital can supplement their college experiences through college advisors (private advisors, I mean) and through other forms of third-party support, to help them gain knowledge and navigational understanding for gaining success through higher ed.

John: What solutions would you suggest? What can be done to remedy the situation?

Dan: I think one of the biggest things is for workers in higher ed, and workers in other industries, basically, to start applying their own agency and concern towards addressing these issues. We've seen increasing levels of unionization among workers in different industries, and especially in higher ed. But these trends are not going to reverse themselves, and the executive-level decision making that has contributed to them, in addition to the broad entrepreneurial mindset that is a part of American culture… these things are not going to just go away. And so it's important for workers to recognize that through organizing and developing collective power, we can start reshaping some of these trends. That seems, to me, like one of the most important dimensions.

John: You just suggested that unionization rates have been increasing. Is that the case for higher ed? I know there’s been at least some increase in graduate student unionization. But, is that true generally?

Tom: It is true generally. The impact could be modest, depending on which class of worker you're talking about. I think graduate students are significant to note, because they are the ones who seem to be really working to shift the paradigm around how and why we're organizing. And as we live in a sort of post-Janus world, where there are fewer structural legal protections around unions and unionizing and bargaining, there's been a deserved shift back to focusing on issues of power and how you actually accumulate enough raw collective power to compel institutions to act in ways that benefit the student body and that benefit the workforce, and not just the endowment and not just the board and the real estate interests that the institution may have. And so this is why graduate workers are becoming more militant and organizing more effectively around these social movement unionism principles that have a larger agenda than simply making the terms of our contract more attractive. It's moving beyond "Let's just talk about pay and insurance and compensation" and toward "Let's have bigger conversations around structural issues."

We gathered some strength, I think, from the K-12 teachers' unions and the really inspiring strikes and other actions that we've seen actually yield important wins for these folks. And we're starting to see the value of being able to actually throw a wrench into the gears of production itself in order to be heard and to have demands taken seriously and concerns taken seriously and to redistribute power. And it's important to look at the broader social trends around labor activism and how this is getting expressed in certain circles of higher ed, and we're trying to advance that conversation, in part because, for all of the controversy around unions, some people, particularly older folks who remember the decline of the union movement, have mixed feelings or bitter feelings about unions and how they act and what they do. At a really basic level, a union is just workers coming together to act collectively and exert leverage over their managers and employers, and thus over the conditions of their work. And so we're finding new and interesting ways to push those battles and have those conversations outside of conventional union strategies like 50% plus one elections. And we're focused more on power, which I think is really crucial, in part because of the way that this connects to the broader gig economy, and the way that we've normalized this idea of the independent contractor and this following your passion and everyone being their own brand and all these other ideological tendencies that end up just allowing these flexible labor markets to work more smoothly for those who are skimming rent off the top. And that's what we see… universities are the same sorts of platforms in a lot of different ways. And for something like Uber, the contingent labor force that keeps Uber running is a temporary solution. They see it as a temporary fix for a long-term game, which is just to automate everything, to have perpetually money-making robots roving the city who never need breaks and benefits. And so the drivers are really, to the extent that they're getting paid peanuts to do this work… all the maintenance costs are being offloaded onto workers, as opposed to being a responsibility of the employer, which is something we see in higher ed as well. Because, if you're an adjunct and you have to work at three different universities, teaching intro-level classes to make ends meet, you're shouldering all of the costs of the academic supplies you need and the car that you take to get from campus to campus and the gas that goes into that car and whether or not you have memberships with the necessary learning platforms to be able to interface with whatever student learning management system is being used. And so these are all micro ways that costs are being offloaded onto workers, and this is turning into a convenient form of control for the institution at large.

John: You’d advocate basically then a larger role for unions, and then would the unions be lobbying for perhaps less use of contingent labor?

Tom: I mean, sure… In the short term, there may be ways to try to compel institutions to improve both the working conditions and the pay and compensation for contingent workers. But the goal would be to really eliminate this by democratizing the power structure. And it's on all of us to do that, because the goal is to ensure that the decisions being made at the university are being made democratically and are being made by people who have the interests of students and scholarship at heart and not purely business or market interests. And to get there, we have to look well beyond the old structures of faculty governance. It's not going to get us there just to bring a few nominal adjuncts into governance meetings and curriculum committees and so forth. We need to fundamentally redistribute the power at universities that has been siphoned off in really small doses for so long. Because as the number of faculty in secure positions was dwindling, a lot of the responsibilities of faculty, in a kind of organic sense, were being shifted into administration. And so this is how we ended up in a situation where the number of tenure-track faculty was languishing while the number of PhD students brought in was spiking, and the number of contingent faculty was spiking, because there are all of those incentives. And for the faculty who have security, their main concerns are doing research and trying to do an increasing amount of work that they have to shoulder among an ever smaller population… around governance and searches and so forth. So they're all too happy to let the administration deal with hiring adjuncts and all these things. But over time, it's been this gradual relinquishment of power, to the point where tenured faculty have so little power they're afraid to even use it. And it should be the number one priority of anyone on the tenure track or with tenure to stand in solidarity with the contingent workers. Because that is the only way you can ultimately guarantee the longevity of academic freedom and all of the other rights that you enjoy, because you need that power. Without the power, you've got nothing. And so that's one way we want this book to function: to make folks realize that the kinds of artificial divisions that we see, among faculty who are on the tenure track doing the scholarly work versus those who are kept and cycled through various contingent positions… we absolutely need to bridge that gap. And it would behoove anyone with any power and security to join that fight. So, yeah, it's going to take organizing. Unions are an important part of it. We have to look beyond unions. We have to think about broad-based organizing through every possible vehicle that we can.

Rebecca: We always wrap up by asking, what’s next?

Tom: General strike. [LAUGHTER]

Dan: I’ll say, for the Delphi project, another avenue that we’ve been pursuing in terms of supporting shifts in the structures and political economy of higher ed institutions, is to recognize institutions who’ve made novel changes to support non-tenure-track faculty. And so we’ve been offering this award in recognition, which names their excellence and also provides a little bit of funding to them. We’ll be going into the third cycle of the Delphi award this upcoming year. Right now, we’ve been working to communicate the nature of changes that have been made for the last year’s winners. And so we’ve been recognizing that and then we’re also involved in working with graduate student union organizing, at USC specifically. So, that’s another big next for me personally. And then we have a couple grants to study, again, institutions that are making novel changes to transform the nature of teaching and learning.

John: Well, thank you for joining us.

Tom: Thank you. This has been a great conversation, and thanks for reading the book and inviting us on here.

Dan: Yeah, thank you so much.

Rebecca: Thank you.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.