118. Biases in Student Evaluations of Teaching

A growing body of evidence suggests that student evaluations of teaching are subject to gender and racial bias. In this episode, Dr. Kristina Mitchell joins us to discuss her recent study that examines these issues. After six years as the Director of Online Education at Texas Tech University, Kristina now works for a science curriculum publishing company and teaches part time at San Jose State University.

Show Notes

  • Chávez, K., & Mitchell, K. M. Exploring Bias in Student Evaluations: Gender, Race, and Ethnicity. PS: Political Science & Politics, 1-5.
  • Colleen Flaherty (2018). “Arbitrating the Use of Student Evaluations of Teaching.” Inside Higher Ed. August 31, 2018.
  • Disciplinary organization statements on student evaluations
  • Peterson, D. A., Biederman, L. A., Andersen, D., Ditonto, T. M., & Roe, K. (2019). Mitigating gender bias in student evaluations of teaching. PLoS ONE, 14(5). – A study that indicates that informing students of the bias in student evaluations mitigates the bias.

Transcript

John: A growing body of evidence suggests that student evaluations of teaching are subject to gender and racial bias. In this episode, we discuss a recent study that examines these issues.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together, we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

[MUSIC]

John: Our guest today is Dr. Kristina Mitchell. After six years as the Director of Online Education at Texas Tech University, Kristina now works for a science curriculum publishing company and teaches part time at San Jose State University. Welcome back, Kristina.

Kristina: Thank you.

Rebecca: Today’s teas are:

John: Diet Coke?

Kristina: Diet Dr. Pepper, actually. [LAUGHTER]

John: Oh… I’m sorry.

Rebecca: Switching it up. [LAUGHTER]

John: And mine is Prince of Wales tea.

Rebecca: I have Christmas tea today. I know, I switched it up.

John: Ok.

In one of your earlier visits to our podcast you discussed some of your work on gender bias in student evaluations. We’ve invited you back today to discuss your newest study, with Kerry Chávez, entitled “Exploring Bias in Student Evaluations: Gender, Race, and Ethnicity.” Could you tell us a bit about the origin of this new study?

Kristina: Well, one of the things that seems to be inevitable when someone publishes a study on bias in student evaluations is that there’s always a reluctance by some in the community to believe the results. And most often there will be some question about what was being controlled for, or how the selection was done, or the sampling, or the research design. So really, the first impetus was just to shore up the existing findings and continue to demonstrate the potential bias that might exist. But, in addition, there’s a real dearth of research on race in student evaluations. The research on gender bias in student evaluations is becoming more and more robust, but there’s not very much yet on race and ethnicity. And so we were presented with an opportunity that almost presented itself as a natural experiment: 14 identical online sections of the course, with a different professor in each one, of different genders, races, and ethnicities. So, we took it as an opportunity to shore up the gender literature and expand the race literature.

John: And so, the only difference in the course, was the welcome video, if I remember?

Kristina: That is the only difference in the course. Everything else about the course — the lectures, the assignments, and even the emails that the students received when they were corresponding with the course and instructor — was identical. We had a course coordinator, which was me, and I was sort of the behind-the-scenes person who was filtering through all the emails to make sure that the students were getting the same tone, the same style, everything the same about how they were interacting with their course.

John: And how long were these videos?

Kristina: The videos were just about three minutes in length. Everyone read an identical script that just told them the professor’s name and had a generic message about how they were looking forward to a good semester. It was a summer course, just a five-week course. And that was the extent of the students’ direct interaction with the professor in a way that wasn’t filtered through a course coordinator.

Rebecca: Although they all thought they were interacting with the instructor, right?

Kristina: Yes, they all were told that this instructor was theirs. And of course, the instructor was instrumental in the management of the course; we just made sure that the professor was not directly facing the students without it being filtered through a coordinator, just to make sure that each professor was responding with the same tone and the same information.

John: …which sounds like a lot of work for you.

Kristina: It was a lot of work for me. But fortunately, it really allowed us to control for literally everything. We controlled for absolutely everything that you could control for. When I was doing the research, when I was compiling all of the data and getting everything ready, I was just thinking to myself, “Surely there’s no chance that I’m going to find significance. All of this was for nothing; I’m going to have to publish a null result that could potentially undermine other people’s research on gender and racial bias.” I just thought, “There’s absolutely no way; we’ve controlled for far too much for there to ever be any bias.” So, it was just astonishing to find that even with all of that control, we still found a statistically significant difference. Even with a small sample.

Rebecca: Can you talk a little bit about how many students and sections were involved?

Kristina: So, there were 14 different sections, each with a different instructor and about 200 students per section. And the students enrolled in the sections, all at the same time, when registration opened. There wasn’t necessarily any reason to think that any particular section was characteristically different than any other section. They all kind of filled up about the same.

Rebecca: Did they know the instructor name and things ahead of time when they registered?

Kristina: They did. When they registered, they were able to see what the instructor’s name was. But considering that, once again, these sections opened up for registration at the same time, and these were intro classes that every student needs to take to graduate, we didn’t really think that there was any reason to believe that students would be drawn to any particular instructor, especially since it’s an online course.

John: When we talked about an earlier study, you mentioned that this was sort of like a jobs program for political scientists in Texas.

Kristina: We always joke that Texas, by requiring students to take two semesters of political science to graduate with a public university degree, passed what we call the “Political Science Professor Full Employment Act,” because it ensures that we will have many students needing to take our classes in Texas. Unfortunately, now that I’m in California, only one of those classes is required. So, it’s slightly less full employment, although I’m still getting to teach both online and face to face here in California.

Rebecca: Was there both a qualitative and a quantitative component to the current study?

Kristina: So, this one, we focused primarily on the quantitative component. In our earlier study, we spent a lot of time doing text analysis of the comments that we received. In this study, we didn’t do anything quite as rigorous as a full content analysis, in particular because the number of comments was so low. But we did review them, we looked through them, and we did code each one as a positive or a negative comment. And the reason that we did this is because there really shouldn’t have been any reason for any difference in comments whatsoever. Once again, other than the welcome video, students never directly interacted with a professor in a different way. So, for example, if a student emailed a professor and the professor needed to respond, the professor would tell me, as the course coordinator, the messaging that needed to go out… you know, the answer to the question that the student needed, but I would compose that in my own words. So that means that all of the responses would be filtered through the way that I would say it, as me, the course coordinator. So, there’s no difference in the kinds of interactions that students had with the content, the course, or the professor. And yet, we still found that women received negative comments, and men did not. One of the professors who was in the study was laughing and saying he was going to keep his incredibly positive review in his tenure file, because he was told he was the most intelligent, well-spoken, cooperative professor that the students had ever had the chance to encounter. And once again, those were my words. I was the good one. So, the professor just was laughing and saying he was going to include that in his promotion file, even though he didn’t do anything. Whereas for women, we saw comments like “She got super annoyed when people would email her” and “did not come off very approachable or helpful.” It was me, it was always me.
They were both hearing my words, but because they were filtered through someone of two different genders, they perceived them differently. And that’s really consistent with the literature that shows that students expect women to behave in nurturing ways: to be caring, to be helpful and friendly, whereas they view men as competent experts in their field.

John: In terms of the magnitude of the difference, how large was the average effect of the perceived gender of the instructor?

Kristina: So, when we look at just the overall average evaluation score between men and women, we saw about a 0.2 difference. On a scale of five, that may or may not be substantively important, and that’s a question that, of course, still remains: whether the 0.2 difference is important in a substantive way. But given that student evaluations are used in promotion, hiring, and pay-grade decisions, any statistically significant difference is concerning, especially in a situation like this where we controlled for everything. When we looked at the white versus non-white difference, just looking at the overall average, we didn’t find a significant difference. Those significant differences didn’t start popping up for ethnicity until we used an OLS regression and included final grades as a control there as well.
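The two analyses described here, a raw difference of means and an OLS regression that adds final grades as a control, can be sketched in a few lines of Python. Everything below is invented for illustration: the section scores, the grades, and the size of the gap are a toy version of the method, not the study’s data.

```python
# Toy version of the two analyses: a raw difference of means in
# evaluation scores, then OLS adding mean final grade as a control.
# All numbers are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def ols(y, xcols):
    """OLS via the normal equations (X'X)b = X'y, with an intercept
    column prepended. Returns [intercept, b1, b2, ...]."""
    n, k = len(y), len(xcols) + 1
    X = [[1.0] + [col[i] for col in xcols] for i in range(n)]
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
           for a in range(k)]
    xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    for a in range(k):                      # Gauss-Jordan elimination
        piv = xtx[a][a]
        xtx[a] = [v / piv for v in xtx[a]]
        xty[a] /= piv
        for b in range(k):
            if b != a:
                f = xtx[b][a]
                xtx[b] = [vb - f * va for vb, va in zip(xtx[b], xtx[a])]
                xty[b] -= f * xty[a]
    return xty

# Invented section-level data: overall evaluation score, an
# instructor-is-female dummy, and the section's mean final grade.
scores = [4.4, 4.5, 4.3, 4.6, 4.2, 4.3, 4.2, 4.3]
female = [0, 0, 0, 0, 1, 1, 1, 1]
grades = [82, 85, 80, 86, 82, 85, 80, 86]

raw_gap = (mean([s for s, f in zip(scores, female) if f]) -
           mean([s for s, f in zip(scores, female) if not f]))
coefs = ols(scores, [female, grades])

print(round(raw_gap, 2))   # -0.2: women rated lower on average
print(round(coefs[1], 2))  # female coefficient, controlling for grades
```

Because the invented grade distributions are identical across the two groups, the regression recovers the same -0.2 gap as the raw comparison; in real data the two estimates would generally differ, which is exactly why the controlled estimate matters.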

John: How did you measure the students’ perceptions of their instructors’ ethnicity and gender? While gender may often be correctly guessed by watching the instructor’s welcome video, ethnicity may not always be obvious. What did you do to assess this?

Kristina: Absolutely. So, it is a little bit more difficult to decide whether a student will know what ethnicity their professor is. So we did ask about both gender and ethnicity because, of course, gender isn’t always obvious either. We decided to show pictures of the professors to a group of students who were Texas Tech students, but who were not enrolled in any of the courses. We just showed pictures of the instructors and asked the students to tell us what they perceived the person’s gender to be, and whether they perceived the person to be white or non-white. And we used a threshold: if 60% of the students perceived the professor to be non-white, then we said, “Okay, we’ll count this person as non-white,” whether or not they identify that way. For example, we had one professor in the study who is a Hispanic man but has blond hair and blue eyes, and none of the students accurately identified his ethnicity. So, we didn’t count him as non-white in the study because the students perceived him as white.
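The perception-based coding rule described here can be written down directly. The 60% threshold comes from the conversation; the function name and the rater counts below are invented for illustration.

```python
# Sketch of the perception-based coding rule: an instructor is coded
# non-white only if at least 60% of the student raters perceived them
# that way. The function name and rater counts are invented.

def code_nonwhite(perceptions, threshold=0.6):
    """perceptions: list of 'white' / 'non-white' answers from raters."""
    share = perceptions.count("non-white") / len(perceptions)
    return share >= threshold

# A Hispanic instructor whom almost no raters perceived as non-white
# is coded white for the analysis, regardless of self-identification.
raters = ["white"] * 19 + ["non-white"] * 1
print(code_nonwhite(raters))  # False: coded as white
```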

John: Were the names informative in cases like that?

Kristina: In that case, the name perhaps could have been informative; he has a very long and complicated Venezuelan name, but it might not initially look to students like a Hispanic name. Students might see Garcia or Gomez and think “Hispanic person”; they might not see Sagarvazu and think “Hispanic person.” Other names that might give students more of a clue were those of our Asian faculty. Some of those names could potentially give the students a hint in advance of what ethnicity their instructor was going to be. But again, we don’t really think that students were choosing these online sections based on the professor’s name, especially because students were used to the idea of just taking introduction to political science online at Texas Tech University, and likely weren’t really thinking, “Which professor should I choose?”

Rebecca: So given these results, what should we be doing?

Kristina: You know, I have been saying for a long time that the use of student evaluations in hiring, tenure, promotion, and pay decisions should just be outlawed. It’s absurd that we’re still using them. I understand that there is a need to measure teacher effectiveness, especially in terms of how students are learning. So it’s really important to try and find alternate measures, because student evaluations of teaching are flawed for so many reasons; one being that students aren’t necessarily very good at evaluating their professor’s effectiveness as a teacher. Sometimes professors who are really challenging, and perhaps really getting the most out of their students, are also getting some low evaluations. But, most importantly, for employment law purposes, these are discriminatory. If women and faculty of color are being treated differently by these criteria, then we need to find a different way to evaluate them and…

John: You’ve made a good case here again, and I think this contributes to the evidence on that. What might you recommend that campuses do to provide evaluations of instruction?

Kristina: I think that’s a really great question. I think that we should start with exploring peer evaluations of teaching to see if those suffer from the same biases, because they may, and they might not be a better alternative. Other things that might be worth exploring are portfolio-based evaluation… so, allowing professors and teachers to tell their administration why they’re a good teacher, instead of looking for some objective measure. I think teachers and professors who are intentional with their practices would be able to put together a really successful portfolio that would show their administration that they are effective. There’s also some talk about using assessment-based measures: things like standardized testing or exit exams or student portfolios. Those might suffer from problems as well. And one thing that I’ve found, especially now that people in the law profession have started reaching out to me for my insight on these kinds of cases, is that it’s really difficult to show in a court case that we should get rid of a discriminatory practice if there’s not an alternative to that practice. So, what attorneys have told me is: “Yes, maybe they’re discriminatory, but if the university needs to measure teaching effectiveness, and we don’t have a good alternate way to do it, a court is likely to just let it stand.” So, I think it’s really important that our next move in the research agenda is to try and find out what practices might be able to measure effectiveness without suffering from the same bias.

Rebecca: I think that’s a really good point to help us understand the urgency of doing these things and coming up with alternatives, and what the real impacts are, rather than a small difference in pay or something people might write off. But, if these cases are going into lawsuits and courts are just letting the practice stand, even though you can demonstrate that it’s biased, then I think that makes it a little more urgent for people who might not be motivated otherwise.

John: And while a 0.2 difference may not seem like much, that’s often a good share of the range from the highest to lowest evaluations in departments. So, in terms of the rank ordering of people, that can make a very significant difference in the perceived quality of their teaching.

Kristina: Especially when departments sometimes use an “Are you above the mean or are you below the mean” approach… 0.2 could very well kick you above or below the mean in terms of your scores, which, you know, also seems like a really bizarre way to measure whether you’re effective… if you’re above average then you are, if you’re below average, then you’re not. I’m not really sure that that’s an adequate way to measure anything. But, one thing that we have seen is a couple of universities move toward a different way of evaluating teaching effectiveness. Ryerson University in Canada recently decided that student evaluations of teaching in their current form could no longer be used because of these discrimination issues. And a university in Oregon (I can’t remember if it was the University of Oregon or Oregon State) has just moved to a much more open format of teaching evaluations, where students aren’t just saying 2 out of 5 or 4 out of 5. Instead, they’re asked to provide a paragraph with some insight on effectiveness, and if the questions are worded appropriately, then maybe we can see some real useful feedback, because I know I found a lot of useful feedback in my student comments. Really open-ended comments, I think, can also lead to inappropriate things like comments on appearance or comments on personality, but directed prompts… “What would you change about the workload?” …those kinds of questions… might produce some really valuable feedback.
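The above/below-the-mean worry is easy to see with a toy calculation. All the scores below are invented; the point is only that a 0.2 penalty is enough to move an instructor from one side of a departmental mean to the other.

```python
# Toy illustration, with invented scores, of how a 0.2 bias can flip an
# instructor across an "above/below the departmental mean" cutoff.
dept_scores = [4.1, 4.3, 4.4, 4.5, 4.6]
mean_score = sum(dept_scores) / len(dept_scores)  # 4.38

true_score = 4.5               # hypothetical unbiased score
biased_score = true_score - 0.2

print(true_score > mean_score)    # True: above the mean
print(biased_score > mean_score)  # False: the bias pushes it below
```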

John: If the questions are on things that are fairly objective that students are qualified to evaluate, that could be helpful.

Rebecca: Sometimes students are really insightful on those things if you’re specific and start from an evidence-based practice, so that the practice itself isn’t what’s debatable, but how it’s implemented, or the scaffolding, or the timing. Those are all things that could be really helpful. And they often have good ideas about these things if you open up a dialogue with them.

Kristina: Exactly. And I think that using student evaluations in this way is helpful to those of us who teach, and I think that comes back down to: what is the purpose of student evaluations? Why are we doing them? If it’s to try and improve our teaching practices, then let’s use them for that purpose. Let’s ask students directed questions where they have a chance to tell us what they liked and didn’t like, and then let us filter those responses to improve what we’re doing. Instead, we’ve almost turned them into this gatekeeping mechanism to keep people from getting promotions, to keep people from getting hired. And it’s especially punishing to our adjuncts. As adjunct professors make up a larger and larger share of the teaching force, the fact that they could be not hired again, or offered fewer classes or no classes at all, just because of a 0.2 difference on their teaching evaluations is really concerning.

Rebecca: It’s also, in some ways, a way of advocating for making sure that we spend time in the classrooms with part-time faculty and know what is going on. Sometimes we reserve those classroom visits and informal feedback with our peers for only tenure-track faculty, rather than expanding that across part-time faculty as well. And I think we can all gain insight from seeing a wider range of teaching practices inside and outside our departments, across full-time and part-time faculty.

Kristina: And even letting our part-time faculty conduct some of these peer evaluations. Now that I’m teaching part time, I really see a difference in what it’s like to be part-time faculty. And it’s great in a lot of ways. It gives you a lot of flexibility. And it gives you a lot of time to have fun with your students. And it’s a challenge in a lot of other ways too. But I think that if we open the lines of communication between faculty and students, and between different types of faculty, and really nail that down as the purpose of student evaluations, it would help a lot in making them more useful.

John: One of the approaches that some departments have started to use in terms of peer evaluations is not to leave them too open ended, but to have very structured ones. And some of them involve very structured types of observations where you just record what’s happening at fixed time intervals in terms of who is participating, what is the activity, and so forth. And that, at least in theory, should provide a more neutral measure of what’s actually taking place in the classroom, and could also provide more insight into whether evidence-based practices are being used, which could lead to more positive developments in terms of how people are teaching.

Kristina: Yeah, I think that’s really interesting. I think sometimes it can be really difficult to give or receive a truly unbiased peer evaluation because it’s really easy to start saying, “Oh, the students looked like they were having fun.” What does that mean? That’s not really objective. But I think it’s also important to recognize that a 1 to 5 scale of students saying this teacher is effective is also not objective in any way. So, the idea of there being an objective measure of teaching effectiveness, I think we should move away from that idea.

Rebecca: That’s a lot of food for thought.

Kristina: A lot of tea for thought. [LAUGHTER]

John: That’s true.

But, this is coming from more and more directions now. Several disciplinary associations have issued statements recommending that student teaching evaluations not be used as primary instruments in promotion and tenure decisions. And I think we’re going to be seeing more of that, especially as the research base grows.

Kristina: And there is some good news for the listeners who might be wondering, in the meantime, what can we do about this? How can I help? One recent article describes a sort of small quasi-experiment in which the researchers gave their students some information about this research before having them fill out their student evaluation forms. They just briefly told the students that evaluations can sometimes be biased based on race, gender, or ethnicity, and they found that this was able to mitigate some of that bias. So, in the meantime, if we’re looking for ways to address this, it’s especially important for our allies who are white and who are men to be advocates… to take the time in their classes to say there’s evidence that these evaluations may be biased in favor of a certain kind of faculty member. If we can make sure that messaging is getting out there from the right people who can help, then we can start to mitigate some of that bias.

John: We’ll share a link to that study in our show notes.

Kristina: You know, of course, being a white woman myself, I am more comfortable and qualified talking about gender bias; that’s sort of my native territory. Hopefully we can get more faculty members of color to join us in this research agenda, because it’s meaningful for them as well… our research is starting to show that this bias exists for them, too, and there’s simply not enough discussion of that in the conversation. One thing that we did not publish in our study, because it was just a side question: when we were asking students what the perceived gender and race of the people in the pictures was, we threw in a question just for fun, asking them, “Do you think you would have difficulty understanding this professor’s English?” because one thing that we hear so many times from our colleagues with accents is that this comes up regularly in their evaluations. And what we found is that for our Asian faculty members, the vast majority of the students said, “Yes, I think I’ll have trouble understanding this faculty member’s English.” Some of our Asian faculty members speak with heavily accented English and some don’t. And interestingly, our Hispanic colleague that I mentioned earlier, with blond hair and blue eyes, has a very thick Venezuelan accent, and no students were concerned about being able to understand his English. So, I think these elements need to be brought into the conversation as well. And I hope people who are closer to that discussion, and for whom it might be more meaningful, will join in and start doing this research. If there are any co-authors out there, I’m happy to start a new study.

John: The effects you found for ethnicity were relatively weak compared to the effects for gender. But, with a larger sample size, you might be able to get more robust or stronger results on that.

Kristina: Absolutely. So in our difference of means test, ethnicity didn’t come out as significant. It did come out as significant in our regression, but the substantive effect was a little lower.

John: And you were unable to do interactions because of the size of the sample, right?

Kristina: We only had one non-white woman. And so I don’t think our statistical analysis program would have been very kind to us with only one observation in our interaction term.
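The cell-count problem behind this is easy to see with a two-way table. Only the total of 14 instructors and the single non-white woman come from the conversation; the other gender-by-ethnicity counts below are invented to fill out the table.

```python
# Why the gender-by-ethnicity interaction couldn't be estimated: one
# cell of the two-way table has a single instructor. Only the total of
# 14 and the lone non-white woman come from the conversation; the other
# cell counts are invented.
from collections import Counter

instructors = (["white man"] * 6 + ["white woman"] * 4 +
               ["non-white man"] * 3 + ["non-white woman"] * 1)
cells = Counter(instructors)
print(cells["non-white woman"])  # 1: far too few for an interaction term
```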

Rebecca: So we always wrap up, Kristina, by asking, as you know: what’s next?

Kristina: That’s a great question. My current position is in K-12 science curriculum. So I still teach part time, but I’m heavily involved in the curriculum world at the K-12 level now. And one thing that’s been really different is that K-12 teaching is definitely more dominated by women than higher education is, and I would love to start looking at how we can get our K-12 students primed to think about women and men as equal in the sciences, because if students see women as their high school teachers and then go to college and see men as their professors, that could potentially continue to exacerbate those biases. So, I’d really love to start doing some research and exploring how we can change our children’s attitudes towards women in the sciences from the ground up.

Rebecca: That sounds really interesting.

John: And it’s important work and that’s an area where we certainly could see a lot of improvements.

Rebecca: Well, thank you for joining us, as always an interesting conversation and many things for us to be thinking about and taking action on.

Kristina: Thank you. Always a pleasure to join.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

[MUSIC]

24. Gender bias in course evaluations

Have you ever received comments in student evaluations that focus on your appearance, your personality, or your competence? Do students refer to you as “teacher,” or use an inappropriate title like Mr. or Mrs. rather than “professor”? For some, this may sound all too familiar. In this episode, Kristina Mitchell, a political science professor from Texas Tech University, joins us to discuss her research exploring gender bias in student course evaluations.

Show Notes

  • Fox, R. L., & Lawless, J. L. (2010). If only they’d ask: Gender, recruitment, and political ambition. The Journal of Politics, 72(2), 310-326.
  • MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291-303.
  • Miller, Michelle (2018). “Forget Mentors — What We Really Need are Fans.” Chronicle of Higher Education. February 22, 2018.
  • Mitchell, Kristina (2018). “Student Evaluations Can’t Be Used to Assess Professors.” Salon. March 19, 2018.
  • Mitchell, Kristina (2017). “It’s a Dangerous Business, Being a Female Professor.” Chronicle of Higher Education. June 15, 2017.
  • Mitchell, Kristina M.W. and Jonathan Martin. “Gender Bias in Student Evaluations.” Forthcoming at PS: Political Science & Politics.

Transcript

Rebecca: Have you ever received comments in student evaluations that focus on your appearance, your personality, or your competence? Do students refer to you as “teacher,” or use an inappropriate title like Mr. or Mrs. rather than “professor”? For some, this may sound all too familiar. In this episode, we’ll discuss one study that explores bias in course evaluations.

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.
Today our guest is Kristina Mitchell, a faculty member and director of the online education program for the Political Science Department at Texas Tech. In addition to research in international trade and globalization, Kristina has been investigating bias in student evaluations, motherhood and academia, women in leadership and academia, among other teaching and learning subjects. Welcome Kristina.

Kristina: Thank you.

John: Today our teas are?

Kristina: Diet Coke. Yes, I’ve got a Diet Coke today.

[LAUGHTER]

Rebecca: At least you have something to drink. I have Prince of Wales tea.

John: …and I have pineapple ginger green tea.

John: Could you tell us a little bit about your instructional role at Texas Tech?

Kristina: Sure, so when I started at Texas Tech six years ago, I was just a Visiting Assistant Professor teaching a standard 2-2 load… so, two face-to-face courses every semester. But our department was struggling to address the need for general education courses. In the state of Texas, every student graduating from a public university is required to take two semesters of government (we lovingly call it the “Political Science Professor Full Employment Act”), and so what ends up happening at a university like Texas Tech, with almost forty thousand students, is that we have about five thousand students every semester who need to take these courses… and, unless we’re going to teach them in the football stadium, it became really challenging to try and meet this demand. Students were struggling to even graduate on time, because they weren’t able to get into these courses. So, I was brought in and my role was to oversee an online program in which students would take their courses online asynchronously. They log in, complete the coursework on their own time (provided they meet the deadlines), and I’m in a supervisory role. My first semester doing this, I was the instructor of record, I was managing all of the TAs, and I was writing all the content, so I stayed really busy working all by myself with that many students. But now we have a team of people: a co-instructor, two course assistants, and lots of graduate students. So, I just kind of sit at the top of the umbrella, if you will, and handle the high-level supervisory issues in these big courses.

John: Is it self-paced?

Kristina: It’s self-paced with deadlines, so the students can complete their work in the middle of the night, or in the daytime or whenever is most convenient for them, provided they meet the deadlines.

Rebecca: So, you’ve been working on some research on bias in faculty evaluations. What prompted this interest?

Kristina: What prompted this was my co-instructor, a couple of years ago, was a PhD student here at Texas Tech University and he was helping instruct these courses and handle some of those five thousand students… and as we were just anecdotally discussing our experiences in interacting with the students, we were just noticing that the kinds of emails he received were different. The kinds of things that students said or asked of him were different. They seemed to be a lot more likely to ask me for exceptions… to ask me to be sympathetic…. to be understanding of the student situation… and he just didn’t really seem to find that to be the case. So of course, as political scientists, our initial thought was: “we could test this.” We could actually look and see if this stands up to some more rigorous empirical evaluation, and so that’s what made us decide to dig into this a little deeper.

John: …and you had a nice sized sample there.

Kristina: We did. Right now, we have about 5,000 students this semester. We looked at a set of those courses. We tried to choose the course sections that wouldn’t be characteristically different from the others. So, not the first one, and not the last one, because we thought maybe students who register first might be characteristically different from the students who register later. So, we chose a pretty good-sized sample out of our 5,000 students.

John: …and what did you find?

Kristina: So, we did our research in two parts. The first thing we looked at was the comments that we received. As I said, our anecdotal evidence really stemmed from the way students interacted with us and the way they talked to us. We wanted to be able to measure and do some content analysis of what the students said about us in their course evaluations. So, we looked at the formal in-class university-sponsored evaluation, where the students are asked to give a comment on their professors… and we looked at this for both the face-to-face courses that we teach and the online courses as well. And what we were looking for wasn’t whether they think he’s a good professor or a bad professor, because obviously, if we were teaching different courses, there’s not really a way to compare a stats course that I was teaching to a comparative Western Europe course that he was teaching. All we were looking at was: what are the themes? What kinds of things do they talk about when they’re talking about him versus talking about me? What kind of language do they use? We also did the same thing for informal comments and evaluations. So, you have probably heard of the website “Rate My Professors”?

John: Yes.

[LAUGHTER]

Kristina: Yes, everyone’s heard of that website and none of us like it very much… and let me tell you, reading through my “Rate My Professors” comments was probably one of the worst experiences that I’ve had as a faculty member, but it was really enlightening in the sense of seeing what kinds of things they were saying about me… and the way they were talking about me versus the way they were talking about him. So again, maybe he’s just a better professor than I am… so we weren’t looking for positive or negative. We were just looking at the content themes… and the kinds of themes we looked at were: Does the student mention the professor’s personality? Do they say nice… or rude… or funny? Do they mention the professor’s appearance? Do they say ugly… pretty? Do they comment on what he or she is wearing? Do they talk about competence, like how well-qualified their professor is to teach this course? And how do they refer to their professor? Do they call their professor a teacher? Or do they rightfully call their professor a professor? These are the categories where we really noticed some statistically significant differences. We found that my male co-author was more likely to get comments that talked about his competence and his qualifications, and he was much more likely to be called professor… which is interesting because, at the time, he was a graduate student. He didn’t have a doctorate yet… he wouldn’t technically be considered a professor… and on the other hand, when we looked at comments that students wrote about me, whether they were positive or negative… nice or mean comments… they talked about my personality. They talked about my appearance and they called me a teacher. Whether they were saying she’s a good teacher or a bad teacher… that’s how they chose to describe me.

Rebecca: That’s really fascinating. I also noticed, not just students having these conversations, but in the Chronicle article that you published, there was quite a discussion that followed related to this topic as well, and in that there were a number of comments where women responded with empathetic responses and also encouraged some strategies to deal with the issues. But then there was at least one very persistent person, who kept saying things like: “males also are victimized.” How do we make these conversations more productive, and is there something about the anonymity of these environments that makes these comments more prevalent?

Kristina: I think that’s a really great question. I wish I had a full answer for you on how we could make conversations like this more productive. I definitely think that there’s a temptation for men who hear these experiences to almost take it personally… as though, when I write this article, I’m telling men: “You have done something wrong…” when that’s not really the case… and my co-author, as we were looking at these results and reading each other’s comments so we could code them for what kinds of themes we were observing… he was almost apologetic. He was like: “Wow, I haven’t done anything to deserve these different kinds of comments that I’m getting. You’re a perfectly nice woman; I don’t know why they’re saying things like this about you.” So, I think it helps to frame the conversation in terms of what steps we can take, because if I’m just talking about how terrible it is to get mean reviews on Rate My Professors, that’s not really giving a positive: “Here’s a thing that you can do to help me…” or “Here’s something that you can do to advocate for me.” So, I think a lot of times, men who are listening… maybe they’re feeling helpless… maybe they’re feeling defensive… what they need is a strategy. Something they can do going forward to help women who are experiencing these things.

Rebecca: I noticed that some of the comments in relation to your Chronicle article suggested ways to minimize your authoritative role to avoid certain kinds of comments, and I wonder if you had a response to that… I think we don’t want to diminish our authoritative roles as faculty members, but sometimes those are the strategies that we’re encouraged to take.

Kristina: I agree. I definitely noticed that a lot of the response to “how can we prevent this from happening” got into “How can we shelter me from these students,” as opposed to “How can we teach these students to behave differently.” I definitely think the anonymous nature of student evaluation comments, and Rate My Professors, and internet comments in general plays a role. You definitely notice when you go to an internet comment section that anonymous comments tend to be the worst ones… and so the idea is that what we’re observing isn’t that an anonymous platform causes people to behave in sexist ways; it’s that there’s underlying sexism, and the anonymous nature of these platforms just gives us a way to observe the underlying sexism that was already there. So the important thing is not to take away my role as the person in charge. The important thing is to teach students, both men and women, that women are in positions of authority and that there’s a certain way to communicate professionally. Student evaluations can be helpful. I’ve had helpful comments that helped me restructure my course. So, it’s a way to practice engaging professionally and learning to work with women. My students are going to work for women and with women for the rest of their lives. They need to learn, as college students, how to go about doing that.

John: Do you have any suggestions on how we could encourage that? These attitudes are part of the culture, and in individual courses the impact we have is somewhat limited. What can we do to try to improve this?

Kristina: Well, I’ve definitely made the case previously to others on my campus and at other campuses that the sort of lip-service approach to compliance with things like Title IX isn’t enough. I don’t know if at your institution there’s some sort of online Title IX training, where, you know…

John: Oh, yeah…

Kristina: …you watch a video

Rebecca: Yeah…

Kristina: … you watch a video… you click through the answers… it tells you: “are you a mandatory reporter?” and “what should you do in this situation?” …and I think a lot of people don’t really take that very seriously; it’s just viewed as something to get through so that the university can’t be sued in case something happens. So, I don’t think that that’s enough. I think that cultural changes and widespread buy-in are a lot more important than making sure everyone takes their Title IX training. In our work, I mentioned that we did this in two parts, and the second part looked at the ordinal evaluations. The 1-to-5 scale, 5 being the best… rank your professor on how effective he or she is… and not only are students perhaps not very well qualified to evaluate pedagogical practices, but once again we found that, even in these identical online courses, a man received higher ordinal evaluations than a woman did. And so what this tells me is that, as a campus culture, we should stop using student evaluations in promotion and tenure, because they’re biased against women… and we should stop encouraging students to write anonymous comments on their evaluations. We should either make them non-anonymous or we should eliminate the comment section altogether. Because if we’re providing a platform, it’s almost sanctioning this behavior. If we’re saying, “we value what you write in this comment,” then we’re almost telling students your sexist comment is okay and it’s valued and we’re going to read it… and that’s not a culture that’s going to foster a positive environment for women.

John: Especially when the administration and department review committees use those evaluations as part of the promotion and tenure review process.

Kristina: Exactly. I mean when I think about the prospect of my department chair or my Dean reading through all the comments that I had to read through when I did this research, I’m pretty sure that he would get an idea of who I am as a faculty member that, to me…maybe I’m biased… but to me, is not very consistent with actually what happens in my classroom.

Rebecca: It’s interesting… right, we talk about anonymity providing more of a platform for this to become present. But I’ve also had a number of colleagues share their own examples of hate speech and inappropriate sexual language when anonymity wasn’t a veil that they could hide behind, increasingly more recently. So I wonder if your research shows any increase in this behavior, and why?

Kristina: We haven’t really looked at this phenomenon over time. That’s just not something that we’ve been able to look at in our data, but I would like to continue to update this study. I definitely think that the current political climate is creating an atmosphere where perhaps people don’t feel that saying things that are racist or sexist is as shameful as they once perceived it to be. There’s still a big stigma against identifying yourself as a Nazi, or even Nazi-adjacent, but that stigma seems to be lessening a little bit. I don’t know that I’ve necessarily seen an increase in the kinds of behavior I’m observing from my students, but I will say that a student… an undergraduate student… gave me his number on his final exam this last semester, like I was going to call him over the summer. So, it definitely happens in non-anonymous settings too.

John: There have been a lot of studies that have looked at the effect of gender on course evaluations, and all that I’ve seen so far find exactly the same type of result: that there’s a significant penalty for being female. One of those, if I remember correctly (and I think you referred to it in your paper), was a study of a large collection of online classes, where they changed the gender identity of the presenters randomly in different sections of the course, and they found very different types of responses and evaluations.

Kristina: Yes, that was definitely a study that… I hate to say we tried to emulate, because we were limited in what we could do in terms of manipulating the gender identity of the professor… but I think that their model is just one of the most airtight ways to test this. I agree, this is definitely something that’s been tested before. We’re not the first ones to come to this conclusion… I think our research design is really strong in terms of the identical nature of the online courses. At some point… when I was talking about this research with a woman in political science who’s a colleague of mine… the question came up: how many times do we have to publish this before people are going to just believe us that it’s the case? The response tends to be: “Well, maybe women are just worse professors,” or “maybe there are some artifacts in the data that are causing this statistically significant difference.” I don’t know how many times we have to publish it before administrations and universities at large take notice… that this is a real phenomenon… that it’s not just a random artifact of one institution or one discipline.

John: It seems to be remarkably robust across studies. So, what could institutions do to get around this problem? You mentioned the problem with relying on these for review. Would peer evaluation be better, or might there even be a similar bias there?

Kristina: I definitely think peer evaluation is an alternative that’s often presented when we’re thinking of other ways to evaluate teaching effectiveness. Peer evaluation may be subject to the same biases. I don’t know that literature well enough off the top of my head, but I imagine that it could suffer from the same problems, in that faculty members who are women… faculty members of color… faculty members with thick accents, with English that’s difficult to understand… might still be dinged on their peer evaluations. Although we would hope that people who are trained in pedagogy and who’ve been teaching would be less subject to those biases. We could also think about self-evaluation. Faculty members can generate portfolios that highlight their own experiences, and say: here’s what I’m doing in the classroom that makes me a good teacher… here are the undergraduate research projects I’ve sponsored… here are the graduate students who’ve completed their doctoral degrees under my supervision… and that’s a way to let the faculty member take the lead in describing his or her own teaching. We could also just weight student evaluations. If we know that women receive 0.4 points lower on a five-point scale, then we could just bump them up by 0.4. None of these solutions are ideal. But I think some of the really sexist and misogynist problems, in terms of receiving commentary that truly sexually objectifies female professors… that could be eliminated with almost any of these solutions. Peer evaluation… removing anonymous comments… self-evaluation… and that’s really the piece that is the most dramatically effective in letting women experience higher education in the same way that men do.

Rebecca: So, obviously, if there’s this bias in evaluations, then there’s likely to be the same bias within the classroom experience as well. We just don’t necessarily have an easy way of measuring that. But if you’re using teaching strategies that use dialogue and interactions with students, rather than a “sage on the stage” methodology, I think that in some cases we make ourselves vulnerable, and that does help teaching and learning, because it helps our students understand that we’re not perfect experts in everything… that we have to ask questions and investigate and learn things too… and that can be really valuable for students to see. But we also want to make sure that we don’t undermine our own authority in the classroom either. Do you have any strategies or ideas around that kind of in-class issue?

Kristina: Yeah, I think that the bias against women continues to exist in a standard face-to-face class. One time, when I was teaching a game theory course, I was writing an equation on the board, and it was the last three minutes of class and we were trying to rush through the first-order conditions and all sorts of things… and I had written the equation wrong. As soon as my students left the classroom I looked at it and I went, “oh my gosh, I’ve written that incorrectly,” and so the next day, when they came back to class, I felt like I had two choices: we could either just move on and I could pretend like it never happened, or I could admit to them that I taught this wrong… I wrote this wrong. So I did. I told them: “Rip out the page from yesterday’s notes because that formula is wrong,” and I rewrote it on the board… and I got a specific comment in my evaluation saying she doesn’t know what she’s talking about… that she got this thing wrong… and it was definitely something that, while I don’t have experimental evidence that says a man who does the same thing won’t get penalized in the same way, to me it very much wrapped into that idea that women are perceived as less qualified than men. So whether it’s because we’re referred to as teachers, or whether it’s because the student evaluations focused more on men’s competence, women are just seen as less likely to be qualified. How many times have you had a male TA and the students go up to the TA to ask questions about the course instead of you? So, I definitely think it’s difficult for women in the classroom to maintain that authority while still acknowledging that they don’t know everything about everything. No professor could. I mean, we all think we do, of course… So, I think owning the fact that there are things you don’t know is important, no matter what your gender is. But I also try to prime my students: I tell them about the research that I do.
I tell them about the consistent studies in the literature that show that students are more likely to perceive and talk about women differently, because I hope that just making them aware that this is a potential issue might adjust their thinking. So that if they start thinking, “wow, my professor doesn’t know what she’s talking about,” they might take a moment and think, “would I feel the same way if my professor were a man?”

Rebecca: I think that’s an interesting strategy. We found that a similar kind of priming of students about evidence-based practices in the classroom works really well… and gets students to think differently about things that they might be resistant to… So, I could see how that might work, but I wonder how often men do the same kind of priming on this particular topic.

Kristina: I don’t know. That would be an interesting next experiment to run: if I were to do a treatment in two face-to-face classes and, you know, have a priming effect for a woman teaching a course versus a man, and see if it had any kind of different effect. I think a lot of times men perhaps aren’t even aware that these issues exist. So, talking about the way that women experience teaching college differently… if men aren’t having this conversation in their classroom, it’s probably not because they’re thinking, “oh man, I really hope my female colleagues get bad evaluations so that they don’t get tenure.” It’s probably just because they aren’t really thinking about this as an issue… just because, as a white man in higher education, you very much look like what professors have looked like for hundreds of years… and so it’s just a different experience, and perhaps something that men aren’t thinking about… and that’s why getting the message out there is so important, because so many men want to help. They want to make things more equitable for women, and I think when they’re made aware of it, and given some strategies to overcome it, they will. I’ve definitely found a lot of support in a lot of areas in my discipline.

John: …and things like your Chronicle article are a good place to start too… just making this more visible more frequently and making it harder for people to ignore.

Kristina: I agree. I think being able to speak out is really important, and I know sometimes women don’t want to speak out, either because they’re not in a position where they can or because they fear backlash from speaking out. So, I think it falls on those of us who are in positions where we can speak up to try and say these things out loud, so that the voices of women who can’t are still heard.

John: Going back to the issue of creating teaching portfolios for faculty… that’s a good solution. Might it help if faculty can document the achievement of learning outcomes and so forth? That would free you from the potential of both student bias and perhaps peer bias. So, if you can show that your students are doing well compared to national norms or compared to others in the department, might that be a way of getting past some of these issues?

Kristina: I definitely think that’s a great place to start, especially in demonstrating what your strategies are to try and help your students achieve these learning outcomes. I always still worry about student level characteristics that are going to affect whether students can achieve learning outcomes or not. Students from disadvantaged backgrounds… students from underrepresented groups… students who don’t come to class or who don’t really care about being in class… these are all students who aren’t going to achieve the learning outcomes at the same rate as students who come to class… who are from privileged backgrounds… and so putting it on a professor alone to make sure students achieve those learning outcomes, still can suffer from some things that aren’t attributable to the professor’s behavior.

John: As long as that’s not correlated across sections, though, that should get swept out. As long as the classes are large enough to get reasonable power.

Kristina: Yeah, absolutely. I think it’s definitely time for more evaluation of how useful these measures are. I know there have been a lot of articles… New York Times op-eds, and I think there was one in Inside Higher Ed… really questioning some of these assessment metrics. So, I think the time is now to really dig into these and figure out what they’re really measuring.

Rebecca: You’ve also been studying bias related to race and language, can you talk a little bit about this research?

Kristina: Yes, so this is a piggyback project. After I finished the gender bias paper, what I really wanted to do was get into race, gender, and accented English. Because I think it’s not only women who are suffering when we rely on student evaluations; it’s people of different racial and ethnic groups… it’s people whose English might be more difficult to understand. What we were able to do in this work is control for everything. We taught completely identical online courses. I didn’t even allow the professors to interact with the students via email. Like Cyrano de Bergerac, I was writing all of their emails for them over a summer course, so they were handling the course-level stuff, just not the student-facing things. They were teaching their online course, but they weren’t directly interacting with the students in a way that wasn’t controlled… and the faculty members recorded these welcome videos, which had their face… their English, whether it was accented or not… and I asked some students who weren’t enrolled in the course to identify whether these faculty members were minorities and what their gender was. Because what’s important isn’t necessarily whether the faculty member identifies as a minority or not; it’s whether the students perceive them as a minority… and even after controlling for all of that… controlling for everything… when everything was identical, I thought there was no way I was going to get any statistically significant results, and yet we did. We even controlled for the final grades in the course… for how well students performed… and the only significant predictors for those ordinal evaluation scores were whether the professor was a woman and whether the professor was a minority. We didn’t see accented English come up as significant, probably because it’s an online course.
They’re just not listening to the faculty members beyond these introductory welcome videos. But we did see something when we asked students to identify the gender and the race of the professors based on a picture. We asked the students: “Do you think you would have a difficult time understanding this person’s English?” and we found that, for Asian faculty members, without even hearing them speak, students very much thought that they would have difficulty understanding their English… and then we have a faculty member here who has blonde hair and blue eyes but speaks with a very thick Hispanic accent, and of the students who looked at his picture… none of them perceived that they would have a difficult time understanding his English. So, I think there are a lot of biases on the part of students just based on what their professors look like and how they sound.

John: Can you think of any ways of redesigning course evaluations to get around this? Would it help if the evaluations were focused more on the specific activities that were done in class… in terms of providing frequent feedback… in terms of giving students multiple opportunities for expression? My guess is it probably wouldn’t make much of a difference.

Kristina: I think, as of now, the way our course evaluations at Texas Tech University look is that students are asked to rate their professors on a 1 to 5 on things like “did the professor provide adequate feedback?” and “was this course a valuable experience?” and “was the professor effective?”… and that gives an opportunity for a lot of: “I’m going to give fives to this professor, but only fours to this professor,” even when the behaviors in class might not have been dramatically different. Now, this is also speculation, but maybe if there was more of a “yes/no”: “Did the professor provide feedback?” “Were there different kinds of assignments?” “Was class valuable?” Maybe that would be a way to get rid of those small nuances. Like I said, when we did our study, the difference was 0.4 on a five-point scale, and so these differences maybe aren’t substantively huge. Maybe it’s the difference between, you know, a 4 and a 4.5. Substantively, that’s not very different. So, maybe if we offered students just a “yes/no”: “Were these basic expectations satisfied?”… maybe that could help, and that might be something that’s worth exploring. I also definitely think that either removing the comment section altogether, or providing some very specific how-to guidelines on what kinds of comments should be provided… I think that’s the way to address these open-ended “say whatever you want” comments… “Are you mad?” …“Are you trying to ask your professor out?” Eliminating those comments would be the best way to make evaluations more useful.

John: You’re also working on a study of women in academic leadership. What are you finding?

Kristina: A very famous political science study, done by a woman named Jennifer Lawless, looked at the reasons why women choose not to run for office. We know that women are underrepresented in elective office… you know, the country’s over half women, but we’re definitely not seeing half of our legislative bodies filled with women. What the Lawless and Fox study finds is not that women can’t win when they run; it’s that women don’t perceive that they’re qualified to run at all. So, when you ask men, “do you think you’re qualified to run for office?” men are a lot more likely to say: “oh yeah, totally… I could be a Congressman,” whereas women, even with the same kinds of qualifications, are less likely to perceive themselves as qualified. So, what my co-author Jared Perkins at Cal State Long Beach and I decided to do is see whether this phenomenon is the same in higher education leadership positions. One thing that’s often stated is that the best way to ensure that women are treated equally in higher education is just to put more women in positions of leadership… that we can do all the Title IX trainings in the world, but until more women are in positions of leadership, we’re not going to see real change… and we wanted to find out why we haven’t seen that. So, you know, 56 percent of college students right now are women, but when we’re looking at R1 institutions, only about 25% of those university presidents are women, and the numbers can definitely get worse depending on what subset of universities you’re looking at. We did a very small pilot study of three different institutions across the country. We looked at an R1, an R2, and an R3 Carnegie classification institution. Our pilot study was small, but our initial findings seem to show that women are not being encouraged to hold these offices at the same rate as men are.
So what we saw was that… we asked men, “have you ever held an administrative position at a university?” About 60% of the men reported that they had, and about 27% of women reported that they had. We also asked, “Did you ever apply for an administrative position?” and only 21% of the men said that they had applied for an administrative position, while 27% of women said they had applied. Of course, it could be that they misunderstood the question… that maybe they thought we meant “Did you apply and not get it?”… but we also think that there may be something to explore here: when women apply for these positions, they get them. There are qualified women ready to go and ready to apply, but men may be asked to take positions… encouraged to take positions… or appointed to positions where there might be opportunities to say: “There’s a qualified woman. Let’s ask her to serve in this position instead.”

John: That’s not an uncommon result. I know that in studies of labor markets, starting salaries are often comparable, but women are less likely to be promoted, and some studies have suggested that one factor is that women are less likely to apply for higher-level positions. Actually, there’s even more evidence that suggests that women are less likely to apply for promotions, higher pay, etc., and that may be at least a common factor that we’re seeing in lots of areas.

Kristina: Absolutely. I definitely think that university administrations need to place a priority on encouraging women to apply for grants, awards, and positions of leadership, because there are plenty of qualified women out there; we just need to make sure that they’re actively being encouraged to take these roles.

Rebecca: Which leads us nicely to the motherhood penalty. I know you’re also doing some research in this area about being a mother in academia. Can you talk a little bit about how this impacts some of the other things that you’ve been looking at?

Kristina: Absolutely. The idea to study the motherhood penalty in academia stemmed from reading some of those “Rate My Professor” comments. At my institution, we didn’t have a maternity leave policy in place… so I came back to work two weeks after having my child and I brought him to work. My department was supportive. I just brought him into my office and worked with the baby for the whole semester… and it was difficult, it was definitely a challenge to try and do any kind of work while a baby is in a sling in front of your chest… but one of my “Rate My Professor” evaluations from the semester that I had my son mentioned that I was on pregnancy leave the whole semester and was no help. And this offended me to my core, having been a woman who took two weeks of maternity leave before coming back to work… because I wasn’t on maternity leave the whole semester, and in addition… even if I had been, what kind of reason is that to ding a professor on her evaluation? She birthed a human child and is having to take care of that child… that shouldn’t ever be something that comes up in a student comment about whether the professor was effective or not.

So what we want to look at are the ways in which women are penalized when they have children. Our data collection is very much in its initial stages on this project, but as we think through our anecdotal experiences: when departments schedule meetings at 3:30 or 4:00 p.m., if women are acting as the primary caregiver for their children (which they often are), this disadvantages them because they’re not able to be there. You have to choose whether to meet your child at the bus stop or to go to this department meeting. Networking opportunities are often difficult for women to attend if they’re responsible for childcare. Conferences have explored the idea of having childcare available for parents because, a lot of times, new mothers are just not able to attend these academic conferences… which are an important part of networking in most disciplines… because they can’t get childcare. At the Southern Political Science Association meeting that I went to in January, a woman brought her baby and was on a panel with her baby. So, I think we’re making good strides in making sure mothers are included, but what we want to explore is whether student evaluations will reflect differences based on whether students know their professor is a mother or not. How would students react if in one class I just said I was cancelling office hours without giving a reason, and in another class I said it was because I had a sick child or had to take my child to an event? That’s where we’re going with this project, and we really, really hope to dig into the relationship between the motherhood penalty and student evaluations.

Rebecca: Given all of the research that you’re doing and the things that you’re looking at, how do we start to change the culture of institutions?

Kristina: Well, I think we’re headed in the right direction. Like I said, I see a lot more opportunities at conferences for childcare and for women to just bring their children. I see a lot of men who are standing up and saying, “Hey, I can help. I’m in a position of power and I can help with this”… and, you know, without our male allies helping us… I mean, men had to give women the right to vote; we didn’t just get that on our own. So, we really count on allies to put us forward for awards. One important distinction that I learned about from a keynote speaker is the difference between mentoring and sponsoring. Mentoring is a great activity; we all need a mentor, someone we can go to for advice, someone we can ask for help, someone who can guide us through our professional lives. But what women really need is a sponsor, someone who will publicly advocate for a woman, whether that’s putting her in front of the Dean and saying, “Look at the great work she’s doing,” or writing a letter of recommendation saying, “This woman needs to be considered for this promotion or for this grant.” Sponsorship, I think, is the next step in making sure that women are supported. A mentor might advise a woman on whether she should miss that meeting or that networking opportunity to be with her child. A sponsor would email and say, “We need to change the time, because the women in our department can’t come; they have events where they need to be with their children.”

John: A similar suggestion appeared in a Chronicle article by Michelle Miller in late February or maybe the first week in March, where she made a slightly different point. Mentoring is really good… and we need mentors, but she suggested that sometimes having fans would be helpful: people who will help share information… so when you do something good, people who will post it on social networks and share it widely, in addition to the usual mentoring role. So, having those types of connections can be helpful, and certainly sponsors would be a good way of doing this.

Rebecca: I’ve been seeing the same kind of research and strategies being promoted in the tech industry, which I’m a part of as well. So, I think it’s a strategy that a lot of women, and their allies, are advocating for. Hopefully we’ll see more of that.

Kristina: I think the idea of fans and someone to just share your work is hugely important. I have to put in a plug for the amazing group: “Women Also Know Stuff.”

Rebecca: Awesome.

Kristina: It’s a political-science-specific website, but there are many offshoots in many different disciplines, and really it just gives you the chance, if you say, “I need to find somebody who knows something about international trade wars,” to go to this website and find a woman who knows something about it, so that you’re not stuck with the same faces… the same male faces… telling you about current events. So “Women Also Know Stuff” is a great place. They share all kinds of research, and they provide a place where you can look for an expert in a field who is a woman. I promise they exist.

Rebecca: I’ve been using Twitter to do some of the same kind of collection. There might be topics that I teach where I’m not necessarily familiar with scholars who are not white men. And so, I put a call out like, “Hey, I need information on this particular subject. Who are the people you turn to who are not white men?”

John: You just did that not too long ago.

Rebecca: Yeah, and, you know, I got a giant list and it was really helpful.

John: One thing that may help alleviate this a little is that we now have much better tools for virtual participation. So, if there are events in departments that have to be held later, there’s no reason why someone couldn’t participate virtually from home while taking care of a child, whether they’re male or female. Disproportionately, it tends to be women doing that, but you could be sitting there with a child on your lap, participating in the meeting, turning a microphone on and off depending on the noise level at home, and that should help… or at least it offers the potential to reduce this.

Rebecca: I know someone who did a workshop like that this winter.

John: Just this winter, Rebecca was doing some workshops where she had to be home with her daughter who wasn’t feeling well and she still came in, virtually, and gave the workshops and it worked really well.

Kristina: Yeah, I definitely think that’s a great way to make sure that everyone’s included, whether it’s because they’re mothers or fathers or just unavailable… and I think that’s where we look to sponsors… the department chairs… department leadership… to say, “This is how we’re going to include this person in this activity,” rather than it being left up to the woman herself to try and find a way to be included. We need to put people in positions of leadership who will actively find ways to include people regardless of their family status or their gender.

Rebecca: This has been a really great discussion, some really helpful resources and great information to share with our colleagues across all the places that…

John: …everywhere that people happen to listen… and you’re doing some fascinating research and I’m going to keep following it as these things come out.

Rebecca: …and, of course, we always end asking what are you gonna do next. You have so many things already on the agenda but what’s next?

Kristina: So next up on my list is an article that’s currently under review that looks at the “leaky pipeline.” The leaky pipeline is a phenomenon in which women, like we were saying, start in the same positions as men do, but then they fall off the tenure track… they fall out of academia more generally… they end up with lower salaries and lower positions. So, we’re looking at what factors, what administrative responsibilities, might lead women to fall off the tenure track. We already know that women do a lot more service work and a lot more committee work than men do, so we’re specifically looking at some other administrative responsibilities that we think might contribute to that leaky pipeline.

Rebecca: Sounds great. Keep everyone posted when that comes out and we’ll share it out when it’s available.

Kristina: Thanks.

John: …and we will share in the show notes links to papers that you’ve published, working papers, and anything else you’d like us to share related to this. Okay, well, thank you.

Kristina: Thank you.
[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts, and other materials on teaforteaching.com. Music by Michael Gary Brewer.