54. SOTL

As faculty, we face a tradeoff between spending time on teaching and on research activities. In this episode, Dr. Regan Gurung joins us to explore how engaging in research on teaching and learning can help us become more productive as scholars and as educators while also improving student learning outcomes. Regan is the Ben J. and Joyce Rosenberg Professor of Human Development in Psychology at the University of Wisconsin at Green Bay; President-Elect of the Psi Chi International Honor Society in Psychology; co-editor of Scholarship of Teaching and Learning in Psychology; co-chair of the American Psychological Association Introductory Psychology Initiative; and Director of the Hub for Intro Psych and Pedagogical Research.

Show Notes

John: As faculty, we face a tradeoff between spending time on teaching and on research activities. In this episode, we explore how engaging in research on teaching and learning can help us become more productive as scholars and as educators while also improving student learning outcomes.

[MUSIC]

Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

[MUSIC]

Rebecca: Our guest today is Dr. Regan Gurung, the Ben J. and Joyce Rosenberg Professor of Human Development in Psychology at the University of Wisconsin at Green Bay; President-Elect of the Psi Chi International Honor Society in Psychology; co-editor of Scholarship of Teaching and Learning in Psychology; co-chair of the American Psychological Association Introductory Psychology Initiative; and Director of the Hub for Intro Psych and Pedagogical Research. Welcome.

John: Welcome.

Regan: Thanks a lot, Rebecca and John.

John: Our teas today are…

Rebecca: I’m drinking Prince of Wales today.

Regan: Alright.

John: I’m drinking ginger tea.

Regan: Ooh, now you’re making me want to. [LAUGHTER]

John: We’ve invited you here today to talk about research in the scholarship of teaching and learning, or SOTL. You’ve conducted a lot of research on teaching and learning as well as research within your discipline. In most disciplines, there has been an increase in journals devoted to teaching and learning and in the research being published there, but it hasn’t reached everywhere yet. SOTL research is often not discussed in graduate programs and is sometimes devalued by campus colleagues. Why does that occur?

Regan: So, I think there are multiple reasons, and I’m going to start with the devaluing. I think there’s a lot of uncertainty about what exactly it is. On one hand, when people say “the scholarship of teaching and learning,” very often, if it’s somebody who hasn’t really read up on it recently, the sense is: oh, you know, that’s research on teaching; that’s not as good as your regular research. Now, I think that’s a misperception. Once upon a time, and here I mean maybe even 15 years ago, there was some scholarship of teaching and learning that wasn’t done very well, and I think people have heard about that in the past and that’s why there’s that knee-jerk reaction. Far too often it’s seen as something that’s not as rigorous, perhaps, or not done in the same way, and most of that is wrong. What I like to tell folks who see it that way is: if you think the scholarship of teaching and learning is not rigorous, well, you haven’t tried to submit something to a journal recently. I co-edit a journal on the scholarship of teaching and learning in psychology and I actually see some people submit poor work and I send it right back; I do the classic desk rejection and I say, look, this is just not good enough. So my favorite tip for “How do you write for a scholarship of teaching journal?” is very simple: just like you write anything else. There’s a lot of baggage, but I think that, as you alluded to, John, it has changed more recently, and part of what you notice now, or what I’ve been seeing, is that this kind of work, this kind of examination, is being called different things. For example, a term that I’m hearing more and more often is DBER: discipline-based education research. And I’m hearing this come out of medical schools and engineering schools and social work schools and many professional programs where they’re doing DBER, which is essentially what the scholarship of teaching and learning is.
So, I think because of that baggage with the term, people are calling it different things but in general the work is getting much more rigorous.

John: Excellent, and if changing the name is sufficient to do that, it’s a valuable step.

Regan: I think that’s why, when I talk about it I like to talk about it as: “Do you want to know if your students are learning? Do you want to know if your teaching is effective?” Well, then you should do some research on it. You can call it what you want. I started really calling it pedagogical research because that’s what it was, but it’s truly a rose by any name.

John: And that’s something that Carl Wieman has emphasized.

Regan: Absolutely, yup.

John: In the sciences, you test hypotheses and there’s no reason we couldn’t do the same thing in our teaching.

Regan: Exactly.

John: And that’s starting to happen, or it’s happening more and more.

Rebecca: In some disciplines, the scholarship of teaching and learning is not accepted as being part of their tenure and promotion file, for example. What would you recommend faculty do in a department like that if they really want to get started in SOTL?

Regan: Well, so, Rebecca, let me take you a half step back.

Rebecca: Yeah.

Regan: When you say “in some disciplines it isn’t as accepted,” what has surprised me is that most disciplines have actually been doing the scholarship of teaching and learning and publishing it for the longest time. I mean, if you take a look at chemistry, it goes back, gosh, seventy years or so. Almost every discipline out there has a journal that publishes the scholarship of teaching and learning, but, and here’s the big but: most of us in our normal training never run into it. So, I’ll take my own case. In psychology, the journal Teaching of Psychology has been around for 46 years, yet all through grad school, all through my post-doc, I never even knew the journal existed. Why? Because the programs that I went through weren’t focused on teaching, and the individuals—wonderful as they may be—who I worked with didn’t do that kind of work, so they didn’t know about it. So I think that’s a really important fine-tune there: there is a journal in almost every discipline—almost every discipline—for the scholarship of teaching and learning. So, it’s just a question of discovering it… it’s a question of finding it. Now, that said, where can they start? I think I can answer your question from a conceptual level and from a practical level, so I’ll start with the practical. The easiest place to start: there are lots of compilations of how to do it. For example, I think both of you have my website. On my website I have a simple tab called SOTL. On that tab is a list of places to get going, and I’ve organized it so that there’s a brief introduction to SOTL, there are journals, there are resources, there are little handouts. So, if a faculty member has even ten minutes, go to my website, hit SOTL, scroll through. That’s the more practical, that’s the easiest way to get started. From a conceptual standpoint, it really starts with the question: what aspect of your teaching or your student learning are you curious about?
John, I know you do some work in large-class instruction in economics. Why is this assignment not working? Can I get my students to remember certain concepts better if I change how I present information? It starts with a question. And you don’t have to read anything, you don’t have to look at any manual. If you look at your class and you go, “Hmmm, why isn’t this working, or why isn’t that working?” That’s where it begins, and from there you follow the same route that we always do: go look at what’s been published on it, fine-tune your question, design, think about what you want to change, and so on and so forth. I think it’ll help if I give you my working definition of the scholarship of teaching and learning. When I think about it, I think of SOTL as encompassing the theoretical underpinnings of how we learn. More specifically, I see it as the intentional and systematic modification of pedagogy and, here’s the important part, the assessment of the resulting changes in learning. So that’s the key: you intentionally, you systematically modify what you’re doing, and then you measure whether it worked or not. That’s it. I say that nonchalantly; there’s a technique, there’s a robustness to it, but at the heart, where do you start? You start by asking the question.

Rebecca: I think one of the things that I hear you saying is that this is not much different from someone who has a really reflective teaching practice—they’re doing it, but not in that systematic way?

Regan: Yeah, absolutely right. There’s a term called scholarly teaching, and in this literature a distinction is made between scholarly teaching and the scholarship of teaching and learning. The distinction is simply that a scholarly teacher is reflecting on their work and then, you’re right, you’re absolutely right, making those intentional, systematic changes. That’s scholarly teaching. It becomes the scholarship of teaching and learning when you present it or publish it, preferably through peer-reviewed venues. But you’re absolutely right; at the heart of it, it’s scholarly teaching. It’s reflective, intentional, systematic change.

John: One of the barriers for people who are considering doing research in the scholarship of teaching and learning is going through IRB approval, and in many disciplines that’s something they haven’t experienced before. It’s common in psychology. It’s less common in economics and perhaps in art.

Rebecca: It doesn’t exist in design. [LAUGHTER]

John: Could you tell us a little bit about that process?

Regan: Sure. Every university has an institutional review board, and essentially that board is in place to make sure that any research being done isn’t harmful. Now, normally when we think about harmful we think about a drug or a food substance being tested, but here it just means any research that’s being done. So when you do the scholarship of teaching and learning, or when you’re examining your classes, yes, you could just look at your exams and see if exam scores are changing, but if you do want to publish that, if you do want to share that, you really should go through institutional review board review. Now, the key thing here: it does sound like a whole new world, and it is, but at the heart of it is a very simple process. There are three levels of review, and I think knowing about the levels helps. The first level is called an exempt review. The next level is called an expedited review, and the third level is called a full board review. I don’t think I’ve run into scholarship of teaching and learning that has gone through a full board review, because we’re not doing things that involve more than a minimal level of stress. Now you may say, hey, hang on, I didn’t know there was stress involved. Well, anytime you ask anybody to fill out a survey, there’s a minimal level of stress. And when you’re asking your students to reflect on their learning, well, that’s a minimal level of stress. Every university has its own procedure. SUNY Oswego probably has a form online. It’s a short form; you’re basically telling this board what you plan on doing, what you plan on doing with the information, and most importantly, in these kinds of cases, you are letting the board know whether or not students will be put under duress. What the IRB is going to look for is whether you, the instructor, are in some way forcing your students to do things that wouldn’t normally be done in the course of the educational process.
But at the heart of it, all you’re doing is sharing with this board what you plan to do, and most scholarship of teaching and learning is at that exempt level. That exempt level essentially translates to exempt from further review. It doesn’t mean exempt from being reviewed; it just means this is mundane and low-stress enough that it’s exempt from further review. Now, that second level, expedited: if you do want to measure or keep track of names, if you want to look at how certain names relate to scores down the line—and that’s actually some really key research—well, that’s expedited review. Even there, it’s reviewed by one person. Both the expedited and the exempt review are reviewed by one person, often the chair. It often takes no longer than a week, and by doing that you just know that all your t’s are crossed and your i’s are dotted, and it’s the ethical thing to do. So, whenever people say, “Oh, this is really mundane and I’m not really doing much more than just measuring student learning,” I still say: if there’s any chance you want to present it or publish it, make sure you go through the IRB.

John: And many journals will require evidence of completion of the IRB process.

Regan: Oh, absolutely. The moment you want to publish it you have to sign off saying that you got IRB review.

John: We do use an expedited review process on our campus. I was going to say, though, that we’re recording this a bit early because we’ve recorded a few things in advance, so we’re recording this in late October, but just yesterday I read that Rice University has introduced a streamlined expedited review process for IRB, and apparently that’s something that’s been happening at more and more campuses. Are you familiar with that?

Regan: You know, not as much, because right now there’s so much up in the air with the IRB; the national guidelines are changing. They were supposed to have changed in January, then it was moved to July. The latest I heard is it’s moved to next January. So the actual regulations are changing. Even on our own campus we switched from one form of human subjects training to another form, but this so-called short-form expedited process will definitely help. That said, even the regular expedited review is a very easy process, and I think the neat thing about this—and I tell students this when I’m teaching research methods, too—is that as the instructor or the researcher, just going through that IRB form really reminds you of some key things that you may have otherwise forgotten about, so, yes.

Rebecca: Can you talk a little bit about your own research, to give people an overview of what a project might look like from beginning to end?

Regan: Sure. What really got me interested in this is that I teach large introductory psychology classes, the class is 250 individuals, and I was struck by how, when publisher reps come into my office and try to convince me to adopt one book over another, they would talk about the pedagogical aids in the textbook: “oh, look, our book has this and our book has that.” And that really got me started studying textbooks and how students use textbooks. So the umbrella under which I do research is student studying: what’s the optimal way for students to study? I use both a social psychology and a cognitive psychology lens, or approach, to it, and it really started with looking at how they use textbook pedagogical aids. So, for example, in one of my really first studies I measured which of the different aids in a textbook the student uses, and then I used their usage to predict their exam scores. Now, I’ll take a half step back—you may not be surprised to know that students use bold terms, they use italics; that’s what they focus a lot on. Students in my study also said that they use key terms a lot. Now, if you’re studying key terms, that should be good. If you’re making flashcards and studying those key terms, that should be good. But what I found, and this is what really surprised me and got me doing this even more, is that the more students used key terms, the worse their exam scores. There was this negative correlation, and that’s completely counterintuitive. Why would they go the opposite direction? So, I dug into it some more, and I realized that students spend so much time on key terms, or so much time on flashcards, that they’re not studying in any other way. So even though they’re using flashcards, they’re so intent on memorizing and surface-level processing that they’re not doing deeper-level processing.
So, that was some years ago, and I’ve been building on that, trying to unpack how students study. My most recent study… that’s actually under review right now… a colleague, Kate Burns, and I took two of the most recommended cognitive psychology study techniques, repeated practice (testing yourself frequently) and spaced practice (spacing out your studying), and across nine different campuses we divided up classes such that the students in those classes were using either high or low levels of each of these. So, in one study across multiple campuses, we tested: is there a main effect of one of these types of studying, or is there an interaction? And what we found is that there is an interaction, and the critical component seems to be spacing out your studying. Not so much repeating your studying, but really spacing out your studying. And I think what’s interesting here is why this is happening: the students who said they were testing themselves repeatedly, that sounds great, and if you’re a cognitive psychologist you say, hey, the lab says repeated testing is great; the problem is that in the classroom a lot of students who were repeatedly testing themselves were doing so during a really short period of time.

John: Right, I’ve seen that myself.

Regan: And I think that’s the issue, but because we had both these factors in the study, we could actually tease that out. So that’s the kind of work that I do: take a look at what the cognitive lab says is important, and see how it works in the actual classroom.

John: Now was this a controlled experiment? Or was this based on the students’ behavior?

Regan: So, yes and no, okay. [LAUGHTER] I love this study for a number of reasons. Number one, we tested two different techniques in the same study. Number two, we did it at multiple institutions, so it’s not just my classroom. A lot of SOTL is one class. So, here we went beyond that to try to generalize. But, to get to your question, we actually used a true experimental design. So we recruited these different campuses and we assigned a classroom. So, for example, I’d say, “Hey John, thanks for taking part. Can you have your students do high repetition and high spacing?” “Hey Rebecca, thanks for taking part. Could you have your students do high repetition and low spacing?” And that’s how we spread it out. We had about two campuses in each of these cells. That’s the true experiment on paper. To get to the other part of what you said… in reality, that’s not exactly what students always did. And you know students; we can tell them to do something, but a whole bunch of things get in the way. Fortunately, of course, we measured self-reports of what students said they actually did, and it was relatively close to the study cells, but even though it varied a little bit we could still control for it. So, yes, it was as close to a controlled study as you can get when you’re controlling something in the real world across nine campuses.
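The 2×2 design Regan describes (repetition × spacing) comes down to a difference-of-differences question: does the effect of spacing depend on the level of repetition? Here is a minimal sketch of that interaction contrast, using invented cell means rather than data from the actual study:

```python
# Illustrative sketch (hypothetical data, not the actual study's results):
# estimating the interaction in a 2x2 design (repetition x spacing).

# Mean exam scores per cell: (repetition, spacing) -> mean score.
# These numbers are invented purely for illustration.
cell_means = {
    ("high", "high"): 82.0,
    ("high", "low"): 74.0,
    ("low", "high"): 80.0,
    ("low", "low"): 75.0,
}

def interaction_contrast(means):
    """Difference of differences: how much the effect of spacing
    changes depending on the level of repetition."""
    spacing_effect_high_rep = means[("high", "high")] - means[("high", "low")]
    spacing_effect_low_rep = means[("low", "high")] - means[("low", "low")]
    return spacing_effect_high_rep - spacing_effect_low_rep

print(interaction_contrast(cell_means))  # prints 3.0 (8.0 - 5.0)
```

A contrast near zero would suggest the two techniques act independently (main effects only); a clearly non-zero contrast, as in the study Regan describes, points to an interaction.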

John: That brings us to the general question of how you construct controls. Suppose that you make a change in your class; how do you get the counterfactual?

Regan: Right.

John: What would be some examples for people designing an experiment?

Regan: The word control, especially in research, carries the connotation of a control group, and controlling for factors is different from having a control group. Optimally we’d love a control group. The problem with a control group is that it means no treatment. So, very often a true control group means this group of students is not getting something. From a philosophical and an ethical standpoint, I don’t like the notion of one group not getting something. So, the word I like to use is comparison group. Your question still holds: what’s the comparison group? Here’s where, if you’re fortunate enough to teach multiple sections, one of the sections can be the comparison group. If you’re not fortunate enough to have multiple sections, you compare the students this semester with the students last semester, when you weren’t doing that new, funky innovation. So, there are a bunch of different ways to gather the comparison group, but you’re absolutely right: having a comparison group is important. Most commonly in the scholarship of teaching and learning, the comparison is the students before the intervention, so it’s a classic pre- and post-measure. I’ll give you this quiz before I’ve introduced the material, I give you an equivalent quiz after, and let’s see if there are changes in learning. That’s the most common comparison; you’re comparing them with themselves before, but optimally, again, you want a different section, a group of students from a different semester, and so on.
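The pre/post comparison Regan describes can be sketched in a few lines. This is an illustrative sketch with invented scores (not real student data): each student serves as their own comparison, the gain is post minus pre, and a paired t-statistic scales the mean gain by its standard error.

```python
import statistics

# Illustrative sketch (invented scores): the classic pre/post
# comparison, where each student serves as their own comparison.
pre  = [55, 60, 48, 70, 62, 58]
post = [68, 72, 55, 78, 70, 66]

# Per-student learning gain: post score minus pre score.
gains = [b - a for a, b in zip(pre, post)]
mean_gain = statistics.mean(gains)

# Paired t-statistic: mean gain divided by its standard error
# (sample standard deviation of the gains over sqrt(n)).
se = statistics.stdev(gains) / len(gains) ** 0.5
t = mean_gain / se
print(round(mean_gain, 2), round(t, 2))  # prints: 9.33 9.13
```

With multiple sections or semesters available, the same logic extends to comparing gains between the intervention group and the comparison group rather than only within one class.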

John: And it’s best if you have some other controls…

Regan: Absolutely.

John: for student ability and characteristics.

Regan: You nailed one of the key ones—my two favorites are effort and ability. As much as possible, measure their GPA. If they’re first-year students, measure their high school ACT scores or their high school GPA, and then you want to measure effort as well. I think those two are probably the usual suspects for controls. And again, a lot of SOTL doesn’t do that, and it should.

Rebecca: I think one thing that comes up a lot for me (and maybe for others in disciplines similar to my own) is that the kind of research we do is generally not this kind of research, but we’re really interested in what’s happening in our classrooms. So, for faculty who might be in the arts or some other area where we do really different kinds of research, how would you recommend partnering or doing this kind of work without that background?

Regan: And I think implicit in your question is: “Do I need to have a certain methodological tool bag?” I remember I was at a conference once and somebody accosted me and said, “Hey, is it true that you have to be a social scientist to do this work?” The answer is no, and I wrote a pretty funky essay called “Get Foxy,” which is about how social scientists can benefit from the methodologies of the humanists and vice versa. You’re right; you can collaborate if you need to do that kind of work, but there are a lot of questions even within your discipline… and when I think about SOTL, I think about answering questions about teaching and learning with the tools of your discipline. I’ll give you an example: a good friend of mine is an artist, and her project, or something that she wanted to dig into, was to improve student critiques in an art class. Here we have students learning how to do art (I think it was drawing or jewelry making), and across the course of the semester everybody had to present their work and then critique each other’s work… and those critiques just didn’t have the teeth she wanted them to, so she was giving the students skills in how to critique. So here’s the question: how did she know whether or not the critiquing skills were improving? Well, she came up with a simple rubric to score the critiques against and looked at whether the scores changed. Now, you may say, well, very often in the arts and theater you don’t get the skills to do that, which is true, but that’s where I think collaboration comes in, and that’s what’s really neat about the scholarship of teaching and learning: very often there are collaborations like that. I have a historian on my campus who wanted to improve the quality of his students’ essays, and he and David Voelker changed how he was teaching, wanted to see how it rolled out, and had students use themes in their essays in a different way.
Well, he compared, and John, this goes back to your point, essays from before the change with essays from after the change, counted up the number of themes students had, and then, Rebecca, to your point, went over to my colleague in psychology and said, hey, can you tell me if this is statistically different. So, he didn’t even have to do the stats himself; he just said, “Hey look, I don’t need to do the stats.” And literally within minutes my colleague in psychology had done the stats for him. I think that’s the kind of stuff that can happen to truly get at those answers when you go, “You know, I don’t know how to do that.” But you’d be surprised… the basic skills for SOTL can give you enough to test questions pretty well.

Rebecca: I think John and I have also found in the teaching center that it’s really exciting when faculty from different disciplines start talking about their research on learning, because there are things we can learn from each other, and talking across disciplines more can be really valuable as well.

Regan: Right, and I think this is where reading the rich literature that exists in your discipline, or even across disciplines, on the scholarship of teaching and learning really gives you a leg up, because I find now, when I do workshops and somebody says, “You know, I’ve got this question; I don’t know how to start,” more often than not it’ll remind me of a study, and I can say, hey, here’s what you can do. And it’s just because I read a lot and I’ve got all that in my head, and I just match it to that question, and it’s pretty easy. I mean, very rarely do we have to invent something from scratch. We go, “Hey, yeah, you know what? Here’s a study that’s pretty close to the question you have; let’s use that methodology.”

Rebecca: So, how do we build a culture of the scholarship of teaching and learning in departments that might have faculty who are resistant to the idea of their colleagues spending their time doing that? How do we start changing minds and really building a culture that embraces the idea of the scholarship of teaching and learning?

Regan: Well, I think you’ve got to attack it from two different levels. You definitely want a champion in the administration who is educated enough about the scholarship of teaching and learning and how it can be done robustly. If you can convince somebody of its worth, and then if you ask “How do you do that?”… well, that’s where you need to make sure you have at your fingertips, as a teaching and learning center, the exemplars of really robust work… and if you have that really robust work at your fingertips, that’s definitely a key place to start. Along those lines, for trying to convince people (especially administrators) of the worth of the scholarship of teaching and learning, I recommend a 2011 publication by Hutchings, Huber, and Ciccone called The Scholarship of Teaching and Learning Reconsidered. This 2011 publication is a great collection; it does your homework for you. That one book pulls together evidence for why the scholarship of teaching and learning helps students, helps faculty, helps institutions. So that’s the top down—get your administrators to check that book out and go, “Oh yeah, look, there is actually some good research.” Coming at it from the other angle—I know this for a fact—there are people on your campus doing some of that work, but often they may be isolated; they may be a small group. You want to strengthen them so that they can spread it to their circles, and that’s really how it starts. On my campus, when Scott was the Dean at Green Bay, we did a lot to develop the scholarship of teaching and learning through the teaching center. There was one year where we had 14 faculty who got together every month and talked about their projects. Now you may say, well, that’s 14 and you had 160 faculty. You know what? You do that every year, and colleagues see the value of the work those 10 or 14 are doing; pretty soon you’re gonna have a culture where people recognize it more and appreciate it more.
So I think that’s how it goes… you put your efforts on those people who are already doing it to make them stronger and that’s gonna spill over and pretty soon you’re gonna win over folks.

John: We’ve generally had support from the upper administration, and there have often been a lot of new faculty interested in doing it; it’s usually the promotion and tenure committees that have served as a barrier in some departments, but we’ll work on that and we need to keep working on that.

Regan: Well, just along those lines on our campus we felt so strongly about the scholarship of teaching and learning that the Faculty Senate actually passed a resolution recognizing the importance of scholarship of teaching and learning. Now again, it still gave department chairs some leeway, but at least the faculty voted on it as something that the university values and that goes a really long way to having especially junior faculty say, you know, I can do this.

Rebecca: Certainly makes faculty, especially junior faculty, feel supported when the Senate is saying, “Yes, we believe in this” and it’s not just one person saying we don’t.

Regan: Absolutely. And there’ll be naysayers. We started off this conversation with “there are people out there who think it’s not good enough,” and there are, but I’ve had conversations with such people on my campus where sharing some information, sharing things about how it’s done, goes a long way toward changing minds.

John: In my department, it’s helped that I’ve been the chair of our search committee for a few decades now. We’ve generally hired people who are interested in this, but that’s not the case in all of our departments yet, but we’re hoping that’ll change. For those who have small classes or may not be interested in doing research in their own classes, one other option is meta-analysis. Could you talk a little bit about that?

Regan: So, a meta-analysis is one study that takes a look at a lot of different studies, and there is a mother of all meta-analyses that we should talk about, because I think the interested person can run to it. John Hattie, now at the University of Melbourne, actually did a meta-meta-analysis; he took 900 meta-analyses and then synthesized the data from those 900 studies that had already synthesized data. The reason I like talking about that is the sample size, when you take all those 900 meta-analyses, is a quarter of a billion, with a “b”; that’s a lot of data points, it’s a lot of students. What’s neat about a meta-analysis is that instead of just being one study at one place, it’s now multiple studies over multiple contexts, and if you can find an effect over multiple contexts, that’s really saying something, because a lot of single studies are so geared to the local context of where they were done. So if anybody listening pulls up an educational journal or a SOTL journal and sees meta-analysis in the title, I would spend more time reading that one, because it’s gonna be more likely that you can generalize from it. Statistical and methodological advances now mean that there are more meta-analyses around, and more meta-meta-analyses around as well.
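To make the pooling idea concrete, here is a minimal fixed-effect meta-analysis sketch with invented effect sizes; this is not Hattie’s method or data, just the basic inverse-variance weighting that most meta-analyses build on, where more precise studies count more toward the pooled estimate:

```python
# Illustrative sketch (invented numbers): fixed-effect meta-analysis
# via inverse-variance weighting. Each study contributes its effect
# size weighted by 1/variance, so more precise studies count more.

# (effect size d, variance) for three hypothetical SOTL studies
studies = [(0.40, 0.04), (0.25, 0.02), (0.55, 0.08)]

weights = [1 / var for _, var in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_variance = 1 / sum(weights)

print(round(pooled, 3), round(pooled_variance, 4))  # prints: 0.336 0.0114
```

Note how the pooled variance is smaller than any single study’s variance, which is the statistical payoff of combining studies across contexts that Regan describes.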

Rebecca: As an advocate for the scholarship of teaching and learning, where do you hope the scholarship of teaching and learning goes in the next five years?

Regan: Honestly, I think it should be a part of every teacher’s repertoire. When I think about a model teacher… and it’s not just when I think about it… I’ve published on evidence-based college and university teaching, and when my co-authors and I looked at all the evidence out there and what makes a successful university teacher… one of those components, and we found six… I mean, it wasn’t just student evaluations, no, it was your syllabi, it was your course design, but one big element was doing the scholarship of teaching and learning… and to answer your question, I think if, five years from now, we can see it be part of teacher training to look at your class with that intentional, systematic lens, that’s where the field needs to get to.

John: At the very least it would get people to start considering evidence-based teaching practices instead of just replicating whatever was done to them in graduate school.

Regan: Absolutely. People would be surprised at how much good SOTL there is out there, and I always like sending folks to the Kennesaw State Center for Teaching and Learning, where they have a list of journals in SOTL in essentially every field. You will scroll through that list for ages, and it is just mind-boggling to realize that, “Wow, SOTL has been going on for a very long time.” And Rebecca, you mentioned art and performance arts and theater and music… not as much, but even there, there is a fair amount, and I think it’s just a question of making those resources more available to individuals, and that’s why whenever I interact with teaching and learning centers I have a short list of key resources to look at. And again, that’s on my SOTL link. But even that small list is an eye-opener to most people who never knew this existed, and I think once they realize it’s there, they will start seeing it everywhere… and once you start doing it, it really energizes you. For those of us who’ve been teaching for 20-plus years, to look at our classes with that new eye of “How can I change something? How can I make it better?” and then see the positive effects of those changes… that’s invigorating.

Rebecca: I’m energized after having this conversation.

Regan: It is good stuff.

Rebecca: Yeah.

Regan: I just got back from a three-day conference and all we did was sit around and talk about cool SOTL. And you’re right… I came back and, sitting on the plane, I was texting people with study ideas to collaborate on. It was that exciting.

Rebecca: The more you talk… collaborate… the more it happens.

Regan: There you go.

Rebecca: So, we always wrap up by asking, what’s next?

Regan: You know, I think I like getting the bang for my buck, and you mentioned this in the intro: right now I’m working on the American Psychological Association’s Introductory Psychology Initiative, and what’s next is basically two years of really focusing on the introductory psychology course. It’s taken by close to 1.5 million students a year, and I’d like to make sure we can make that course the best possible learning experience for our students, so that’s where my energy is gonna be for the next little bit.

John: That’s a big task and a very useful one.

Rebecca: And definitely worthwhile. Well, thank you so much for spending some time with us this afternoon. It’s been eye-opening and exciting… energizing. I can’t wait to look through some of the resources.

Regan: You know, if there’s anything else that you’d like, get in touch… and I welcome anybody listening to get in touch as well.

John: Thank you, and we’ll share links to the resources you mentioned in the show notes.

Regan: Sounds good.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts, and other materials on teaforteaching.com. Music by Michael Gary Brewer.

John: Editing assistance provided by Kim Fischer, Brittany Jones, Gabriella Perez, Joseph Santarelli-Hansen, and Dante Perez.

[MUSIC]

49. Closing the Performance Gap

Sometimes, as faculty, we are quick to assume that performance gaps in our courses are due to the level of preparedness of students rather than what we do or do not do in our departments. In this episode, Dr. Angela Bauer, the chair of the Biology Department at High Point University, joins us to discuss how community building activities and growth mindset messaging combined with active learning strategies can help close the gap.

Show Notes

  • “Success for all Students: TOSS workshops” – Inside UW-Green Bay News (This includes a short video clip in which Dr. Bauer describes TOSS workshops)
  • Dweck, C. S. (2008). Mindset: The new psychology of success. Random House Digital, Inc.
  • Barkley, E. F., Cross, K. P., & Major, C. H. (2014). Collaborative learning techniques: A handbook for college faculty. John Wiley & Sons.
  • Life Sciences Education
  • Steele, C. M., & Aronson, J. (1995). Stereotype threat and the intellectual test performance of African Americans. Journal of Personality and Social Psychology, 69(5), 797.
  • Steele, C. M. (1997). A threat in the air: How stereotypes shape intellectual identity and performance. American Psychologist, 52(6), 613.
  • The Teaching Lab Podcast – Angela Bauer’s new podcast series. (Coming soon to iTunes and other podcast services)

Transcript

Coming Soon!

37. Evidence is Trending

Faculty are increasingly looking to research on teaching and learning to make informed decisions about their practice as a teacher and the policies their institutions put into place. In today’s episode, Michelle Miller joins us to discuss recent research that will likely shape the future of higher education.

Michelle is Director of the First-Year Learning Initiative, Professor of Psychological Sciences, and President’s Distinguished Teaching Fellow at Northern Arizona University. Dr. Miller’s academic background is in cognitive psychology. Her research interests include memory, attention, and student success in the early college career. She co-created the First-Year Learning Initiative at Northern Arizona University and is active in course redesign, serving as a redesign scholar for the National Center for Academic Transformation. She is the author of Minds Online: Teaching Effectively with Technology and has written about evidence-based pedagogy in scholarly as well as general interest publications.

Show Notes

Rebecca: Faculty are increasingly looking to research on teaching and learning to make informed decisions about their practice as a teacher and the policies their institutions put into place. In today’s episode we talk to a cognitive psychologist about recent research that will likely shape the future of higher education.
[Music]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

[Music]

John: Our guest today is Michelle Miller. Michelle is Director of the First-Year Learning Initiative, Professor of Psychological Sciences, and President’s Distinguished Teaching Fellow at Northern Arizona University. Dr. Miller’s academic background is in cognitive psychology. Her research interests include memory, attention, and student success in the early college career. She co-created the First-Year Learning Initiative at Northern Arizona University and is active in course redesign, serving as a redesign scholar for the National Center for Academic Transformation. She is the author of Minds Online: Teaching Effectively with Technology and has written about evidence-based pedagogy in scholarly as well as general interest publications.
Welcome, Michelle!

Michelle: Hi, I’m so glad to be here.

Rebecca: Thanks for joining us.
Today’s teas are:

Michelle: I’m drinking a fresh peppermint infused tea, and it’s my favorite afternoon pick-me-up.

Rebecca: …and it looks like it’s in a really wonderfully designed teapot.

Michelle: Well, thank you… and this is a thrift store find… one of my favorite things to do. Yeah, so I’m enjoying it.

John: I have Twinings Blackcurrant Breeze.

Rebecca: …and I’m drinking chai today.

Michelle: Pretty rough.

John: We invited you here to talk a little bit about things that you’ve been observing in terms of what’s catching on in higher education… new and interesting innovations in teaching.

Michelle: Right, that’s one of the things that I really had the luxury of being able to step back and look at over this last semester and over this last spring when I was on sabbatical… One of the really neat things about my book Minds Online, especially now that it’s been out for a few years, is that it does open up all these opportunities to speak with really engaged faculty and others, such as instructional designers, librarians, academic leadership, educational technology coordinators… all these individuals around the country who are really, really involved in these issues. It’s a great opportunity to see how these trends, how these ideas, how these innovations are rolling out, and these can be some things that have been around for quite some time and just continue to rock along and even pick up steam, and some newer things that are on the horizon.

John: You’ve been doing quite a bit of traveling. You just got back from China recently, I believe.

Michelle: I sure did. It was a short visit and I do hope to go back, both to keep getting involved in educational innovations there and, hopefully, as a tourist as well. So, I was not there for very long, but I had the opportunity to speak at Tsinghua University in Beijing, which is a really dynamic institution that’s been around for about a hundred years. For a while in its history it specialized in things like engineering and polytechnic education, but now it’s really a selective comprehensive university with very vibrant graduate and undergraduate programs that are very relatable for those of us in the United States working in similar contexts. My invitation was to be one of the featured speakers at the Future Education, Future Learning Conference, which was a very interdisciplinary gathering of doctoral students, faculty, even others from the community, who were all interested in the intersection of things like technology, online learning, MOOCs even, and educational research (including research into the brain and cognitive psychology), and bringing all of those together… and it was a multilingual conference. I do not speak Chinese, but much of the conference was in both English and Chinese, and so I was also able to really absorb a lot of these new ideas. So yes, that was a real highlight of my sabbatical semester and one that I’m going to be thinking about for quite some time.

I should say that part of what tied in there as well is that Minds Online, I’ve just learned, is going to be translated into Chinese and that’s going to come out in May 2019. So, I also got to meet with some of the people who were involved in the translation… start to put together some promotional materials such as videos and things like that.

Rebecca: Cool.

John: Excellent.

Rebecca: So, you’ve had a good opportunity, as you’ve been traveling, to almost do a scavenger hunt of what faculty are doing with evidence-based practices related to your book. Can you share some of what you’ve found or heard?

Michelle: This theme of evidence-based practice, really tying into the findings that have been coming out of cognitive psychology for quite some time… that really is one of the exciting trends, and something I was really excited to see and hear from so many different quarters as I visited different institutions… so I would say definitely, this is a trend that is continuing and increasing. There really does continue to be a lot of wonderful interest and wonderful activity around these cognitively informed approaches to teaching, and what I think we could call scientifically based and evidence-based strategies. One form this has taken is Josh Eyler’s new book, called How Humans Learn: The Science and Stories behind Effective College Teaching. This is a brand new book by a faculty development professional, and a person coming out of the humanities, actually, who’s weaving together everything from evolutionary biology to classical research in early childhood education to the latest brain-based research into this new book for faculty. So, that’s one of the things that I’ve noticed, and then there’s another great illustration of a best-known practice, which is the testing effect and retrieval practice.

John: One of the nice things is how so many branches of research are converging… testing in the classroom, brain-based research, and so forth, are all finding those same basic effects. It’s nice to see such robust results, which we don’t always see in all research in all disciplines.

Rebecca: …and just breaking down the silos in general. These things are all related, and finding out what those relationships are… exploring those relationships… is really important, and it’s nice to see that it’s starting to open up.

John: We should also note that when you visited here, we had a reading group and we had faculty working on trying to apply some of these concepts, and they’re still doing that… and they still keep making references back to your visit. So, it’s had quite a big impact on our campus.

Michelle: This wasn’t true, I don’t think, when I first entered the teaching profession… or even when I first started getting interested in applied work in course redesign and in faculty professional development. You would get kind of this pushback, or just strange looks, when you said “Oh, how about we bring in something from cognitive psychology?” and now that is just highly normalized and something that people are really speaking about across the curriculum… and taking it and running with it in a lasting, ongoing way, not just as a “Oh, well, that was an interesting idea. I’m going to keep doing what I’m doing,” but really people making some deep changes, as you mentioned. This theme of breaking down silos… I think if there’s one umbrella trend that all of these things fit under, it’s that breakdown of boundaries. So, that’s one that I keep coming back to in my work.

So, the idea of retrieval practice… drilling down on that one key finding, which goes back a very long way in cognitive psychology. I think of that as such a good example of what we’re talking about here: this is a very detailed effect in cognition, and yet it has these applications across disciplinary silos. Now when I go to conferences and I say “Okay, raise your hand. How many people have ever heard of retrieval practice? How many people have ever heard of the testing effect? How many people have heard of the book Make it Stick (which really places this phenomenon at its center)?” I’m seeing more hands raising.

With retrieval practice, by the way, we’re talking about the principle that taking a test on something… that retrieving something from memory actively… has this huge impact on the future memorability of that information. As its proponents like to say, tests are not neutral from a memory or learning standpoint… and while some of the research has focused on very stripped-down, laboratory-style tasks like memorizing word pairs, there are also other research projects showing that it does carry over to more realistic learning situations.

So, more people simply know about this, and that’s really the first hurdle, oftentimes, with getting this involved, sometimes jargon-riddled disciplinary research out there to practitioners and into their hands. So, people have heard of it and they’re starting to build this into their teaching. As I’ve traveled around, I love to hear some of the specific examples, and to see it crop up in the scholarship of teaching and learning as well.

Just recently, for example, I ran across and really got into the work of Bruce Kirchhoff, who is at the University of North Carolina – Greensboro, and his area is botany and plant identification. He has put together some really interesting technology-based apps and tools that students and teachers can use in something like a botany course to rehearse and review plant identification. He says in one of his articles, for example, that there just isn’t time in class to really adequately master plant identification; it’s too complex a perceptual, cognitive, and memory task to do that. So, he really built in from the get-go very specific principles drawn from cognitive psychology… the testing effect is in there… there are different varieties of quizzing, and it’s all about getting students to retrieve and identify example after example. It brings in principles such as interleaving, which we could return to in a little bit, but which has to do with the sequencing of different examples… their spacing… So, it’s all planned out: exactly how and when students encounter the different things that they’re studying. It’s really wonderful. For example, he and his colleagues put out a scholarship of teaching and learning article talking about how this approach was used effectively with veterinary medicine students, who have to learn to identify poisonous plants that they’ll see around their practice. This is something that can be time-consuming and very tough, but they have some good data showing that this technology-enhanced, cognitively based approach really does work. That’s one example.
Coincidentally, I’ve seen some other work in the literature, also on plant identification, where the instructors tagged plants in an arboretum… they went around and tagged them with QR codes… so that students can walk up to a plant in the real environment with an iPad… hold the iPad over it… and it will immediately start producing quiz questions that are specific to exactly the plant they’re looking at.
So, those are some of the exciting things that people are taking and running with now that this principle is out there.

Rebecca: What I really love about the two stories that you just shared is that the faculty are really designing their curriculum and designing the learning experiences with the students in mind… with what students need and when they need it. So, not only is it employing these cognitive science principles, but it’s actually applying design principles as well. It’s really designing for a user experience, and thinking about the idea that if I need to identify a plant, being able to identify it in the situation in which I would need to makes it much more dynamic, I think, for a student… but it also really meets them where they’re at and where they need it.

John: …and there are so many apps out there now that will do the plant identification just from imagery, without the QR code, that I can see it taking one step further, where they can do it in the wild without having that… so they can build it in for plants that are in the region without needing to encode them specifically for the application.

Michelle: I think you’re absolutely right. Once we put the technology in the hands of faculty who, as I said, are the ones who know “Where are my students at? Where are the weak points? Where are the gaps that they really need to bridge?”… that’s where their creativity is giving rise to all these new applications… and sometimes these can be low-tech as well… or things that we can put in a face-to-face environment… and I’d like to share some experiences that I’ve had with this over the last few semesters.

In addition to trying to teach online with a lot of technology, I also have in my teaching rotation a small required course in research methods in psychology, which can be a real stumbling block… the big challenge course… it’s kind of a gateway course to continued progress in our major. So, in this research methods course, some of the things that I’ve done around assessment and testing really try, again, to stretch that retrieval practice idea… to make assessments a more dynamic and more central part of the course… to move away from that idea that tests are just this every-now-and-again panic-mode opportunity for me to measure and sort students and judge them… to make good on the idea that tests are part of learning. So, here are some of the things that I try to do. For one thing, I took time out of class almost every single class meeting, as part of the routine, to have students first of all generate quiz questions out of their textbook. We do have a certain amount of foundational material in that course, as well as a project and a whole lot of other stuff going on, so they need to get that foundational stuff.

Every Tuesday they would come in and they knew their routine: you get index cards, you crack your textbook, and you generate for me three quiz questions. Everybody does it. I’m not policing whether you read the chapter or not. It’s active… they’re generating it… and it also makes for something like frequent quizzing. That’s a great practical advantage for me, since I’m not writing everything. They would turn those in, I would select some of my favorites, turn those into a traditional-looking paper quiz, and hand that out on Thursday. I said, “Hey, take this like a realistic quiz.” I had explained to them that quizzes can really boost their learning, so that was the justification for spending time on it, and then I said: “You know what? I’m not going to grade it either. You take it home, because this is a learning experience for you. It’s a learning activity.” So we did that every single week, and those students got into that routine.

The second thing that I did to really re-envision how assessment, testing, and quizzing worked in this particular course was something inspired by different kinds of group testing and exam wrapper activities I’ve seen, particularly coming out of the STEM fields, where there’s been a lot of innovation in this area. What I would do is… we had these high-stakes exams at a few points during the semester. But the class day after the exam, we didn’t do the traditional “Let’s go over the exam.” [LAUGHTER] That’s kind of deadly dull, and it just tends to generate a lot of pushback from students… and as we know from the research, simply reviewing… passing your eyes over the information… is not going to do much to advance your learning. So, what I would do is photocopy all those exams, so I had a secure copy. They were not graded; I would not look at them before we did this… and I would pass everybody’s exams back to them along with a blank copy of that same exam. I assigned them to small groups and I said, “Okay, here’s your job. Go back over this exam and fill it out as perfectly as you can as a group.” To make it interesting, I said I would grade that exam as well, the one they did with their group, and anything over 90% got added to everybody’s grade. This time it was open book, it was open Google, it was everything except you can’t ask me questions. So, you have each other, and that’s where these great conversations started to happen… the things that we always want students to say. I would eavesdrop and hear students say, “Oh, well, you know what, I think on this question she was really talking about validity, because reliability is this other thing…” and they’d have a deep conversation about it. I’m still going back through the numbers to see: what are the impacts on learning? Are there any trends that I can identify?
But, I will say this: in the semesters that I did this, I didn’t have a single question ever come back to me along the lines of “Well, this question was unclear. I didn’t understand it. I think I was graded unfairly.” It really did shut all that down and, again, extended the learning that I feel students got out of it. Now, it meant a big sacrifice of class time, but I feel strongly enough about these principles that I’m always going to do this in one form or another, anytime I can, in face-to-face classes.

Rebecca: This sounds really familiar, John.

John: I’ve just done the same, or something remarkably similar, this semester, in my econometrics class, which is very similar to the psych research methods class. I actually picked it up following a discussion with Doug McKee, who was doing it this semester too; he had a podcast episode on it. It sounded so exciting that I tried it, though I did something a little bit different. I actually graded the exam, but I didn’t give it back to them, because I wanted to see what they had the most trouble with, and then I was going to have them answer in a group only the questions they had struggled with… and it turned out that that was pretty much all of them anyway. So, it’s very similar to what you did, except I gave them a weighted average of their original grade and the group grade, and all except one person improved. The one person’s score went down by two points because the group grade was just slightly lower… but he did extremely well and he wasn’t that confident. The benefit to them of that peer explanation was just tremendous, and it was so much more fun for them and for me… and, as you said, it just completely wiped out all those things like “Well, that was tricky,” because when they heard their peers explaining it, the students were much more likely to respond by saying “Oh yeah, I remember that now.” It was a wonderful experience, and I’m gonna do that everywhere I can.

In fact, I was talking about it with my TA just this morning here at Duke, and we’re planning to do something like that in our classes here at TIP this summer, which I think is somewhat familiar to you from earlier in your academic career.

Michelle: That is right, we do have this connection. I was among, not the very first year, but I believe the second cohort of Talent Identification Program students, who came in what I guess you would now call middle school (back then, it was called junior high)… and what a life-transforming experience. We’ve had even more opportunities to talk about the development of all these educational ideas through that experience.

John: That two-stage exam is wonderful and it’s so much more positive… and it didn’t really take much more time in my class, because I would have spent most of that class period going over the exam and the problems they had. But the students who did well would have been bored and not paying much attention; the students who did poorly would just be depressed and upset that they did so poorly… and here, they were actively processing the information, and it was so positive.

Michelle: That’s a big shift. We really have to step back and acknowledge that, I think. That is a huge shift in how we look at assessment, and how we think about the use of class time… and it’s not just “Oh my gosh, I have to use every minute to put content in front of the students.” Just the fact that more of us are making that leap, I think, really is evidence that this progress is happening… and we also see a lot of raised consciousness around issues such as learning styles. That’s another one where, when I go out and speak to faculty audiences… 10 years ago you would get these shocked looks or even very indignant commentary when you said, “Okay, this idea of learning styles, in the sense that, say, there are visual learners, auditory learners… what I call sensory learning styles (VAK is another name it sometimes goes by)… that idea just holds no water from a cognitive point of view.” People were not good with that, and now when I mention it at a conference, I get the knowing nods and even a few groans… people are like, “Oh, yeah, we get that.” Now, K-12, which I want to acknowledge is not my area… I’m constantly reminded by people across the spectrum that it’s a very different story in K-12. So, setting that aside… this is what I’m seeing: faculty are realizing… they’re saying, “Oh, this is what the evidence says…” and maybe they even take the time to look at some of the really great thinkers and writers who have put together the facts on this. They say, “You know what? I’m not going to take my limited time and resources and spend them on this matching to styles, when the styles can’t even be accurately diagnosed and are of no use in a learning situation.” So, that’s another area of real progress.

Rebecca: What I am hearing is not just progress in terms of cognitive science, but a real shift towards really thinking about how students learn and designing for that, rather than something that would sound more like a penalty for a grade, like “Oh, did you achieve? Yes or no…” but, rather, here’s an opportunity, if you didn’t achieve, to now actually learn it… and to recognize that you haven’t learned it, even though it might seem really familiar.

John: Going back to that point about learning styles: awareness of that research is spreading in colleges. I wish it were true in all the departments at our institution, but it’s getting there gradually… and whenever people bring it up, we generally remind them that there’s a whole body of research on this, and I’ll send them references. But what’s really troubling is that in my classes the last couple of years, I’ve been using this metacognitive cafe discussion forum to focus on student learning… and one of the week’s discussions is on learning styles, and generally about 95 percent of the students, who are typically freshmen or sophomores, come in with a strong belief in learning styles… they’ve been tested multiple times in elementary or middle school… they’ve been told what their learning styles are… they’ve been told they can only learn that way… It discourages them from trying to learn in other ways and it does a lot of damage… and I hope we eventually reach out further so that it just goes away throughout the educational system.

Rebecca: You’ve worked in your classes, Michelle, haven’t you, to help students understand the science of learning and to use that to help them understand the methods and things that you’re doing?

Michelle: Yes, I have. I’ve done this in a couple of different ways. Now, partly, I get a little bit of a free pass in some of my teaching, because I’m teaching introduction to psychology or research methods, where I can just happen to sneak in, as the research example, some work on, say, attention or distraction or the testing effect. So, I get to do it in those ways covertly. I’ve also had the chance, although it’s not on my current teaching rotation, to take it on in freestanding courses. As many institutions are doing these days… it’s another trend… Northern Arizona University, where I work, has different kinds of freshman or first-year course offerings that students can take, not in a specific disciplinary area, but that really cross some different areas of student success or even wellbeing. So, I taught a class for a while called Maximizing Brain Power that was about a lot of these different topics. Not just the very generic study skills tips… “get a good night’s sleep…” that kind of thing… but really some more evidence-based things that we can tell students, and you can really market it… and I think that we do sometimes have to play marketers and say, “Hey, I’m going to give you some inside information here. This is sort of gonna be your secret weapon. So, let me tell you what the research has found.”

So, those are some of the things that I share with students… as well as when the right moment arises, say after an exam or before their first round of small stakes assessments, where they’re taking a lot of quizzes… to really explain the difference between this and high stakes or standardized tests they may have taken in the past. So, I do it on a continuing basis. I try to weave it into the disciplinary aspect and I do it in these free-standing ways as well… and I think here’s another area where I’m seeing this take hold in some different places… which is to have these free-standing resources that also just live outside of a traditional class that people can even incorporate into their courses… if say cognitive psychology or learning science isn’t their area… that they can bring in, because faculty really do care about these things. We just don’t always have the means to bring them in in as many ways as we would like.

John: …and your Attention Matters project was an example of that, wasn’t it? Could you tell us a little bit about that?

Michelle: Oh, I’d love to… and you know this connects to what seems to be kind of an evergreen topic in the teaching and learning community these days, which is the role of distracted students… and I know this past year there have just been one op-ed after another. There have been some really good blog posts by some people I really like to follow in the teaching and learning community, such as Kevin Gannon, asking “Okay, do you have laptops in the classroom? …and what happens when you do?” and so I don’t think that this is just a fad that’s going away. This is something that people do continue to care about, and this is where the Attention Matters project comes in.

This was something that we conceptualized and put together a couple of years ago at Northern Arizona University. Primarily, I collaborated with a wonderful instructional designer who also teaches a great deal… John Doherty. So, how this came about is that I was seeing all the information on distraction… really getting into this as a cognitive psychologist and going “Wow, students need to know that if they’re texting five friends and watching a video in their class, it’s not going to happen for them.” I was really concerned about “What can I actually do to change students’ minds?” So, my way of doing this was to go around giving guest presentations in any classes where people would let me burn an hour of their class time… not a very scalable model… and John Doherty respectfully sat through one of my presentations on this, and then he approached me and said “Look, you know, we could make a module and put this online… and it could be an open-access module within the institution, so that anybody at my school can just click in and they’re signed up. We could put this together. We could use some really great instructional design principles and we could just see what happens… and I bet more people would take that if it were done in that format.” We did this with no resources. We were just passionate about the project and that’s what we did. We had no grant backing or anything. We got behind it. So, what this is is about a one- to two-hour module. It’s a lot like a MOOC in that there’s not a whole lot of interaction or feedback, but there are discussion forums and it’s very self-paced in that way… so, a one- to two-hour mini-MOOC that really puts demonstrations and activities at the forefront… so we don’t try to convince students about problems with distraction and multitasking just by laying a bunch of research articles on them… I think that’s great if this were a psychology course, but it’s not.
So, we come at it by linking them out to videos, for example, that we were able to choose, that we feel really demonstrate in some memorable ways what gets by us when we aren’t paying attention… and we also give students some research-based tips on how to set a behavioral plan and stick to it… because, just like with so many areas of life, just knowing that something is bad for you is not enough to really change your behavior and get you not to do that thing. So, we have students talking about their own plans and what they do when, say, they’re having a boring moment in class, or they’re really, really tempted to go online while they’re doing homework at home. What kinds of resolutions can they set, or what kinds of conditions can they create, that will help them accomplish that? Things like the software blockers… you set a timer on your computer and it can lock you out of problematic sites… or we learned about a great app called Pocket Points where you actually earn spendable coupon points for keeping your phone off during certain hours. This is students talking to students about things that really concern them and really concern us all, because I think a lot of us struggle with that.

So, we try to do that… and the bigger frame for this is that it is, I feel, a life skill for the 21st century… thinking about how technology is going to be an asset to you and not detract from what you accomplish in your life. What a great time to be reflecting on that, when you’re in this early college career. So, that’s what we try to do with the project… and we’ve had over a thousand students come through. They oftentimes earn extra credit. Our faculty are great about offering small amounts of extra credit for completing this, and we’re just starting to roll out some research showing some of the impacts… and showing, in a bigger way, just how you can go about setting up something like this.

Rebecca: I like that the focus seems to be on helping students with a life skill, rather than using technology as just something to blame or an excuse. We’re in control of our own behaviors, and taking ownership of our behaviors is important, rather than just blaming an object.

Michelle: So, looking at future trends, I would like to see more faculty looking at it in the way that you just described, Rebecca: as a life skill and something that we collaborate on with our students… not laying down the law… because, after all, students are in online environments where we’re not there policing that, and they do need to go out into work environments and further study and things like that. So, that’s what I feel is the best value. For faculty who are looking at this, if they don’t want to… or don’t have the means to… do something really formal like our Attention Matters approach, just thinking about it ahead of time helps… I think nobody can afford to ignore this issue anymore, and whether you go the route of “no tech in my classroom” or “we’re going to use the technology in my classroom” or something in between… just reading over, in a very mindful way, not just the opinion pieces but hopefully also a bit of the research, I think, can help faculty as they go in to deal with this… and, really, to look at it in another way, just to be honest, we also have to consider how much of this is driven by our egos as teachers and how much of it is driven by a real concern for student learning and those student life skills. I think that’s where we can really take this on effectively and make some progress: when we de-emphasize that ego aspect and make sure that it really is about the students.

John: We should note there’s a really nice chapter in your book, Minds Online: Teaching Effectively with Technology, that deals with these types of issues. It was one of the chapters that got our faculty particularly interested in these questions… the extent to which technology should be used in the classroom… and the extent to which it serves as a distraction.

Michelle: I think that really speaks to another thing which I think is an enduring trend… the emphasis on really supporting the whole student in success and what we’ve come to call academic persistence… kind of a big umbrella term that has to do with not just succeeding in a given class, but also being retained… coming back after the first year. As many leaders in higher education point out, this is also a financial issue: it costs a lot less to hang on to the students you have than to recruit more students to replace the ones who are lost. This is, of course, yet another really big shift in mindset of our own, because, after all, we used to measure our success by “Hey, I flunked this many students out of this course” or “Look at how many people have to switch into different majors… our major is so challenging…”

So, we really have turned that thinking around, and this does include faculty now. I think that we used to see those silos. We had that very narrow view of “I’m here to convey content. I’m here to be an expert in this discipline, and that’s what I’m gonna do…” and now, sure, we want to think about things like: Do students have learning skills? Do they have metacognition? Are they happy and socially connected at the school? Are they likely to be retained, so that we can have this robust university environment?

We had people for that, right? It used to be somebody else’s job… student services or upper administration. They were the ones who heard about that, and now I think, on both sides, we really are changing our vision. More and more forward-thinking faculty are saying “You know what? Besides being a disciplinary expert, I want to become at least conversant with learning science. I want to become at least conversant with the science of academic persistence…” There is a robust early literature on this, and that’s something that we’ve been working on at NAU over this past year as well… kind of an exciting newer project that I like very much. We’ve started to engage faculty in a new faculty development program called Persistence Scholars, and this is there to really speak to people’s academic and evidence-based side, as well as to get them to engage in some perspective-taking around things like the challenges that students face and what it is like to be a student at our institution. We do some really selected readings in the area. We look at things like mindset… belongingness… these are really hot areas in that science of persistence… in that emerging field. But we have to look at it in a really integrated way.

It’s easy for people to say “just go to a workshop on mindset,” and that’s a nice concept, but we wanted to think about it in this bigger picture… to really know: What are some of the strengths of that, and why? Where do these concepts come from? What’s the evidence? That’s another real trend, and I think we will see more academic leaders and people in staff and support roles all over universities needing to know more about learning science. There are still some misconceptions that persist, as we’ve talked about. We’re making progress in getting rid of some of these myths around learning, but I will say… I’m not gonna name any names… every now and again I will hear from somebody who says “Oh well, we need to match student learning styles” or “Digital natives think differently, don’t you know?” and I have to wonder about that. I mean, these are oftentimes individuals who have the power to set the agenda for learning all over a campus. Faculty need to be in the retention arena, and I think that leaders need to be in the learning science arena. The boundaries are breaking down, and it’s about time.

Rebecca: One of the things that I thought was really exciting with the reading groups that we’ve been having on our campus… we started with your book, but we’ve read Make it Stick and Small Teaching since… is that a lot of administrators in a lot of different kinds of roles engaged with us in those reading groups; it wasn’t just faculty. There was a mix of faculty, staff, and some administrators, and I think that that was really exciting. For people who don’t have the luxury of being in your Persistence Scholars program, what would you recommend they read to get started learning more about the science of persistence?

Michelle: Even after working with this for quite some time, I really love the core text that we have in that program, which is Completing College by Vincent Tinto. It’s got a great combination of a passionate and very direct writing style. So, there’s no ambiguity; there’s not a whole lot of “on the one hand this and on the other hand that.” It’s got an absolutely stellar research base, which faculty of course appreciate… and it has a great deal of concrete examples. So, in that book, they talk about “Okay, what does it mean to give really good support to first-semester college students? What does that look like?” and they’ll go out and cite very specific cases: “Here’s a school and here’s what they’re doing… here’s what their program looks like… here’s another example that looks very different but gets at the same thing.” So, that’s one of the things that really speaks to our faculty… that they really appreciated and enjoyed.

I think, as well, we’ve gotten good feedback about work that’s come out of David Yeager and his research group on belongingness and lay theories. Lay theories is maybe a counterintuitive term for a body of ideas about what students believe about academic success: why some people are successful and others are not, and how those beliefs can sometimes be changed through relatively simple interventions… and, when that happens, we see great effects, such as the narrowing of achievement gaps between students from more privileged and less privileged backgrounds… and that’s something that, philosophically, many faculty really, really care about, but they’ve never had the chance to really learn “Okay, how can I actually address something like that with what I’m doing in my classroom, and how can I really know that the things that I’m choosing do have that great evidence base…”

John: …and I think that whole issue is more important now and is very much a social justice issue, because, with the increase we’ve seen in college costs, people who start college and don’t finish are saddled with an awfully high burden of debt. The rate of return to a college degree is the highest that we’ve ever seen, and college graduates end up not only getting paid a lot more, but they end up with more comfortable jobs and so forth… and if we really want to move people out of poverty and try to reduce income inequality, getting more people into higher education, and successfully completing higher education, is a really important issue. I’m glad to see that your institution is doing this so heavily, and I know a lot of SUNY schools have been hiring student success specialists. At our institution, they’ve been very actively involved in the reading group, so that message is spreading; I think some of them started with your book and then moved to each of the others. So, they are using evidence-based practices in working with the students who are struggling the most… and I think that’s becoming more and more common, and it’s a wonderful thing.

Rebecca: So, I really liked, Michelle, that you were talking about faculty getting involved in retention and this idea of helping students develop persistence skills, and also administrators learning more about evidence-based practices. There are these grassroots movements happening in both of these areas. Can you talk about some of the other grassroots movements or efforts that faculty are making to engage students and capture their attention and their excitement for education?

Michelle: Right, and here I think a neat thing to think about is that it’s the big ambitious projects… the big textbook replacement projects or the artificial-intelligence-informed adaptive learning systems… those are the things that get a lot of the press and end up in The Chronicle of Higher Education that we read about… But, outside of that, there is this very vibrant, grassroots-led community developing different technologies and approaches. It really goes back a while. I mean, the MERLOT database that I talk about in Minds Online has for years been a trove of well-hidden gems that take on one thing in a discipline and come at it from a way that’s not just great from a subject-matter perspective, but brings up new creative approaches. In the MERLOT database, for example, there’s a great tutorial on statistical significance and the interrelationship between statistical significance and issues like sample sizes. You know, that’s a tough one for students, but it has a little animation involving a horse and a rider that really turns it into something that’s very visual… that’s very tangible… and it’s really actually tying into analogies, which is a well-known cognitive process that can support learning something new. There is something on fluid pressures in the body that was created for nursing students by nurses, and it’s got an analogy of a soaker hose that is really fun and is actually interactive. So, those are the kinds of things. The PhET project, P-h-E-T, which comes out of the University of Colorado, has been around for a while… again, faculty-led, and a way to have these very useful interactive simulations for concepts in physics and chemistry. So, that’s one. CogLab, that’s an auxiliary product that I’ve used for some time in hundred-level psychology courses, simulates very famous experimental paradigms, which are notoriously difficult to describe on the page for cognitive psychology students.
That started out many years ago as a project that very much has this flavor of “We have this need in our classroom. We need something interactive. There’s nothing out there. Let’s see what we can build.” It has since then picked up and turned into a commercial product, but that’s the type of thing that I’m seeing out there.

Another thing that you’ll definitely hear about, if you’re circulating and hearing about the latest projects, is virtual reality for education. With this, it seems like, unlike just a few years ago, almost everywhere you visit you’re going to hear “Oh, we’ve just set up a facility. We’re trying out some new things.” This is something that I also heard about when I was talking to people over in China. So, this is an international phenomenon. It’s going to pick up steam and definitely go some places.

What also strikes me about that is just how many different projects there are. Just when you’re worried that you’re going to be scooped because somebody else is going to get there first with their virtual reality project, you realize you’re doing very, very different things. So, I’ve seen it used, for example, in a medical application to increase empathy among medical students… and I took in a six- or seven-minute demonstration that was just really heart-rending, simulating the patient experience with a particular set of sensory disorders… and at Northern Arizona University we have a lab that is just going full-steam in coming up with educational applications, such as an interactive organic chemistry tutorial that is just fascinating. We actually completed a pilot project and are planning to gear up a much larger study next semester looking at the impacts of this. So, this is really taking off for sure.

But, I think there are some caveats here. We still really need some basic research on this… not just what we should be setting up and what the impacts are, but how does this even work? In particular, what I would like to research in the future, or at least see some research on, is what kinds of students… what sort of student profile… really get the most out of virtual reality for education. Because, amidst all the very breathless press that’s going on about this now and all the excitement, we do have to remember this is a very, very labor-intensive type of resource to set up. You’re not just going to go home and throw something together for the next week. It takes a team to build these things and to complete them as well. If you have, say, a 300-student chemistry course (which is not atypical at all… these large courses), you’re not going to just have all of them spend hours and hours and hours doing this, even with a fairly large facility. It’s a very hands-on thing to guide them through this process, to provide the tech support, and everything else.

So, I think we really need to know how we can best target our efforts in this area, so that we can build the absolute best with the resources we have, and maybe even target the students who are most likely to benefit. Those are some of the things that we just need to know about this. So, it’s exciting for somebody like me who’s in the research area. I see this as a wonderful open opportunity… but those are some of the real crossroads we’re at with virtual reality right now.

Rebecca: I can imagine there’s a big weighing that would have to happen in terms of the expense, time, and resources needed to start up versus what that might save in the long run. I can imagine that if it’s a safety issue… practicing skills like saving people’s lives in virtual reality so that nobody is in danger… spending the resources could be a really good investment… or if travel would just be way too expensive to bring a bunch of students to a particular location, but you could go there virtually… it seems like it would be worth the start-up costs. Those are just two ideas off the top of my head where it would make sense to spend all of that resource and time.

John: …and equipment will get cheaper. Right now, it’s really expensive for computers that have sufficient speed and graphics processing capability, and the headsets are expensive, but they will come down in price. As you said, though, it’s still typically one person and one device… so it doesn’t scale quite as well as a lot of other tools, at least not at this stage.

Rebecca: From what I remember, Michelle, you wrote a blog post about a virtual reality experience that you had. Can you share that experience, and maybe what stuck with you from it?

Michelle: Right. So, just as I was starting to collaborate with our incredible team at the immersive virtual reality lab at NAU, one of the things I was treated to was about an hour and a half in the virtual reality setup that they have, to explore some of the things that they had… Giovanni Castillo, by the way, is the creative director of the lab, and he’s the one who was so patient with me through all this. We tried a couple of different things and, of course, there’s such a huge variety of different things that you can do.
There are a few things out there like driving simulators that are kind of educational… kind of entertainment… but he was just trying to give me, first of all, a view of those… and I had to reject a few of them initially, I will say, because I am one of the individuals who tends to be prone to motion sickness. So, that limits what I can personally do in VR, and that is yet another thing that we’re gonna have to figure out. At least informally, what we hear is that women in particular tend to experience more of this. So, I needed, first of all, to go to a very low-motion VR. I wasn’t gonna be whizzing through these environments. That was not going to happen for me. So, we did something that probably sounds incredibly simplistic, but it just touched me to my core… which is getting to play with Google Earth. You can spin the globe and either just pick a place at random or… what Giovanni told me is “You know, I’ve observed that, when people have an opportunity to interact with Google Earth, they all either go to where they grew up, or they’ll go to someplace that they have visited recently or plan to visit.” So, I went to a place that is very special to me, and maybe it doesn’t fit into either one of those categories neatly, but it’s my daughter’s university… her school… and I should say that this is also a different thing for me, because my daughter goes to school in Frankfurt, Germany… an institute that is connected to a museum. So, I had only been to part of the physical facility… the museum itself… and it was a long time ago… and part of it was closed for the holiday. So, this was my opportunity to go there and explore what it looks like all over… and so, that was an emotional experience for me. It was a sensory experience… it was a social one… because we were talking the whole time… and he’s asking me questions: what kinds of exhibits do they have here… what’s this part of it? So, that was wonderful. It really did give me a feel for, alright, what is it actually like to be in this sort of environment?

I’m not a gamer. I don’t have that same background that many of our students have. So, it got me up to speed on that… and it did show me how just exploring something that is relatively simple can really acquire a whole new dimension in this kind of immersive environment. Now, the postscript that I talked about in that blog post was what happened when I actually visited there earlier in the year. I had this very strange experience that human beings have never had before… I don’t know whether to call it déjà vu or what… of going to the setting and walking around the same environment, seeing the same lighting and all that sort of stuff that was there in the virtual reality environment… but this time, of course, with real human beings in it, and the changes… the little subtle changes that take place over time, and so forth.

So, how does it translate into learning? What’s it going to do for our students? I just think that time is going to tell. It won’t take too long, but I think that these are things we need to know. But, sometimes just getting in and being able to explore something like this can really put you back in touch with the things you love about educational technology.

Rebecca: I think one of the things that I’m hearing in your voice is the excitement of experimenting and trying something… and that’s, I think, encouragement for faculty in general… to just put yourself out there and try something out, even if you don’t have something specific in mind for what you might do with it. Experiencing it might give you some insight later on. It might take some time to have an idea of what you might do with it, but having that experience, you understand it better… it could be really useful.

John: …and that’s something that could be experienced on a fairly low budget with just your smartphone and a Google Cardboard viewer or something similar. Basically, it’s a seven- to twelve-dollar addition to your phone and you can have that experience… because there are a lot of 3D videos and 3D images out there on Google Earth as well as on YouTube. So, you can experience other parts of the world and cultures before visiting… and I could see that being useful in quite a few disciplines.

Rebecca: So, we always wrap up with asking what are you going to do next?

Michelle: I continue to be really excited about getting the word out about cognitive principles and how we can flow those into teaching… face-to-face, with technology… everything in between. So, that’s what I continue to be excited about… leveraging cognitive principles with technology and with just rethinking our teaching techniques. I’m going to be speaking at the Magna Teaching with Technology Conference in October, so I’m continuing to develop some of these themes… and I’m very excited to be able to do that. We’re also right now in the early stages of another really exciting project that has to do with what we call neuromyths… So, that may be a term that you’ve run across in some of your reading. It’s something that we touched on a few times, I think, in our conversation today… the misconceptions that people have about teaching and learning and how those can potentially impact the choices we make in our teaching. So, I’ve had the opportunity to collaborate with an amazing international group of researchers headed up by Dr. Kristen Betts of Drexel University… and I won’t say too much more about it, other than that we have a very robust crop of survey responses that have come in from not just instructors, but also instructional designers and administrators from around the world. So, we’re going to be breaking those survey results down and coming up with some results to roll out, probably early in the academic year, and we’ll be speaking about that at the Accelerate conference, most likely in November. That’s put on by the Online Learning Consortium. So, we’re right in the midst of that project, and it’s going to be so interesting to see: What has the progress been? What neuromyths are still out there, and how can they be addressed by different professional development experiences? We’re also continuing to work on the Persistence Scholars program on academic persistence.
So, we’ll be recruiting another cohort of willing faculty to take that on in the fall at Northern Arizona University. I’m going to continue to collaborate with, and hear from, John and his research group with respect to the metacognitive material that they’re flowing into foundational coursework and ways to get students up to speed with a lot of critical metacognitive knowledge. So, we’re going to work on that too… and I like to keep up my blog and work on, shall we say, a longer writing project, but we’ll have to stay tuned for that.

Rebecca: Sounds like you need to plan some sleep in there too.

[LAUGHTER]

John: Well, it’s wonderful talking to you, and you’ve given us a lot of great things to reflect on and to share with people.

Rebecca: Yeah. Thank you for being so generous with your time.

John: Thank you.

Michelle: Oh, thank you. Thanks so much. It’s a pleasure, an absolute pleasure. Thank you.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts, and other materials on teaforteaching.com. Theme music by Michael Gary Brewer. Editing assistance from Nicky Radford.

30. Adaptive Learning

Do your students arrive in your classes with diverse educational backgrounds? Does a one-size-fits-all instructional strategy leave some students struggling and others bored? Charles Dziuban joins us in this episode to discuss how adaptive learning systems can help provide all of our students with a personalized educational path that is based on their own individual needs.

Show Notes

In order of appearance:

Transcript

Coming soon!

26. Assessment

Dr. David Eubanks created a bit of a stir in the higher ed assessment community with a November 2017 Intersection article critiquing common higher education assessment practices. This prompted a discussion that moved beyond the assessment community to a broader audience as a result of articles in the New York Times, The Chronicle of Higher Education, and Inside Higher Ed. In today’s podcast, Dr. Eubanks joins us to discuss how assessment can help improve student learning and how we can be more efficient and productive in our assessment activities.

Dr. Eubanks is the Assistant Vice President for Assessment and Institutional Effectiveness at Furman University and a Board Member of the Association for the Assessment of Learning in Higher Education.

Show Notes

  • Association for the Assessment of Learning in Higher Education (AALHE)
  • Eubanks, David (2017). “A Guide for the Perplexed.” Intersection. (Fall) pp. 14-13.
  • Eubanks, David (2009). “Authentic Assessment” in Schreiner, C. S. (Ed.). (2009). Handbook of research on assessment technologies, methods, and applications in higher education. IGI Global.
  • Eubanks, David (2008). “Assessing the General Education Elephant.” Assessment Update. (July/August)
  • Eubanks, David (2007). “An Overview of General Education and Coker College.” in Bresciani, M. J. (2007). Assessing student learning in general education: Good practice case studies (Vol. 105). Jossey-Bass.
  • Eubanks, David (2012). “Some Uncertainties Exist.” in Maki, P. (Ed.). (2012). Coming to terms with student outcomes assessment: Faculty and administrators’ journeys to integrating assessment in their work and institutional culture. Stylus Publishing, LLC.
  • Gilbert, Erik (2018). “An Insider’s Take on Assessment.” The Chronicle of Higher Education. January 12.
  • Email address for David Eubanks: david.eubanks@furman.edu

Transcript

Rebecca: When faculty hear the word “assessment,” do they: (a) cheer, (b) volunteer, (c) cry, or (d) run away?

In this episode, we’ll review the range of assessment activities from busy work to valuable research.

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

Rebecca: Today’s guest is David Eubanks, the Assistant Vice President for Assessment and Institutional Effectiveness at Furman and Board Member of the Association for the Assessment of Learning in Higher Education. Welcome, David.

John: Welcome.

David: Thank you. It’s great to be here. Thanks for inviting me.

John: Today’s teas are… Are you drinking tea?

David: No, I’ve been drinking coffee all day.

John: Ok, that’s viable.

Rebecca: We’ll go with that. We stop fighting it at this point.

David: Was I supposed to?

John: Well, it’s the name of the podcast…

David: Oh, oh, of course! No, I’m sorry. I’ve been drinking coffee all day… did not do my homework.

Rebecca: I’m having a mix of Jasmine green tea and black tea.

John: I’m drinking blackberry green tea.

David: I do have some spearmint tea waiting for me at home if that counts.

John: Okay. That works.

Rebecca: That sounds good. It’s a good way to end the day.

John: How did you get interested in and involved with assessment?

David: I wasn’t interested, I wanted nothing to do with it. So I was in the Math department at Coker College… started in 1991… and then the accreditation cycle rolls around every 10 years. So, I got involved in sort of the department level version of it, and I remember being read the rules of assessment as they existed then… and we wrote up these plans…. and I could sort of get the idea… but I really didn’t want much to do with it. This is probably my own character flaw. I’m not advocating this, I’m just saying this is the way it was. So I wrote this really nice report, and the last line of the report was something like: “it’s not clear who’s going to do all this work.” [LAUGHS] Because it sure wasn’t gonna be me… at least that was my attitude. But as the time went on …

Rebecca: I think that’s an attitude that many people share.

David: Right, yeah. As time went on, and I began to imbibe from the atmosphere of the faculty and began to complain about things, I got more involved in the data work of the university, because some of the things I was wanting to complain about had to do with numbers, like financial aid awards and stuff like that. So I ended up getting into institutional research, which was kind of a natural match for my training in Math… and I found that work really interesting… gathering numbers and trying to prognosticate about the future. But the thing is… at a small college institutional research is strongly associated with assessment, just because of the way things work… and so the next time accreditation rolls around, guess who got put in charge of accreditation and assessment. [LAUGHS] So, I remember taking the manual home with all these policies that we were supposed to be adhering to… and spreading everything out and taking notes and reading through this stuff and becoming more and more horrified. If it was a cartoon, my hair would have been standing up… and writing to the President saying: “You know… we’re not doing a lot of this… or if we are, I don’t know about it.” So that was sort of my introduction to assessment. And then, it was really at that point that I had to feel some responsibility to the administration for the whole college and making sure we were trying to follow the rules. So it evolved from being faculty and not wanting anything to do with it, to turning to the dark side and being an administrator and suddenly having to convince other faculty that they really needed to get things done. So that’s sort of the origin myth.

Rebecca: So, sort of a panic attack followed by…. [LAUGHTER]

David: Well yeah… multiple panic attacks. [LAUGHTER]

Rebecca: Yeah.

David: And then, so over the years as I got more involved with the assessment community, I started going to conferences and doing presentations and writing papers, and eventually I got on the board of the AALHE, which is the national professional association for people who work in assessment… and started up a quarterly publication for them, which is still going… and so I think I have a pretty good network now within the assessment world… and have a reasonably good understanding of what goes on nationwide, but a particularly good understanding in the South, because I also participate in accreditation reviews and so forth.

Rebecca: So like you, I think many other faculty cringe when they hear “assessment” when it’s introduced to them. Why do you think assessment has such a bad rap?

David: Yeah, that’s the thing I’d like to talk about most. Well, part of the problem when we talk about it, and I think you’ll see this when you look at the articles in The Chronicle, The New York Times, and Inside Higher Ed, is that it means different things, and people can very easily start talking across each other, rather than to each other… and I think in sort of a big picture… if you imagine the Venn diagram from high school math class, there are three circles. One circle is kind of the teaching and learning stuff that individual faculty members get interested in at the course level, or maybe a short course-sequence level… their cluster of stuff… and then another one of those circles is the curriculum level, where we want to make sure that the curriculum makes sense and sort of adds up to something… that the courses, if they’re calculus one, two, three, actually act like a cohesive set of stuff… and then there’s the third circle in the diagram, and that’s where the problem is, I think. In the best world, we can do research… we can do real educational research on how students develop over time and how we affect them with teaching. But if we dilute that too much… if we back off of actual research standards and water it down to the point where it’s just very, very casual data collection… it’s still okay if we treat it like that… but I think where the rub comes in… because of some expectations for many of us in accreditation… is that we collect this really informal data and then have to treat it as if it’s really meaningful, rather than using our innate intuition and experience as teachers and our experience with students. So I think the particular… the rock in the shoe, if you will… is the sort of forced and artificial piece of assessment that is most associated with the accreditation exercises.

John: Why does it break down that way? Why do we end up getting such informal data?

David: Well, educational research is hard for one thing. It’s a big fuzzy blob. If you think about what happens in order for a student to become a senior and write that senior thesis… just imagine that scenario for a minute… and we’re going to try to imagine that the quality of that senior thesis tells us something about the program the student’s in. Well, the student had different sequences of courses than other students, and in many cases… this wouldn’t apply to a highly structured program… for many of us, the students could have taken any number of courses… could have maybe double majored in something else… even within the course selections could have had different professors at different times of day… in different combinations… and so forth. So it’s very unstandardized… and bringing to that, the student then has his or her own characteristics… like interests and just time limitations, for example… Maybe the student’s got a job, or maybe the student’s not a native English speaker or something. There’s all sorts of traits of the individual student. Anyway, the point is that none of this is standardized. So when we just look at that final paper the student’s written, there are so many factors involved that we can’t really say, especially with very small amounts of data, what actually caused what. And my argument is that the professors in that discipline, if they put their heads together and talk about what’s the product we’re getting out and what its likely limitations or strengths are, are in a really good position to make some informed subjective judgments that are probably much higher quality than some of the forced, limited assessments… which are usually forced into a numerical scale, like rubric ratings or maybe test scores or something like that. So I’m giving you kind of a long-winded answer, but I think the ambition of the assessment program is fine. It’s just that the execution within many, many programs doesn’t allow that philosophy to be actually realized.

Rebecca: If our accreditation requirements require us to do certain kinds of assessment and we do the fluffy version, what’s the solution in having more rigorous assessment? or is it that we treat fluffy data as fluffy data and do what we can with that?

David: Right, well as always, it’s easier I think to point out a problem than it is to solve it necessarily. But I do have some ideas… some thoughts about what we could do that would give us better results than what we’re getting now. One of those is, if we’re going to do research, let’s do research. Let’s make sure that we have large enough samples… that we understand the variables and really make a good effort to try to make this thing work as research… and even when we do that, probably the majority of time, it’s going to fail somehow or another because it’s difficult. But at least, we’ll learn stuff that way.

Rebecca: Right.

David: Another way to think of it is if I’ve got a hundred projects with ten students in each one and we’re trying to learn something in these hundred projects, that’s not the same thing as one project with a thousand students in it, right?

Rebecca: Right.

David: It’s why we don’t all try to invent our own pharmaceuticals in our backyards. We let the pharmaceutical companies do that. It’s the same kind of principle. And so we can learn from people… maybe institutions who have the resources and the numbers… we could learn things about how students learn in the curriculum that are generalizable. So that’s one idea… if we’re going to do research, let’s actually do it. Let’s not pretend that something that isn’t research actually is. Another is a real oddity… That is, somehow way back when, somebody decided that grades don’t measure learning. And this has become a dogmatic item of belief within much of the assessment community in my experience. It’s not a hundred percent true, but at least in action… and for example, I think there’s some standard advice you would get if you were preparing for your accreditation report: “Oh, don’t use grades as the assessment data because you’ll just be marked down for that.” But in fact, we can learn an awful lot from just using the grades that we automatically generate. We can learn a lot about who completes courses and when they complete them. A real example that’s in that “Perplexed” paper is… looking at the data, it became obvious that waiting to study a foreign language is a bad idea. The students who don’t take the foreign language requirement the first year they arrive at Furman look like, from the data, they’re disadvantaged. They get lower scores if they wait even a year. And this is exacerbated, I believe, by students who are weaker to begin with waiting. So those two things in combination, they’re sort of the kiss of death. And this has really nothing to do with how the course is being taught, it’s really an advising process problem… and if we misconstrue it as a teaching problem, we could actually do harm, right? If we took two weeks to do remedial Spanish or whatever when we don’t really need to be doing that, we’re sort of going backwards.

Rebecca: We are blaming the faculty members for the things that aren’t a faculty member’s fault necessarily.

David: Exactly, right. What you just said is a huge problem, because much of the assessment… these little pots of data that are then analyzed are very often analyzed in a very superficial way… where, for example, they don’t take into account the expressed academic ability of the students who are in that class, or whatever it is you’re measuring. So if one year you just happen to have students who were C students in high school, instead of A students in high school, you’re going to notice a big dip in all the assessment ratings just because of that. It has nothing to do with teaching necessarily. And at the very least, we should be taking that into account, because it explains a huge amount of the variance that we’re going to get in the assessment ratings. Better students get better assessment ratings, it’s not a mystery.

John: So, should there be more controls for student quality in studies of student performance over time? Or should there be some value-added type approaches used for assessment, where you give students pre-tests and then measure the post-test scores later? Would that help?

David: Right, so I think there’s two things going on that are really nice in combination. One is the kind of information we get from grades, which mostly tells us how hard did the student work? how well were they prepared? how intelligent they are or whatever…. However you want to describe it. It’s kind of persistent. At my university the first year grade average of students correlates with their subsequent year’s grade average at 0.79. So it’s a pretty persistent trait. But one disadvantage is that, let’s say Tatiana comes in as an A+ student as a freshman, she’s probably going to be an A+ student as a senior. So we don’t see any growth, right? If we’re trying to understand how students develop, the grades aren’t going to tell us that.

John: Right.

David: So we need some other kind of information that tells us about development. And I’ve got some thoughts on that and some data on that if you want to talk about it, but it’s a more specialized conversation maybe then you want to have here.

John: Well, if you can give us an overview on that argument.

Rebecca: That sounds really interesting, and I’d like to hear.

David: Okay. Well, the basic idea is a “wisdom of the crowds” approach, in that when things are really simple… if we want to know if the nursing student can take a blood pressure reading… then (I assume, I’m not an expert on this, but I assume) that’s fairly cut and dried, and we could have the student do it in front of us and watch them and check the box and say, “Yeah, Sally can do that.” But for many of the things we care about, like textual analysis or quantitative literacy or something, it’s much more complicated and very difficult to reduce to a set of checkboxes and rubrics. So, my argument is that for these more complex skills and things we care about, the subjective judgment of the faculty is a really valuable piece of information. So what I do is, I ask the faculty at the end of the semester, for something like student writing (because there’s a lot of writing across the curriculum): “How well is your student writing?” and I ask them to respond on a scale that’s developmental. At the bottom of the scale is “not really doing college-level work yet.” That’s the lowest rating… the student’s not writing at a college level yet. We hope not to see any of that. And then at the upper end of the scale is “student’s ready to graduate.” “I’m the professor. According to my internal metric of what a college student ought to be able to do, this student has achieved that.” The professors in practice are kind of stingy with that rating… but what it does is then it creates another data set that does show growth over time. In fact, I had a faculty meeting yesterday… showed them the growth over time in the average ratings of that writing effectiveness scale over four years. If I break it up by the students’ entering high school grades, those are three parallel lines, stacked with high grades, medium grades, and low grades. So those two pieces together… grade-earning ability and professional subjective judgment after a semester of observation… seem to be a pretty powerful combination. I can send you the paper on that if you’re interested.

John: Yes.

Rebecca: Yeah, that will be good. Do you do anything to kind of norm how faculty are interpreting that scale?

John: Inter-rater reliability.

David: Right, exactly. That’s a really good question, and reliability is one of the first things I look at… and that question by itself turns out to be really interesting. I think when I read research papers, it seems like a lot of people think of reliability as this checkbox that I have to get through in order to talk about the stuff I really want to talk about… because if it’s not reliable, then I don’t have anything I need to talk about… and I think that’s unfortunate, because just the question of “what’s reliable and what’s not” generates lots of interesting questions by itself. So, I can send you some stuff on this too, if you like. But, for example, I got this wine rating data set where these judges blind taste flights of wine and then they have to give it a 1 to 4 scale rating. And this guy published a paper on it, and I asked for his data. And so I was able to replicate his findings, which were that what the wine tasters most agreed on was when wine tastes bad. If it’s yucky, we all know it’s yucky. It’s at the upper level, when it starts to become an aesthetic judgment, that we have trouble agreeing. The reason this is interesting is because usually reliability is just one number: you ask how reliable the judges’ rating is and you get .5. That’s it. That’s all the information you get: it’s .5. So what this does is it breaks it down into more detail. So when I do that with the writing ratings, what I find is that our faculty, at this moment in time, are agreeing more about what’s “ready to graduate”… and not really about that crucial distinction between not doing college-level writing and intro college-level writing.
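David’s point that a single reliability coefficient hides where raters actually disagree can be sketched in a few lines of code. This is a toy illustration with invented ratings (not the wine study’s data or his actual analysis): overall agreement lands at .5 even though the raters agree well at the bottom of the scale and poorly at the top.

```python
# Toy illustration: percent agreement between two raters, overall and
# broken down by category, on a 1-4 scale. The data below is invented
# to echo the pattern described: near-total agreement on "bad" (1),
# and disagreement at the aesthetic upper end of the scale.
from collections import defaultdict

def agreement_by_category(pairs):
    """pairs: (rater_a, rater_b) ratings of the same items.
    Returns overall agreement and agreement per rater_a category."""
    overall = sum(a == b for a, b in pairs) / len(pairs)
    hits, totals = defaultdict(int), defaultdict(int)
    for a, b in pairs:
        totals[a] += 1
        hits[a] += (a == b)
    return overall, {c: hits[c] / totals[c] for c in totals}

pairs = [(1, 1), (1, 1), (1, 1), (1, 2),   # "yucky": mostly agreed
         (3, 4), (4, 3), (4, 4), (3, 2)]   # upper end: contested
overall, per_cat = agreement_by_category(pairs)
# overall is 0.5, but per_cat[1] is 0.75 while per_cat[3] is 0.0
```

The single number (.5) is all a conventional report would show; the per-category breakdown is where the interesting conversation starts.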

Rebecca: That’s really fascinating. You would almost think it’d be the opposite.

David: I was astounded by this. And so I got some faculty members together and asked some other faculty members to contribute writing samples… some that they thought were good and some bad… so that I had a clean set to try to test this with, and watched them do it.

Rebecca: Right.

David: So yeah, we got in the room and we talked about this, and what I discovered was not at all what I expected. I expected that students would get marked down on the writing if they had lots of grammar and spelling errors and stuff like that. But we didn’t have any papers like that… even the ones that were submitted as the bad papers didn’t have a lot of grammatical errors. So I think that the standards for what the professors expect from entry-level writers are really high. And because they’re high, we’re not necessarily agreeing on where those lines are… and that’s where the conversation needs to be, for the students’ sake, right? It’s never going to be completely uniform, but just knowing that this disagreement exists is really advantageous, because now we can have more conversations about it.

Rebecca: Yeah, it seems like a great way to involve a teaching and learning center… to have conversations with faculty about what is good writing… what should students come in with… and what those expectations are… so that they start to generate a consensus, so that the assessment tool helps generate the opportunity for developing consensus.

David: Yes, exactly, and I think that’s the best use for assessment: when it can generate really substantive conversations among the faculty who are doing the work of giving the assignments and giving the grades and talking to students.

Rebecca: So, how do we get the rest of the accreditation crowd to be on board with this idea?

David: That’s a really interesting question. I’ve spent some time thinking about that. I think it’s possible. I’m optimistic that we can get some movement in that direction. I don’t think a lot of people are really happy with the current system, because there are so many citations for non-compliance that it’s a big headache for everybody. There are these standards saying every academic program is supposed to set goals… assess whether or not those are being achieved… and then make improvements based on the data you get back. That all seems very reasonable, except that when you get into it and you approach it with this really reductive, positivist mindset, it implies that the data is really meaningful when in many cases it’s not, so you get stuck. And that’s where the frustration is. So I think one approach is to get people to reconsider the value of grades, first of all. And if you can imagine the architecture we’ve set up, it’s ridiculous. So imagine these two parallel lines: on the top we’ve got grades, and then there’s an arrow that leads into course completion… because you have to get at least a D, usually… and then another arrow that leads into retention (because if you fail out of enough classes you can’t come back, or you get discouraged), and that leads to graduation, which leads to outcomes after graduation, like grad school or a career or something. So, that’s one whole line, and that’s been there for a long time. Then under that, what we’ve done is constructed this parallel grading system with the assessment stuff that explicitly disavows any association with any of the stuff on the first line. That seems crazy. What we should have done to begin with is said, “oh, we want to make assessment about understanding how we can assign better grades and give better feedback to students, so they’ll be more successful, so they’ll graduate and have outcomes,” right? That all makes sense. So I think the argument there is to turn the kind of work we’re doing now into a more productive way to feed into the natural epistemology of the institution, rather than trying to create this parallel system, which doesn’t really work very well in a lot of cases.

Rebecca: It sounds to me like what you’re describing is… right now a lot of assessment is decentralized into individual departments… but I think what you’re advocating for is that it becomes a little more centralized, so that you can start looking at these big-picture issues rather than these minuscule little things that you don’t have enough of a data set to study. Is that true?

David: Absolutely, yes, absolutely. Some things we just can’t know without more data, partly because the data that we do get is going to be so noisy that it takes a lot of samples to average out the noise. So yes, in fact, that’s what I try to do here… generate reports based on the data that I have that are going to be useful for the whole university, as well as reports that are individualized to particular programs.
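The “more data averages out the noise” point is just standard-error arithmetic. A minimal sketch, assuming for illustration a rating noise of about one rubric point (an assumed figure, not from the episode):

```python
# Sketch of why small assessment pools are so noisy: the standard error
# of a mean shrinks like 1/sqrt(n). With rating noise of about one
# rubric point (an assumption for illustration), a ten-student major
# gives an estimate ten times fuzzier than a thousand pooled students.
import math

def standard_error(sd, n):
    return sd / math.sqrt(n)

sd = 1.0                               # assumed rating noise
se_small = standard_error(sd, 10)      # one department's majors
se_pooled = standard_error(sd, 1000)   # many programs pooled
# se_small is about 0.32; se_pooled is about 0.032
```

That factor of ten is the difference between a hundred projects with ten students each and one project with a thousand students in it.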

Rebecca: Do you work with individual faculty members on the scholarship of teaching and learning? Maybe there’s something in particular that they’re interested in studying and, given your role in institutional research and assessment, do you help them develop studies and collect the data that they would need to find those answers?

David: Yes, I do when they request it or I discover it. It’s not something that I go around and have an easy way to inventory, because there’s a lot of it going on I don’t know about.

Rebecca: Right.

David: I’d say more of my work is really at the department level, and this part of assessment is really easy. If you’re in an academic department, so much of the time that the faculty meet together gets sucked up with stuff like hiring people, scheduling courses, setting the budget for next year and figuring out how to spend it, selecting your award students… all that stuff can easily consume all the time of all the faculty meetings. So really, just carving out a couple of hours a semester, or even a year, to talk about what it is we’re all trying to achieve, and here’s the information… however imperfect it is… that we know about it, can pay big dividends. I think a lot of times that’s not what assessment is seen as. It’s seen as, “oh, it’s Joe’s job this year to go take those papers and regrade them with a rubric, and then stare at it long enough until he has an epiphany about how to change the syllabus.” That’s a bit of a caricature, but there is a lot of that that goes on.

Rebecca: I think it’s my job this year to… [LAUGHS]

David: Oh, really?

John: In the Art department, yeah. [LAUGHS]

Rebecca: I’m liking what you’re saying because there’s a lot of things that I’m hearing you say that would be so much more productive than some of the things that we’re doing, but I’m not sure how to implement them in a situation that doesn’t necessarily structurally buy into the same philosophy.

John: And I think faculty tend to see assessment as something imposed on them that they have to do and they don’t have a lot of incentives to improve the process of data collection or data analysis and to close the loop and so forth. But perhaps if this was more closely integrated into the coursework and more closely integrated into the program so it wasn’t seen as (as you mentioned) this parallel track, it might be much more productive.

David: Right, and one thing I think we could do is ask for reports on grades. Grade completions… there are all sorts of interesting things latent in grades, and also course registration. For example, I created these reports… imagine a graph that’s got 100, 200, 300, 400 along the bottom axis… and those are the course levels. I wanted to find out when students are taking these courses. So what you’d expect is that the freshmen are taking 100-level courses and the sophomores are taking 200 on average, and so forth, right? But when I created these reports for each major program, I discovered that there were some oddities… that there were cases where 400-level courses were being taken by students who were nowhere near seniors. So I followed up and asked this faculty member what was going on, and it turned out to just be a sort of weird registration situation that doesn’t normally happen, but there were students in that class who probably shouldn’t have been in there. And she said, “Thanks for looking into this, because I’m not sure what to do.” So that sort of thing could be routinely done with the computing power we have now. I think there’s a lot you could ask for that would be meaningful without having to do any extra work, if somebody in the IR or assessment offices is willing to do that.
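A report like the one David describes can be produced from registration data alone. This is a hypothetical sketch (invented data, not Furman’s actual report): average the class standing of enrolled students at each course level and eyeball where the numbers diverge from expectation.

```python
# Hypothetical registration report: average class standing (1=freshman,
# 4=senior) of students enrolled at each course level. A 400-level
# course averaging well under 4 is the kind of oddity worth a follow-up.
from collections import defaultdict

def standing_by_level(enrollments):
    """enrollments: (course_level, student_year) tuples."""
    sums, counts = defaultdict(int), defaultdict(int)
    for level, year in enrollments:
        sums[level] += year
        counts[level] += 1
    return {lvl: sums[lvl] / counts[lvl] for lvl in sorted(counts)}

enrollments = [(100, 1), (100, 1), (200, 2), (300, 3),
               (400, 4), (400, 1), (400, 2)]  # invented data
report = standing_by_level(enrollments)
# report[400] is about 2.33 -- juniors and below in a senior course
```

Nothing here requires new data collection; it is the kind of routine report an IR office could generate from records it already holds.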

Rebecca: That’s a good suggestion.

David: And so in the big picture, how do we actually change the accreditors’ minds? It’s not so much really the accreditors; the accreditors do us a great service, I think, by creating this peer-review system. In my experience it works pretty well. The issue, I think, within the assessment community is that there are a lot of misunderstandings about how this kind of data, these little small pools of data, can be used and what they’re good for. And so what I’ve seen is a lot of attention to the language around assessment during an accreditation review: are the goals clearly stated… it’s almost like “did you use the right verb tense”… but I’ve never seen that literally. [LAUGHTER] No, there’s pages of words: are there rubrics? do the rubrics look right? …all this stuff, and then there’s a few numbers, and then there’s supposed to be some grand conclusion to that. It’s not all like that, but there’s an awful lot of it like this. So if you’re a faculty member stuck in the middle of it, you’re probably the one grading the papers with a rubric that you already graded once. And you tally up those things, and then you’re supposed to figure out something to do with those numbers. So, this culture persists because the reviewers have that mindset that all these boxes have to be checked off. There’s a box for everything except data quality. [LAUGHS] No, literally… if there were a box for data quality, everything would fall apart immediately. So we have to change that culture. We have to change the reviewer culture, and I think one step in doing that is to create a professional organization, or use one that exists. Like in accounting and librarianship: they have professional organizations that set their standards, right? We don’t have anything like that in assessment. We have professional organizations, but they don’t set the standards. The accreditors have grown (accidentally, I think) into the role of being like a professional organization for assessment. They’re not really very well suited for that. And so, if we had a professional organization setting standards for review that acknowledged that the central limit theorem exists, for example, then I think we could have a more rational, self-sustaining, self-governing system… and hopefully get away from causing faculty members to do work that’s unnecessary.

John: I don’t think any faculty members would object to that.

David: Well, of course not. I mean, you know, everybody’s busy… you want to do your research… you’ve got students knocking on the door… you’ve got to prepare for class. And really, it’s not just that we’re wasting faculty members’ time if these assessment numbers that result aren’t good for anything. It’s also the opportunity cost. What could we have done researching course completion that would have, by now, in the last twenty years we’ve been doing this, saved how many thousands of students? You know, there’s a real impact to this, so I think we need to fix it.

John: How have other people in the assessment community reacted to your paper and talks?

David: Yeah, that’s a very interesting question. What has not happened is that nobody’s written me saying: “No, Dave, you’re wrong. Those samples, those numbers we get from rating our students, are actually really high-quality data.” Now, in fact, probably every institution has some great examples where they’re doing really excellent work trying to measure student learning. Like maybe they’re doing a general education study with thousands of students or something. But down at the department level, if you’ve only got ten students, like some of our majors might have, you really can’t do that kind of work. So I haven’t had anybody, even in the response articles, address the question by saying “no, you’re wrong, because the data is really good”… because the other conclusion, if you believe the data is good, is that the faculty are just not using it, right? Or somebody’s not using it. So I guess the rest of the answer to the question is that the assessment community, I think, naturally feels threatened by this and is rallying around that idea… and undoubtedly there are faculty members making their lives harder in some cases. That’s unfortunate. It wasn’t my intention. The assessment director is caught in the middle, because they are ultimately responsible for what happens when the accreditor comes and reviews them… the peer review team, right? So it’s like a very public job performance evaluation when that happens, and, depending on what region you’re in, there are different levels of severity, but it can be a very, very unpleasant experience to have one of those reviews done by somebody who’s got a very checkboxy sort of attitude… not really looking at the big picture and what’s possible, but looking instead at the status of idealistic requirements.

Rebecca: So the way to get the culture shift, in part, requires the accreditation process to see a different perspective around assessment… otherwise the culture shift probably won’t really happen.

David: Right, we have to change the reviewers’ mindset, and that’s going to have to involve the accreditors to the extent that they’re training those reviewers. That’s my opinion.

Rebecca: What role, if any, do you see teaching and learning centers having in assessment and in the research around assessment?

David: Well, that’s one of those circles in my Venn diagram, you recall, and I think it’s absolutely critical for the kind of work that has an impact on students, because it’s more focused than, say, program assessment, which is very often trying to assess the whole program… which, as I noted, has many dimensions to it. Whereas a project that’s like a scholarship of teaching and learning project, or just a course-based project, may have a much more limited scope and therefore has a higher chance of seeing a result that seems meaningful. I don’t think our goal in assessment in that case is to try to prove mathematically that something happened, but to reach a level of belief on the part of those involved that “yes, this is probably a good program that we want to keep doing.” So, I think the assessment office can help by producing generalizable information, or just background information that would be useful in that context, like “here’s the kind of students we are recruiting,” “here’s how they perform in the classroom,” or some other characteristic. For example, we have very few women going into economics. Why is that? Is that interesting to you economists? So those kinds of questions can probably be brought from the bigger data set down to that level.

Rebecca: You got my wheels turning, for sure.

David: [LAUGHS] Great!

Rebecca: Well, thank you so much for spending some of your afternoon with us, David. I really appreciate the time that you spent and all the great ideas that you’re sharing.

John: Thank you.

David: Well, it was delightful to talk to you both. I really appreciate this invitation, and I’ll send you a couple of things that I mentioned. And if you have any other follow-up questions don’t hesitate to be in touch.

Rebecca: Great. I hope your revolution expands.

David: [LAUGHS] Thank you. I appreciate that. A revolution is not a tea party, right?

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts, and other materials on teaforteaching.com. Music by Michael Gary Brewer.

24. Gender bias in course evaluations

Have you ever received comments in student evaluations that focus on your appearance, your personality, or your competence? Do students refer to you as “teacher,” or use an inappropriate title, like Mr. or Mrs., rather than professor? For some, this may sound all too familiar. In this episode, Kristina Mitchell, a political science professor from Texas Tech University, joins us to discuss her research exploring gender bias in student course evaluations.

Show Notes

  • Fox, R. L., & Lawless, J. L. (2010). If only they’d ask: Gender, recruitment, and political ambition. The Journal of Politics, 72(2), 310-326.
  • MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291-303.
  • Miller, Michelle (2018). “Forget Mentors — What We Really Need are Fans.” Chronicle of Higher Education. February 22, 2018.
  • Mitchell, Kristina (2018). “Student Evaluations Can’t Be Used to Assess Professors.” Salon. March 19, 2018.
  • Mitchell, Kristina (2017). “It’s a Dangerous Business, Being a Female Professor.” Chronicle of Higher Education. June 15, 2017.
  • Mitchell, Kristina M.W., & Martin, Jonathan. “Gender Bias in Student Evaluations.” Forthcoming in PS: Political Science & Politics.

Transcript

Rebecca: Have you ever received comments in student evaluations that focus on your appearance, your personality, or your competence? Do students refer to you as “teacher,” or use an inappropriate title, like Mr. or Mrs., rather than professor? For some, this may sound all too familiar. In this episode, we’ll discuss one study that explores bias in course evaluations.

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.
Today our guest is Kristina Mitchell, a faculty member and director of the online education program for the Political Science Department at Texas Tech. In addition to research in international trade and globalization, Kristina has been investigating bias in student evaluations, motherhood and academia, and women in leadership in academia, among other teaching and learning subjects. Welcome, Kristina.

Kristina: Thank you.

John: Today our teas are?

Kristina: Diet Coke. Yes, I’ve got a Diet Coke today.

[LAUGHTER]

Rebecca: At least you have something to drink. I have Prince of Wales tea.

John: …and I have pineapple ginger green tea.

John: Could you tell us a little bit about your instructional role at Texas Tech?

Kristina: Sure, so when I started at Texas Tech six years ago, I was just a Visiting Assistant Professor teaching a standard 2-2 load… so, two face-to-face courses every semester. But our department was struggling with some issues in making sure that we could address the need for general education courses. In the state of Texas, every student graduating from a public university is required to take two semesters of government (we lovingly call it the “Political Science Professor Full Employment Act”), and so what ends up happening at a university like Texas Tech, with almost forty thousand students, is that we have about five thousand students every semester who need to take these courses… and, unless we’re going to teach them in the football stadium, it became really challenging to try and meet this demand. Students were struggling to even graduate on time, because they weren’t able to get into these courses. So, I was brought in, and my role was to oversee an online program in which students would take their courses online asynchronously. They log in, complete the coursework on their own time (provided they meet the deadlines), and I’m in a supervisory role. My first semester doing this, I was the instructor of record, I was managing all of the TAs, and I was writing all the content, so I stayed really busy with that many students, working all by myself. But now we have a team of people: a co-instructor, two course assistants, and lots of graduate students. So, I just kind of sit at the top of the umbrella, if you will, and handle the high-level supervisory issues in these big courses.

John: Is it self-paced?

Kristina: It’s self-paced with deadlines, so the students can complete their work in the middle of the night, or in the daytime or whenever is most convenient for them, provided they meet the deadlines.

Rebecca: So, you’ve been working on some research on bias in faculty evaluations. What prompted this interest?

Kristina: What prompted this was that my co-instructor, a couple of years ago, was a PhD student here at Texas Tech University. He was helping instruct these courses and handle some of those five thousand students… and as we were anecdotally discussing our experiences in interacting with the students, we noticed that the kinds of emails he received were different. The kinds of things that students said or asked of him were different. They seemed to be a lot more likely to ask me for exceptions… to ask me to be sympathetic… to be understanding of the student’s situation… and he just didn’t really seem to find that to be the case. So of course, as political scientists, our initial thought was: “we could test this.” We could actually look and see if this stands up to some more rigorous empirical evaluation, and so that’s what made us decide to dig into this a little deeper.

John: …and you had a nice sized sample there.

Kristina: We did. Right now, we have about 5,000 students this semester. We looked at a set of those courses. We tried to choose the course sections that wouldn’t be characteristically different from the others. So, not the first one, and not the last one, because we thought maybe students who register first might be characteristically different from the students who register later. So, we chose a pretty good-sized sample out of our 5,000 students.

John: …and what did you find?

Kristina: So, we did our research in two parts. The first thing we looked at was the comments that we received. As I said, our anecdotal evidence really stemmed from the way students interacted with us and the way they talked to us. We wanted to be able to measure and do some content analysis of what the students said about us in their course evaluations. So, we looked at the formal in-class university-sponsored evaluation, where the students are asked to give a comment on their professors… and we looked at this for both the face-to-face courses that we teach and the online courses as well. What we were looking for wasn’t whether they think he’s a good professor or a bad professor, because obviously, if we were teaching different courses, there’s not really a way to compare a stats course that I was teaching to a comparative Western Europe course that he was teaching. All we were looking at was: what are the themes? What kinds of things do they talk about when they’re talking about him versus talking about me? What kind of language do they use? We also did the same thing for informal comments and evaluations. So, you have probably heard of the website “Rate My Professors”?

John: Yes.

[LAUGHTER]

Kristina: Yes, everyone’s heard of that website and none of us like it very much… and let me tell you, reading through my “Rate My Professors” comments was probably one of the worst experiences that I’ve had as a faculty member, but it was really enlightening in the sense of seeing what kinds of things they were saying about me… and the way they were talking about me versus the way they were talking about him. So again, maybe he’s just a better professor than I am… so we weren’t looking for positive or negative. We were just looking at the content themes… and the kinds of themes we looked at were: Does the student mention the professor’s personality? Do they say nice… or rude… or funny? Do they mention the professor’s appearance? Do they say ugly… pretty? Do they comment on what he or she is wearing? Do they talk about competence, like how well-qualified their professor is to teach this course? And how do they refer to their professor? Do they call their professor a teacher? Or do they rightfully call their professor a professor? These are the categories where we really noticed some statistically significant differences. We found that my male co-author was more likely to get comments that talked about his competence and his qualifications, and he was much more likely to be called professor… which is interesting because at the time he was a graduate student. He didn’t have a doctorate yet… he wouldn’t really technically be considered a professor… and on the other hand, when we looked at comments that students wrote about me, whether they were positive or negative… nice or mean comments… they talked about my personality. They talked about my appearance, and they called me a teacher. So whether they were saying she’s a good teacher or a bad teacher… that’s how they chose to describe me.

Rebecca: That’s really fascinating. I also noticed, not just students having these conversations, but in the Chronicle article that you published, there was quite a discussion that followed related to this topic as well. In that discussion, there were a number of comments where women responded with empathy and also suggested some strategies to deal with the issues. But then there was at least one very persistent person who kept saying things like: “males also are victimized.” How do we make these conversations more productive, and is there something about the anonymity of these environments that makes these comments more prevalent?

Kristina: I think that’s a really great question. I wish I had a full answer for you on how we could make conversations like this more productive. I definitely think that there’s a temptation for men who hear these experiences to almost take it personally… as though, when I write this article, I’m telling men: “You have done something wrong…” when that’s not really the case… and my co-author, as we were looking at these results and reading each other’s comments so we could code them for the themes we were observing… he was almost apologetic. He was like: “Wow, I haven’t done anything to deserve these different kinds of comments that I’m getting. You’re a perfectly nice woman, I don’t know why they’re saying things like this about you.” So, I think it helps to frame the conversation in terms of what steps we can take, because if I’m just talking about how terrible it is to get mean reviews on Rate My Professors, that’s not really offering a positive: “Here’s a thing that you can do to help me…” or “Here’s something that you can do to advocate for me.” A lot of times, what men who are listening need… maybe they’re feeling helpless… maybe they’re feeling defensive… is a strategy. Something they can do going forward to help women who are experiencing these things.

Rebecca: I noticed that some of the comments in response to your Chronicle article suggested ways to minimize your authoritative role in order to avoid certain kinds of comments, and I wonder if you had a response to that… I think we don’t want to diminish our authoritative roles as faculty members, but sometimes those are the strategies that we’re encouraged to take.

Kristina: I agree. I definitely noticed that a lot of the response to “how can we prevent this from happening” got into “how can we shelter me from these students,” as opposed to “how can we teach these students to behave differently.” I definitely think the anonymous nature of student evaluation comments, Rate My Professors, and internet comments in general plays a role. You definitely notice when you go to an internet comment section that anonymous comments tend to be the worst ones… and so the idea is that it’s not that an anonymous platform causes people to behave in sexist ways; it’s that there’s underlying sexism, and the anonymous nature of these platforms just gives us a way to observe the underlying sexism that was already there. So the important thing is not to take away my role as the person in charge. The important thing is to teach students, both men and women, that women are in positions of authority and that there’s a certain way to communicate professionally. Student evaluations can be helpful. I’ve had helpful comments that helped me restructure my course. So, it’s a way to practice engaging professionally and learning to work with women. My students are going to work for women and with women for the rest of their lives. They need to learn, as college students, how to go about doing that.

John: Do you have any suggestions on how we could encourage that? These attitudes are part of the culture, and in individual courses the impact we have is somewhat limited. What can we do to try to improve this?

Kristina: Well, I’ve definitely made the case, previously, to others on my campus and at other campuses, that the sort of lip-service approach to compliance with things like Title IX isn’t enough. I don’t know if at your institution there’s some sort of online Title IX training, where, you know…

John: Oh, yeah…

Kristina: …you watch a video

Rebecca: Yeah…

Kristina: … you watch a video… you click through the answers… it tells you: “are you a mandatory reporter?” and “what should you do in this situation?” …and I think a lot of people don’t really take that very seriously; it’s just viewed as something to get through so that the university can’t be sued in the case that something happens. So, I don’t think that that’s enough. I think that cultural change and widespread buy-in are a lot more important than making sure everyone takes their Title IX training. Now, in our work, I mentioned that we did this in two parts, and the second part looked at the ordinal evaluations: the 1 to 5 scale, 5 being the best… rank how effective your professor is… and not only are students perhaps not very well qualified to evaluate pedagogical practices, but once again we found that, even in these identical online courses, a man received higher ordinal evaluations than a woman did. What this tells me is that, as a campus culture, we should stop using student evaluations in promotion and tenure, because they’re biased against women… and we should stop encouraging students to write anonymous comments on their evaluations. We should either make them non-anonymous or we should eliminate the comment section altogether, because if we’re providing a platform, it’s almost sanctioning this behavior. If we’re saying, “we value what you write in this comment,” then we’re almost telling students that their sexist comments are okay and valued and we’re going to read them… and that’s not a culture that’s going to foster a positive environment for women.

John: Especially when the administration and department review committees use those evaluations as part of the promotion and tenure review process.

Kristina: Exactly. I mean, when I think about the prospect of my department chair or my dean reading through all the comments that I had to read through when I did this research, I’m pretty sure that he would get an idea of who I am as a faculty member that, to me… maybe I’m biased… but to me, is not very consistent with what actually happens in my classroom.

Rebecca: It’s interesting… we talk about anonymity providing more of a platform for this behavior to surface. But I’ve also had a number of colleagues share their own examples of hate speech and inappropriate sexual language when anonymity wasn’t a veil to hide behind, and increasingly so recently. So I wonder if your research shows any increase in this behavior, and why?

Kristina: We haven’t really looked at this phenomenon over time. That’s just not something that we’ve been able to look at in our data, but I would like to continue to update this study. I definitely think that the current political climate is creating an atmosphere where perhaps people don’t feel that saying things that are racist or sexist is as shameful as they once perceived it to be. There’s definitely a big stigma against identifying yourself as a Nazi, or even Nazi-adjacent, but while that stigma is still there, it seems to be lessening a little bit. I don’t know necessarily that I’ve seen an increase in the kinds of behavior I’m observing from my students, but I will say that a student… an undergraduate student… gave me his number on his final exam this last semester, like I was going to call him over the summer. So, it definitely happens in non-anonymous settings too.

John: There have been a lot of studies that have looked at the effect of gender on course evaluations, and all that I’ve seen so far find exactly the same type of result: that there’s a significant penalty for being female. One of those, if I remember correctly (and I think you referred to it in your paper), was a study with a large collection of online classes, where they changed the gender identity of the presenters randomly in different sections of the course, and they found very different types of responses and evaluations.

Kristina: Yes, that was definitely a study that… I hate to say we tried to emulate, because we were limited in what we could do in terms of manipulating the gender identity of the professor… but I think that their model is just one of the most airtight ways to test this. I agree, this is definitely something that’s been tested before. We’re not the first ones to come to this conclusion… I think our research design is really strong in terms of the identical nature of the online courses. At some point… when I was talking about this research with a woman in political science who’s a colleague of mine… the question was: how many times do we have to publish this before people are going to just believe us that it’s the case? The response tends to be: “Well, maybe women are just worse professors,” or “maybe there’s some artifact in the data that is causing this statistically significant difference.” I don’t know how many times we have to publish it before administrations and universities at large take notice… that this is a real phenomenon… that it’s not just a random artifact of one institution or one discipline.

John: It seems to be remarkably robust across studies. So, what could institutions do to get around this problem? You mentioned the problem with relying on these for review. Would peer evaluation be better, or might there even be a similar bias there?

Kristina: I definitely think peer evaluation is an alternative that’s often presented when we’re thinking of other ways to evaluate teaching effectiveness. Peer evaluation may be subject to the same biases. I don’t know that literature well enough off the top of my head, but I imagine that it could suffer from the same problems, in that faculty members who are women… faculty members of color… faculty members with thick accents, with English that’s difficult to understand… might still be dinged on their peer evaluations. Although we would hope that people who are trained in pedagogy and who’ve been teaching would be less subject to those biases. We could also think about self-evaluation. Faculty members can generate portfolios that highlight their own experiences and say: here’s what I’m doing in the classroom that makes me a good teacher… here are the undergraduate research projects I’ve sponsored… here are the graduate students who’ve completed their doctoral degrees under my supervision… and that’s a way to let the faculty member take the lead in describing his or her own teaching. We could also just weight student evaluations. If we know that women receive 0.4 points lower on a five-point scale, then we could just bump them up by 0.4. None of these solutions are ideal. But I think some of the really sexist and misogynist problems, in terms of receiving commentary that is truly sexually objectifying female professors… that could be eliminated with almost any of these solutions: peer evaluation… removing anonymous comments… self-evaluation… and that’s really the piece that is the most dramatically effective in women being able to experience higher education in the same way that men do.

Rebecca: So, obviously, if there’s this bias in evaluations, then there’s likely to be the same bias within the classroom experience as well. We just don’t necessarily have an easy way of measuring that. But if you’re using teaching strategies that rely on dialogue and interactions with students rather than a “sage on the stage” methodology, I think that in some cases we make ourselves vulnerable, and that does help teaching and learning, because it helps our students understand that we’re not perfect experts in everything… that we have to ask questions and investigate and learn things too… and that can be really valuable for students to see. But we also want to make sure that we don’t undermine our own authority in the classroom either. Do you have any strategies or ideas around that kind of in-class issue?

Kristina: Yeah, I think that the bias against women continues to exist in a standard face-to-face class. One time, when I was teaching a game theory course, I was writing an equation on the board in the last three minutes of class, and we were trying to rush through the first-order conditions and all sorts of things… and I had written the equation wrong. As soon as my students left the classroom, I looked at it and I went, “oh my gosh, I’ve written that incorrectly,” and so the next day, when they came back to class, I felt like I had two choices: we could either just move on and I could pretend like it never happened, or I could admit to them that I taught this wrong… I wrote this wrong. So I did. I told them, “Rip out the page from yesterday’s notes because that formula is wrong,” and I rewrote it on the board… and I got a specific comment in my evaluation saying she doesn’t know what she’s talking about… that she got this thing wrong… and it was definitely something that, while I don’t have experimental evidence that a man doing the same thing wouldn’t be penalized in the same way, to me it very much wrapped into that idea that women are perceived as less qualified than men. So whether it’s because we’re referred to as teachers, or whether it’s because the student evaluations focus more on men’s competence, women are just seen as less likely to be qualified. How many times have you had a male TA and the students go up to the TA to ask questions about the course instead of to you? So, I definitely think it’s difficult for women in the classroom to maintain that authority while still acknowledging that they don’t know everything about everything. No professor could. I mean, we all think we do, of course… So, I think owning the fact that there are things you don’t know is important, no matter what your gender is, but I also try to prime my students. I tell them about the research that I do. I tell them about the consistent studies in the literature that show that students are more likely to perceive and talk about women differently, because I hope that just making them aware that this is a potential issue might adjust their thinking. So that if they start thinking, “wow, my professor doesn’t know what she’s talking about,” they might take a moment and think, “would I feel the same way if my professor were a man?”

Rebecca: I think that’s an interesting strategy. We’ve found that a similar kind of priming of students about evidence-based practices in the classroom works really well… getting students to think differently about things that they might be resistant to… So, I could see how that might work, but I wonder how often men do the same kind of priming on this particular topic.

Kristina: I don’t know. That would be an interesting next experiment to run: if I were to do a treatment in two face-to-face classes and, you know, have a priming effect for a woman teaching a course versus a man, and see if it had any kind of different effect. I think a lot of times men perhaps aren’t even aware that these issues exist. So, talking about the way that women experience teaching college differently… if men aren’t having this conversation in their classroom, it’s probably not because they’re thinking, “oh man, I really hope my female colleagues get bad evaluations so that they don’t get tenure.” It’s probably just because they aren’t really thinking about this as an issue… just because, as a white man in higher education, you very much look like what professors have looked like for hundreds of years… and so it’s just a different experience, and perhaps something that men aren’t thinking about… and that’s why getting the message out there is so important, because so many men want to help. They want to make things more equitable for women, and I think when they’re made aware of it, and given some strategies to overcome it, they will. I’ve definitely found a lot of support in a lot of areas in my discipline.

John: …and things like your Chronicle article are a good place to start too… just making this more visible more frequently and making it harder for people to ignore.

Kristina: I agree. I think being able to speak out is really important, and I know sometimes women don’t want to speak out, either because they’re not in a position where they can, or because they fear backlash from speaking out. So, I think it falls on those of us who are in positions where we can speak up to say these things out loud, so that the voices of women who can’t are still heard.

John: Going back to the issue of creating teaching portfolios for faculty… that’s a good solution. Might it help if faculty can document the achievement of learning outcomes and so forth, so that would free them from the potential of both student bias and perhaps peer bias? If you can show that your students are doing well compared to national norms, or compared to others in the department, might that be a way of getting past some of these issues?

Kristina: I definitely think that’s a great place to start, especially in demonstrating what your strategies are to try and help your students achieve these learning outcomes. I always still worry about student-level characteristics that are going to affect whether students can achieve learning outcomes or not. Students from disadvantaged backgrounds… students from underrepresented groups… students who don’t come to class or who don’t really care about being in class… these are all students who aren’t going to achieve the learning outcomes at the same rate as students who come to class… who are from privileged backgrounds… and so putting it on the professor alone to make sure students achieve those learning outcomes can still suffer from factors that aren’t attributable to the professor’s behavior.

John: As long as that’s not correlated across sections, though, that should get swept out. As long as the classes are large enough to get reasonable power.

Kristina: Yeah, absolutely. I think it’s definitely time for more evaluation of how useful these measures are. I know there have been a lot of articles… a New York Times op-ed, and I think there was one in Inside Higher Ed… really questioning some of these assessment metrics. So, I think the time is now to really dig into these and figure out what they’re really measuring.

Rebecca: You’ve also been studying bias related to race and language. Can you talk a little bit about this research?

Kristina: Yes, so this is a piggyback project. After I finished the gender bias paper, what I really wanted to do was get into race, gender, and accented English, because I think it’s not only women who are suffering when we rely on student evaluations; it’s people of different racial and ethnic groups… it’s people whose English might be more difficult to understand. What we were able to do in this work is control for everything. So, we taught completely identical online courses, and I didn’t even allow the professors to interact with the students via email. Like Cyrano de Bergerac, I was writing all of their emails for them over a summer course, so they were handling the course-level stuff, just not the student-facing things. They were teaching their online course, but they weren’t directly interacting with the students in a way that wasn’t controlled… and the faculty members recorded these welcome videos, which had their face… their English, whether it was accented or not… and I asked some students who weren’t enrolled in the course to identify whether these faculty members were minorities and what their gender was, because what’s important isn’t necessarily how the faculty member identifies – as a minority or not – but whether the students perceive them as a minority… and even after controlling for all of that… controlling for everything… when everything was identical, I thought there was no way I was going to get any statistically significant results, and yet we did. We controlled even for the final grades in the course… we even controlled for how well students performed… and the only significant predictors for those ordinal evaluation scores were whether the professor was a woman and whether the professor was a minority. We didn’t see accented English come up as significant, probably because it’s an online course.
Students just aren’t listening to the faculty members beyond these introductory welcome videos. But we did see it when we asked students to identify the gender and the race of the professors based on a picture. We asked the students: “Do you think you would have a difficult time understanding this person’s English?” and we found that, for Asian faculty members, without even hearing them speak, students very much thought that they would have difficulty understanding their English… and then we have a faculty member here who has blonde hair and blue eyes… but speaks with a very thick Hispanic accent, and of the students who looked at his picture… none of them perceived that they would have a difficult time understanding his English. So, I think there are a lot of biases on the part of students just based on what their professors look like and how they sound.

John: Can you think of any ways of redesigning course evaluations to get around this? Would it help if the evaluations were focused more on the specific activities that were done in class… in terms of providing frequent feedback… in terms of giving students multiple opportunities for expression? My guess is it probably wouldn’t make much of a difference.

Kristina: I think, as of now, the way our course evaluations here at Texas Tech University look is that students are asked to rate their professors, you know, on a 1-to-5 scale on things like “Did the professor provide adequate feedback?” and “Was this course a valuable experience?” and “Was the professor effective?” and that gives an opportunity for a lot of: “I’m going to give fives to this professor, but only fours to this professor,” even when the behaviors in class might not have been dramatically different. Now this is also speculation, but maybe if there was more of a “yes/no”: “Did the professor provide feedback?” “Were there different kinds of assignments?” “Was class valuable?” Maybe that would be a way to get rid of those small nuances. Like I said, when we did our study, the difference was .4 on a five-point scale, and so these differences maybe aren’t substantively hugely different. Maybe it’s the difference between, you know, a 4 and a 4.5. Substantively, that’s not very different. So, maybe if we offered students just a “yes/no”: “Were these basic expectations satisfied?” …maybe that could help, and that might be something that’s worth exploring. I definitely think that either removing the comment section altogether, or providing some very specific how-to guidelines on what kinds of comments should be provided… I think that’s the way to address these open-ended “say whatever you want” comments… “Are you mad?” …“Are you trying to ask your professor out?” …Eliminating those comments would be the best way to make evaluations more useful.

John: You’re also working on a study of women in academic leadership. What are you finding?

Kristina: A very famous political science study, done by a woman named Jennifer Lawless, looked at the reasons why women choose not to run for office. We know that women are underrepresented in elective office; you know, the country’s over half women, but we’re definitely not seeing half of our legislative bodies filled with women. What the Lawless and Fox study finds is not that women can’t win when they run, it’s that women don’t perceive that they’re qualified to run at all. So, when you ask men, “Do you think you’re qualified to run for office?” men are a lot more likely to say: “Oh yeah, totally… I could be a Congressman,” whereas women, even with the same kind of qualifications, are less likely to perceive themselves as qualified. So, what my co-author Jared Perkins at Cal State Long Beach and I decided to do is see whether this phenomenon is the same in higher education leadership positions. One thing that’s often stated is that the best way to ensure that women are treated equally in higher education is just to put more women in positions of leadership… that we can do all the Title IX trainings in the world, but until more women are in positions of leadership, we’re not going to see real change… and we wanted to find out why we haven’t seen that. So, you know, 56 percent of college students right now are women, but when we’re looking at R1 institutions, only about 25% of those university presidents are women, and the numbers can definitely get worse depending on what subset of universities you’re looking at. We did a very small pilot study of three different institutions across the country. We looked at an R1, an R2, and an R3 Carnegie classification institution. Our pilot study was small, but our initial findings seem to show that women are not being encouraged to hold these offices at the same rate as men are.
So what we saw was that… we asked men, “Have you ever held an administrative position at a university?” About 60% of the men reported that they had, and about 27% of women reported that they had. We also asked, “Did you ever apply for an administrative position?” …and only 21% of the men said that they had applied for an administrative position, while 27% of women said they had applied. Of course, it could be that they misunderstood the question… that maybe they thought we meant “Did you apply and not get it?” …but we also think that there may be something to explore here: when women apply for these positions, they get them. There are qualified women ready to go and ready to apply, but men may be asked to take positions… encouraged to take positions… or appointed to positions where there might be opportunities to say: “There’s a qualified woman. Let’s ask her to serve in this position instead.”

John: That’s not an uncommon result. I know in studies of labor markets, starting salaries are often comparable, but women are less likely to be promoted, and some studies have suggested that one factor is that women are less likely to apply for higher-level positions. Actually, there’s even more evidence suggesting that women are less likely to apply for promotions, higher pay, etc., and that may be a common factor that we’re seeing in lots of areas.

Kristina: Absolutely. I definitely think that university administrations need to place a priority on encouraging women to apply for grants, awards, and positions of leadership, because there are plenty of qualified women out there; we just need to make sure that they’re actively being encouraged to take these roles.

Rebecca: Which leads us nicely to the motherhood penalty. I know you’re also doing some research in this area about being a mother in academia. Can you talk a little bit about how this impacts some of the other things that you’ve been looking at?

Kristina: Absolutely. The idea to study the motherhood penalty in academia stemmed from reading some of those “Rate My Professor” comments. Because at my institution, we didn’t have a maternity leave policy in place… so I came back to work after two weeks of having my child and I brought him to work. My department was supportive. I just brought him into my office and worked with the baby for the whole semester… and it was difficult, it was definitely a challenge to try and do any kind of work while a baby is in a sling in front of your chest… but one of my “Rate My Professor” evaluations from the semester that I had my son mentioned that I was on pregnancy leave the whole semester and I was no help. And so this offended me to my core, having been a woman who took two weeks of maternity leave before coming back to work… because I didn’t… I wasn’t on maternity leave the whole semester, and in addition… if I had been, what kind of reason is that to ding a professor on her evaluation? Like she birthed a human child and is having to take care of that child… that shouldn’t ever be something that comes up in a student comment about whether the professor was effective or not.

So what we want to look at are just the ways in which women are penalized when they have children. Even just anecdotally (and our data collection is very much in its initial stages on this project)… as we think through our anecdotal experiences, when departments schedule meetings at 3:30 or 4:00 p.m., if women are acting as the primary caregiver for their children (which they often are), this disadvantages them because they’re not able to be there. You have to choose whether to meet your child at the bus stop or to go to this department meeting… or networking opportunities are often difficult for women to attend if they’re responsible for childcare. Conferences have explored the idea of having childcare available for parents because, a lot of times, new mothers are just not able to attend these academic conferences… which are an important part of networking in most disciplines… because they can’t get childcare. So at the Southern Political Science Association meeting that I went to in January, a woman brought her baby and was on a panel with her baby. So, I think we’re making good strides in making sure mothers are included, but what we want to explore is whether student evaluations differ depending on whether students know that their professor is a mother. So, how would students react if in one class I just said I was cancelling office hours without giving a reason, and then in another class, I said it was because I had a sick child or I had to take my child to an event? That’s kind of where we’re going with this project, and we really, really hope to dig into the relationship between the motherhood penalty and student evaluations.

Rebecca: Given all of the research that you’re doing and the things that you’re looking at, how do we start to change the culture of institutions?

Kristina: Well, I’m thinking that we’re heading in the right direction. Like I said, I see a lot more opportunities at conferences for childcare and for women to just bring their children. I see a lot of men who are standing up and saying, “Hey, I can help, I’m in a position of power and I can help with this”… and, you know, without our male allies helping us… I mean, men had to give women the right to vote; we didn’t just get that on our own. So, we really count on allies to put us forward for awards. One thing, I think, that’s an important distinction that I learned about from a keynote speaker is the difference between mentoring and sponsoring. Mentoring is a great activity; we all need a mentor, someone we can go to for advice, someone we can ask for help, someone who can guide us through our professional lives. But what women really need is a sponsor, someone who will publicly advocate for a woman, whether that’s putting her in front of the Dean and saying, “Look at the great work she’s doing,” or writing a letter of recommendation saying, “This woman needs to be considered for this promotion or for this grant.” Sponsorship, I think, is the next step in making sure that women are supported. A mentor might advise a woman on whether she should miss that meeting or that networking opportunity to be with her child. A sponsor would email and say, “We need to change the time because the women in our department can’t come, because they have events where they need to be with their children.”

John: A similar article appeared in a Chronicle post in late February or maybe the first week in March by Michelle Miller, where she made a slightly different suggestion. Mentoring is really good… and we need mentors, but she suggested that sometimes having fans would be helpful: people who would just help share information… so when you do something good… people who will post it on social networks and share it widely, in addition to the usual mentoring role. So, having those types of connections can be helpful, and certainly sponsors would be a good way of doing this.

Rebecca: I’ve been seeing the same kind of research and strategies being promoted in the tech industry, which I’m a part of as well. So, I think it’s a strategy that a lot of women are advocating for and their allies are advocating for it as well. So hopefully we’ll see more of that.

Kristina: I think the idea of fans and someone to just share your work is hugely important. I have to put in a plug for the amazing group: “Women Also Know Stuff.”

Rebecca: Awesome.

Kristina: It’s a political science specific website, but there are many offshoots in many different disciplines, and really it just gives you the chance, if you say, “I need to find somebody who knows something about international trade wars,” to go to this website and find a woman who knows something about this, so that you’re not stuck with the same faces… the same male faces… that are telling you about current events. So “Women Also Know Stuff” is a great place. They share all kinds of research and they just provide a place where you can look for an expert in a field who is a woman. I promise they exist.

Rebecca: I’ve been using Twitter to do some of the same kind of collection. There might be topics that I teach that I’m not necessarily familiar with… scholars who are not white men… and so I put a plug out like, “Hey, I need information on this particular subject. Who are the people you turn to who are not?”

John: You just did that not too long ago.

Rebecca: Yeah, and, you know, I got a giant list and it was really helpful.

John: One thing that may help alleviate this a little bit is that we now have so many better tools for virtual participation. So, if there are events in departments that have to be later, there’s no reason why someone couldn’t participate virtually from home while taking care of a child, whether it’s a male or female. Disproportionately, it tends to be females doing that, but you could be sitting there with a child on your lap, participating in the meeting, turning a microphone on and off depending on the noise level at home, and that should help… or at least, potentially, it offers the possibility of reducing this.

Rebecca: I know someone who did a workshop like that this winter.

John: Just this winter, Rebecca was doing some workshops where she had to be home with her daughter who wasn’t feeling well and she still came in, virtually, and gave the workshops and it worked really well.

Kristina: Yeah, I definitely think that that’s a great way to make sure that everyone’s included, whether it’s because they’re mothers or fathers or just unavailable… and I think that’s where we look to sponsors… the department chairs… department leadership… to say, “This is how we’re going to include this person in this activity,” rather than it being left up to the woman herself to try and find a way to be included. We need to put people in positions of leadership who will actively find ways to include people regardless of their family status or their gender.

Rebecca: This has been a really great discussion, some really helpful resources and great information to share with our colleagues across all the places that…

John: …everywhere that people happen to listen… and you’re doing some fascinating research and I’m going to keep following it as these things come out.

Rebecca: …and, of course, we always end asking what are you gonna do next. You have so many things already on the agenda but what’s next?

Kristina: So next up on my list is an article that’s currently under review that looks at the “leaky pipeline.” The leaky pipeline is a phenomenon in which women, like we were saying, start at the same position as men do, but then they fall out of the tenure track… they fall out of academia more generally… they end up with lower salaries and lower positions. So, we’re looking at what factors, what administrative responsibilities, might lead women to fall off the tenure track. We already know that women do a lot more service work and a lot more committee work than men do, so we’re specifically looking at some other administrative responsibilities that we think might contribute to that leaky pipeline.

Rebecca: Sounds great. Keep everyone posted when that comes out and we’ll share it out when it’s available.

Kristina: Thanks.

John: …and we will share in the show notes links to papers that you’ve published and working papers and anything else you’d like us to share related to this. Okay, well, thank you.

Kristina: Thank you.
[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts, and other materials on teaforteaching.com. Music by Michael Gary Brewer.

14. Microcredentials

In this episode, we discuss the growing role of microcredentials in higher education with Jill Pippin (Dean of Extended Learning at SUNY-Oswego), Nan Travers (Director of the Center for Leadership in Credentialing Learning at Empire State College), and Ken Lindblom (Dean of the School of Professional Development at the State University of New York at Stony Brook). Jill, Nan, and Ken are members of a State University of New York task force on microcredentials.

Transcript

Rebecca: Our guests today are: Jill Pippin, the Dean of Extended Learning at SUNY-Oswego; Nan Travers, the Director of the Center for Leadership and Credentialing Learning at Empire State College; and Ken Lindblom, the Dean of the School of Professional Development at the State University of New York at Stony Brook.

John: Welcome, everyone!

Nan: Thank you. Hello.

Jill: Thank you.

Ken: It’s good to be here.

John: Our teas today are:

Rebecca: Jasmine green tea.

John: Jill?

Jill: I actually don’t drink tea.

John: Oh… here we go again…. Okay, Nan?

Nan: I’m drinking Celestial Seasonings Bengal Spice.

John: …and, Ken?

Ken: My tea today is coffee.
[laughter]

John: …we get a lot of that…. Ok.
…and I have black raspberry green tea from Tea Republic.
So, today we’re going to be talking about microcredentials. Would someone like to tell us a little bit about what microcredentials are?

Ken: Sure, I’d be happy to tell you a bit about what microcredentials are. There are traditional microcredentials that most people know all about, such as certificates or minors – either credit or non-credit certificates. They’re pieces of larger degrees, but there are now new digital microcredentials that are having a bigger impact on the field, and that internet technology has allowed us to take more advantage of. So there are internet certificates and there are also digital badges, which are icons that can be put on a LinkedIn profile or shared through somebody’s website or on a Twitter feed… and they indicate that the earner of the microcredential has developed particular skills or abilities that will be useful in the workplace.

Nan: …and just to add to what Ken has said, with the open digital badges that are out there, they actually hold on to all of the information around the assessed learning…. the different competencies that an individual has, and the ways in which they’ve assessed it. So if they’re used, let’s say, in the workplace, an employer could actually click into the badge and be able to see exactly how the person has been assessed… which gives a lot of information that a traditional transcript does not give, because it does have that background information in there.

John: Who can issue microcredentials? or who does issue microcredentials?

Jill: …really industry, colleges, various and sundry types of organizations.

Ken: Yeah, in fact, Jill’s right. There’s no real regulation of microcredentials right now. So they can be given by any group that simply creates a microcredential and awards it to someone… and then they say what it is. So the microcredential’s value is really based on the reputation of the issuer.
Honestly, universities and colleges are pretty slow to get to this kind of technology, as we often are. So it’s new for us, but there are private companies that have been issuing them, and there have been individual instructors at the college level, and especially at the K-12 level, who have been using badge technology to motivate and to assess student work for quite a few years… but at the university level, this is exciting new territory that we’re really jumping into now.

Jill: Yeah, microcredentials are shorter… they’re more flexible… and they’re very skill based… and so they’re new for colleges, I think, in a lot of ways… maybe not so much for our non-credit side of the house… those that have been doing training programs and things that are very practical… skill-based pieces… but in terms of having ladders to credit, and having credit courses seen through the lens of a smaller chunk of time, and of topic area, and focus… I think that’s the real change, or the real difference, in microcredentialing from a traditional environment…

Nan: …and what’s really important here is that the demand for these really, in many ways, is coming from industry, where they really need better signals as to what people know and what they can do, and as Jill just mentioned, they’re very skills based. This enables somebody to get a good idea about what a potential employee is able to do. So the demand for microcredentials is really increasing, as industry uses them more and more, and there are many different groups that are really focused on using either microcredentials or, specifically, badges (which are really a type of microcredential). There are some projects right now where whole cities have come together and have been developing microcredentials and badging systems to make sure that all people in the community have the ability to show those skills as they go for employment. There are also some companies that are starting to come out. For example, there’s a company called “Degreed,” which is degreed.com. It’s a company that enables people to get their skills assessed and microcredentialed, while at the same time working with companies… there are some big companies, such as Bank of America… there are many other ones listed on their website… and they work with the companies and identify the different skills that people need… and then credential the people who are trying to apply with those skills… so that there’s a real matching. It becomes a competency-based employment matching system in many ways.

Ken: Some of the ways that badges have been useful are exactly what Nan and Jill are saying: it’s come from the employers, who are asking for specific information about what students will come to them with. We are also able to develop badges in concert with specific employers, if there’s particular training or education or sets of skills or abilities that they’d like their applicants to have… but there’s also another great advantage to microcredentials, particularly badges, that allows us to show the in-depth learning that goes on in classes. My other hat, other than Dean, is that I’m a Professor of English, and so in a lot of humanities courses the direct connection to skills isn’t as obvious to people as it is in an area like, say, teacher education. So what we can do with a badge is point out the specific skills that students are developing in a class on rhetorical theory, or on Shakespearean plays, or whatever. We can point out the analytical learning that they’re doing, the kind of critical thinking, the kind of communicative writing, so that those courses translate into the kind of skills that people are looking for… and of course, our students are picking those things up, but now we can make it more visible as a result of the technology of digital badges.

Jill: It’s an exciting time in higher education. I mean it really is, in terms of microcredentials, because higher ed has the opportunity to validate those credentials. A lot of them, as we said before, have been out there… non-credit skill-based smaller chunks of learning… but the idea of having them all kind of on the same playing field… and almost apples-to-apples in terms of validating learning outcomes… and making sure they’re part of a longer pathway toward higher education. It’s really exciting.

John: When someone sees a transcript and sees English 101 or English 373 or Eco 101, it doesn’t really tell the employer that much about what the students actually learned, but the microcredentials provide information about specific skills that would be relevant. Is there much evidence of the impact this has on employability or in terms of career placement?

Nan: There has been some work that is being done on that, and as I mentioned, there are some companies that are even starting to get into the field because there is such a high demand for companies to be able to do competency-based hiring. There’s an initiative that the Lumina Foundation has been funding called Connecting Credentials and, in that initiative, they’ve been looking at microcredentials as a piece of that. That initiative has brought many different businesses, organizations, and higher education institutions together at the table to really discuss ways in which credentials can better serve all of those different sectors… and so some of the work that they have been doing, which can be viewed at connectingcredentials.org, has really been looking at some of the impact of microcredentials on employability.

John: Based on that, I would think that when colleges are coming up with microcredential programs, it might be useful to work with businesses and to get feedback from them on what types of skills they’re looking for… for guidance or some help in designing microcredential programs?

Jill: Absolutely.

Ken: Yeah. I can talk a little bit about some experience we’ve had at Stony Brook on that. We’ve been working with an organization called FREE, which is Family Residences and Essential Enterprises. They’re a large agency that supports students, children, and adults with disabilities… and we worked with them to create several badges that align directly with their national standards and the certification needs of their employees. So now we’ve got a system where one of the things that their employees need is food literacy. If they’re running a house for people with disabilities, people who need assistance, they have to be able to demonstrate that they’re able to produce healthy, nutritious meals… and so once they’ve gone through this training, which is specifically aligned with their curriculum, having earned the badge will demonstrate that the employee has developed that set of skills. We’ve also got one for them on leadership among their managers, and we’re developing more… and because we developed that with the employer, the employer is now actually contracting with us to deliver that instruction to their employees. We’ve done really well, and we’ve issued well over a hundred badges to that agency in just about a year.

John: Excellent.

Nan: As we think about it from an employability perspective… there’s also another important area that’s happening with the microcredentials and the badges in higher education… which is to really be looking at some of those more liberal arts kinds of skills: being able to be a good communicator… to have good resiliency… these are also very important pieces that go into being a good worker… and so there are many institutions, as we look across the United States, that are really looking at some of these broader skills. There’s also some work that’s being done on the student services side, which is really looking at how students have been engaging and being involved within the institution. So, there are these other pieces that also help to build that whole person… how somebody really is involved in higher education… what they know… what they can do… the different volunteer pieces… as well as the different kinds of things that they have engaged in while they are there: working in teams, doing different projects. So, there are lots of different ways of using those badges. There are also some institutions that are using these badges as a beginning point for students. For some people, it’s scary to start at higher ed again, and being able to take a smaller program that actually has a credential at the end of it is a really motivating thing. Students come away saying: “Well, I did that. I can do more…” and so it becomes a really good recruitment tool… but it also is a really good student support tool to help people start the path of education as well.

Ken: …and you know, Nan, that’s an important point too… and it works the other way for people who are in, let’s say, a master’s degree program… it’s not that they don’t learn anything new until the very end when they’re issued the degree… they’re actually building skills and developing abilities all along the way. So, what the digital badge or a microcredential can do is make visible the learning that they’re doing along the way. So after three or four courses, they’ve earned a credential that demonstrates that value. They don’t have to wait until they finish 10 or 11 courses.

John: So, it lets them have small goals along the way, and they’re able to achieve success, and perhaps help build a growth mindset for those students who might not have done that otherwise.

Ken: Yes.

Nan: Yes.

Ken: Well put, John.

John: How does this integrate with traditional courses? Are there badges that are offered… or a given badge might be offered by multiple courses? or do individual courses offer multiple badges or microcredentials?

Ken: It can go in lots of different ways. There are instructors who build badging into their own classes. Those aren’t really microcredentials the way we’re talking about them. We’re talking about microcredentials that are somewhere between a course and a degree. So, at Stony Brook, for example, we have what we call a university badge program, and in order for a university badge to exist, it must require between 2 and 4 3-credit courses. So a total of 6 to 12 credits: that’s the point at which students can earn a university badge at Stony Brook University. Those courses work together. So, for example, we have a badge in design thinking, and in order to earn that badge students must get at least a “B” in two courses that we have on design thinking. We also have a badge in employer-employee relations within our Human Resources program… and in order to earn that badge, there are three specific classes that students have to take and earn at least a “B” in each of those classes.

Nan: So, there is also another approach in terms of thinking about how the microcredentials can intersect and interface with the traditional credentials, the traditional degrees, and that’s through different forms of prior learning assessment. What we also see is that students come with licenses, certifications, different kinds of these smaller credentials that represent verifiable college-level learning… and through either an individualized portfolio assessment process or, at our institution, SUNY Empire State College, a process called professional learning evaluations… we go in and evaluate training, licenses, and certifications, and those are evaluated for college credit. Those are then also integrated within the curriculum and treated as, really, transfer credit… they’re advanced standing credit. So, students also have the ability to bring knowledge with them through the microcredentials… they’ve been verified by another organization, and then we re-verify that learning at a college level to make sure that it is valid learning for a degree… and then integrate it within the curriculum.

John: In Ken’s case, it sounds like the microcredential is more than a course; in other cases it might be roughly equivalent to a course… or might it sometimes be less than a course? Where a course might provide individuals with specific skills, some of which they might also gain in other courses? Or is that less common?

Jill: You’re right, there’s a spectrum. So, for instance, if you look at it from a traditional standpoint, a technology course might already have an embedded microcredential in the form of OSHA training, for example. That’s a microcredential, in that particular example, and so we have the opportunity to look at the skill-based smaller chunks that may be very specific to an occupation, or to an employer’s need for someone to have those skills, and to be able to put some framework around it so that it can be understood and communicated to an employer.

Ken: One of the exciting things about badging and microcredentials right now, which Jill alluded to earlier, is that there really isn’t any regulation regarding them yet. So when you say a college degree, that has a standardized meaning, but when you say a microcredential or a digital badge, there’s no standardized meaning whatsoever. So what we’re doing is creating different versions of microcredentials, and the meaning of them depends on that specific situation. One of the things that’s exciting about being a university or a college is that we can really bring academic rigor to these, no matter how many skills or what level of learning the digital badge represents… you know, because it comes from a university, particularly a SUNY, it’s going to be a high-quality badge. But it’s incumbent upon the one who’s reading the badge to understand what that badge actually means, and depending on where it comes from, the size of the badge, and the number of skills and abilities aligned to it, the badge means different things. That’s why it’s so important that the badge includes the metadata – all that in-depth information that you get when you click on the digital badge icon and all of that information pops up.

Nan: In addition, nationally, the IMS Global Learning Consortium has been developing standards, and hopefully there’ll be national standards around the data, how it’s reported, and allowing people to really understand and compare the attributes of the criteria and how it’s been assessed. So there’s a great deal of work being done at a national level to think about how we can have some good standardization and guidelines around what we mean by certain things in digital badging. So I think that’s something to pay attention to in terms of what’s coming.

Ken: Yes, it’s an exciting space before the standardization has been done, because there’s a lot of innovative potential there, but as we standardize there’ll be more comparability, and that’ll be easier to do. So we may lose some of that innovation later, but we’ll just have to see. It’s very interesting to be at the beginning of a process like this, because degrees were really kind of finalized at the end of the 19th century, and now, at the beginning of the 21st century, we’re reinventing that kind of work.

John: Now, earlier, it was suggested that other groups in industry and private firms have been creating microcredentials. One of the advantages, I would think, that colleges and universities would have is a reputation for certifying skills. Does the reputation of colleges and universities give us a bit of an edge in creating microcredentials compared to industry?
Jill: One would hope. However, there are examples of all sorts of industry entities out there that are offering microcredentials – think of the coding academies, which are prolific. They’re very skill based, very specific to an industry and its needs; the employers understand what the outcome is from that training and are therefore able to value it, and the employee is able to communicate it very effectively. But where I think colleges and universities have an opportunity to really shine here is that this is where we have the experts: we have people who are very well versed and researched in their area of scholarship, and they’re able to really look at curriculum and validate it, and make sure that it is expressed in terms of college-level learning outcomes.

Nan: In addition, I think that higher ed has the opportunity to really integrate industry certifications with curriculum and the stacking process – bringing in those microcredentials from industry, or having them right within the higher ed curriculum, and then being able to roll that in and build it into the curriculum. So I can imagine, as we evolve higher education over the next decade or so, that as people graduate, they’re graduating with a college degree and they’re also graduating with microcredentials, and together those are able to really indicate what a student knows and what a student can do, which can help the student a great deal more than just a degree that doesn’t really spell out the details of what somebody knows.

Rebecca: I’m curious whether or not there are any conversations happening with accreditation organizations about microcredentialing and how they might be involved in the conversation.

Nan: So at this point there are conversations happening at the accreditation level. For example, every regional accreditation agency has policy around the assessment of learning – sometimes specifically around prior learning assessment, sometimes around transfer credit – and within those policies they’re really starting to look at how those learning pieces can come in. When it’s on the for-credit side, there really needs to be a demonstration by the institution that those microcredentials are meeting the same academic standards as the courses. So using the accreditation standards, and making sure that all policies and procedures are of the same quality and integrity, ensures that it all fits together.

Ken: I think it’s not only an opportunity for universities to develop microcredentials, but I think it’s our responsibility to do so, because the idea of digital badges, for example, was popularized in the corporate sector before universities got on board, and those badges ran the gamut in terms of quality and value. Frankly, there are some predatory institutions that award badges that may not have much value at all to students, and yet they can be quite costly. So I think it is very incumbent upon the university to create valuable microcredentials that have real academic rigor and support behind them. In addition to that, some of these institutions were also using their badge programs to undercut the value of the degree and say, “Well, you don’t actually need a college degree with all that fluff, you just need the skills training that you’ll get from a badge.” And we know that a college degree delivers far more than just a set of discrete skills: it gives better ways of seeing the fuller world, of understanding the integration of knowledge, of being able to employ social skills along with technical ability, and digital badges at the university level allow us to make those connections more visible. But it also can help us prevent attacks against the university, which are sometimes made purely from a profiteering perspective.

Jill: If we can provide some validity and some academic integrity to the smaller microcredential world, then I think higher ed, as Ken says, has a responsibility to do so.

Nan: It also shows a shift in the role of higher education, where it becomes even more important that we take the lead in helping to integrate people’s skills and their knowledge, and how that relates to work and life. In many ways, in the older higher ed we had much more of a role of just delivering information and making sure people had information. Now I think our role has really shifted, where we need to take the leadership in the integration of knowledge and learning.

Rebecca: I’m hearing a lot of conversation focusing on skills and the lower levels of Bloom’s taxonomy, so it would be interesting to hear examples at higher levels of thinking and working.

Ken: Well, Bloom’s taxonomy actually is a taxonomy of skills and domains of knowledge and abilities, so there are certainly skills involved with synthesis and evaluation, which are at the top of Bloom’s taxonomy. So digital badges can work with that. With digital badges, the skills can involve being able to examine a great deal of knowledge and solve specific problems in an industry, and those are the highest levels of application of knowledge and learning.

Nan: In higher ed they’re also being looked at at both the undergraduate and the graduate level, so it’s not just that entry-level piece. Again, we keep talking about licenses and certifications as a type of microcredential, and there are many out there that you cannot acquire until you have reached certain levels of knowledge and ability. I know we have focused a great deal of this conversation on being skills-based, but in industry they’re really talking about it more as competencies, and the definition of a competency is what you know and what you can do. So it’s both knowledge- and skills-based; it is not just skills-based.

Ken: In fact, one of the issues that some faculty have with microcredentials, particularly digital badges, is that they have a sense that they’re focused too heavily on utilitarian skills, and not heavily enough on the larger and higher levels of learning that Rebecca is talking about. So I think Nan’s bringing in the idea of competency-based learning is really very helpful that way.

John: So, basically, those skills could be at any level. What are some of the other concerns that faculty might have that might lead to some resistance to adopting microcredentials at a given institution?

Nan: So one of the areas they may raise is a concern about integrity – the academic integrity of the microcredential, or of the badge. And what’s important is that each institution really look at its own process for reviewing microcredentials and approving them, especially if they are on the credit side and are going to be integrated within the curriculum. They need to follow the same standards that any course would follow, and that should really help relieve that concern about academic integrity.

Ken: Yeah, in fact the SUNY microcredentials group, which all of us on this podcast are involved with, specifically points out that faculty governance has to be heavily involved in the creation of any digital badge or microcredential program. That’s the whole point of bringing the university level to this: that faculty governance, that academic input, is going to be behind every microcredential we create. One of the other things that my faculty colleagues have had trouble with is the very name of digital badges; they think it sounds a little silly, a little juvenile. They always say, “Oh, well, this is just Boy Scouts and Girl Scouts,” and so to them it can feel a little silly. It actually doesn’t come from Boy Scouts and Girl Scouts. Digital badges come from gamification: motivational psychologists looked at why people were willing to do so many rote tasks in an online game, even though they weren’t being paid to do so and the tasks didn’t seem very exciting on their own. What they found is that people were willing to do that because they would earn a badge, or level up, or earn special privileges along the way, and that was very motivating for people. That’s where this technology really came from, and then we built more academic rigor into it. The metaphor that I like to use with my faculty colleagues was suggested to me by one of my English department colleagues, Peter Manning. He pointed out that in the medieval period in England, archers would learn different skills, and when they developed a new skill, they would be given a feather of a different color, and that feather would be put in the cap. So literally a badge is like a feather in the cap, and when you see somebody coming with 8 or 10 feathers of different colors, this is going to be a formidable adversary. Just like that, people with a few digital badges from the SUNY system are going to be formidable employees.

Jill: The other thing I’d like to jump in and say, too, is that in the Girl Scout and Boy Scout badging systems, if you really know what the badges represent, you know that there are very stringent rules, learning outcomes, and so on involved in attaining the badge. The badge is a way of just demarcating that they attained it. The quality is inherent in the group that’s setting up the equation by which you earn the badge.

John: So it’s still certifying skill.

Jill: It’s still certifying something and again the institution has the ability to determine what that something is, and to make sure that it is of quality.

John: Now, one other thing I was thinking is that if an institution instituted a badging system, it might actually force faculty to reflect a little bit on what types of skills they’re teaching in the class, and that could be an interesting part of a curriculum redesign process in a department, because we haven’t always used backward design, where we start from our learning objectives. Quite often faculty will say, “I’d like to teach a course in this because it’s really interesting to me,” but perhaps more focus on skills development in our regular curriculum would be a useful process in general.

Jill: I agree.

Ken: I think that’s a great idea, John. We haven’t used the badging system in my school that way yet, but I think it’s a great idea. And honestly, there are faculty who bristle at the notion that they’re teaching skills, and digital badging really strikes at the heart of that, in my perspective, elitist attitude about education. We do want to open up students’ minds, we do want to expose them to more of the aesthetic pleasures of life, but we also want to help students improve their own lives in material ways as well, and badging can help us make visible, and strengthen, the ways in which we do that in higher education. I think we should be very proud of that.

Nan: So, again, one of the reasons I like to use the word competency is that it brings the knowledge and the skills together. We’re actually talking about skills as though they are isolated from the knowledge pieces, but you can’t have skill without knowledge, and to develop good knowledge you need certain skills. So I think it’s important to think about this not as two different things that are separated, as if somehow we are all of a sudden going to be just skills-based, but rather that we’re developing people’s competencies to be highly educated people.

Jill: Very symbiotic, really, and I think this is also where you get at the idea of how non-credit and credit can work together. If you’re thinking about them in terms of the outcomes and developing your class in that way, then one of those pieces by itself might be something that’s non-credit, and if you build them all together, you get a course. Or you couple a few graduate courses together, and you get a credential that is something on the way to a graduate degree.

John: This brings us to the concept of stackable credentials, where microcredentials are designed to be stackable, building toward higher-level credentials.

Ken: Really, a microcredentialing system should always be stackable. That’s one of the bedrocks of the whole idea. It’s not required that a student go beyond one microcredential, but microcredentials should always be applicable to some larger credential of some sort. So, for example, all of the university badges at Stony Brook University stack toward a master’s degree. And in fact, we’ve tried to create what’s called a constellation of badges, so that students can wind their way to a master’s degree by using badges… or, on their way to a master’s degree, they can pick particular badges to highlight specialties among the electives they choose. So it’s a way for them to say, “Yes, I have a Master of Arts in Liberal Studies, and as part of that I have a particular specialization in financial literacy, or in teacher leadership, or an area such as that.” But yeah, microcredentials should always be able to stack to something larger. And if we do it right, eventually we’ll have a system that works really from high school into retirement, because there can be lifelong learning involved in microcredentials as well. There’s always more to learn, so there should always be new microcredentials to earn.

Nan: I totally agree with Ken. If we provide different microcredentials and don’t show how they stack and build a pathway, then we really have not helped our students. Traditionally, historically, we have left it up to the individual to figure out how their bits and pieces of learning all fit together, and we kind of expect that they have the ability to put it all together and apply it in many different ways. I think the role that microcredentials are really playing here is helping us start to talk about these discrete pieces, and then about how they build together and stack, which gives the person the ability to think about how it fits into the whole. I think what microcredentials are doing is opening up higher education to really think about how to better serve our students: give them the ability to take what they know, package it in different ways, apply it in many different ways, build toward lifelong goals, and see how it all fits together.

Rebecca: Just to follow up a little bit: I think a lot of the examples that we see are often in tech or in business, and those are the ones that seem very concrete to many of us. But for those of you who have instituted some of these microcredentials already, how does it fit into a liberal arts context, which might not be so obvious to some folks?

Nan: There are actually quite a few examples of microcredentials and badges that are more on the liberal arts side. There have been some initiatives across the United States where different institutions have been developing what we can think of as the 21st-century skills: communication, problem solving, applying learning, being resilient. These are some of the kinds of badges that are starting to evolve out of higher education, which really bring in those different pieces of a liberal arts education, lifting that up and giving students the ability to say, “I’ve got some good problem-solving skills, here are some examples, and I can show it through this badge.” When we look at the research on what employers need in the 21st-century employee, we’re really looking at a very strong liberal arts education that is then integrated into a workplace situation. So I’m seeing a lot more badges being grown in that liberal arts arena.

Ken: Yeah, at Stony Brook University, we have a number of badges that are in the liberal arts. For example, we have a badge in diverse literatures. There may be people who wish to earn that just for personal enrichment, but it’s something that might be really interesting to English teachers as well, because by earning a badge in diverse literatures, which requires a minimum of three classes in different areas, different national literatures, teachers will be able to go on to select pieces of literature more appropriate for diverse audiences. They’ll be able to explore greater world literatures because of the background they’ve had in exploring different literatures in their classes. So that’s just one example, but of our roughly 30 badges, about a third are in those humanities areas. That said, I will acknowledge that they are not anywhere near as popular as the more business-oriented and professionally oriented badges, where the link to skills simply seems more obvious. So for the liberal studies… the liberal arts… the humanities badges, the connection is not quite as clear, and there’s still a lot of potential there.

Jill: It’s so important for the employers, and for the students themselves, but almost most importantly the employers, to understand what that means. They have to understand that you have a microcredential or a badge in problem solving. They have to have some kind of trust that it’s truly a skill that translates to their workplace situation, and that’s where the online systems, where you can actually delve into what’s behind the microcredential, are so important. You can really sit there and look at it, and verify the competencies and the skills that the individual has attained through earning this badge.

John: So the definition in the metadata is really important in establishing exactly what is being certified. Now that brings us to another question. At this point, each institution that’s using badges is developing its own set of badges and competencies. Has there been any effort at trying to get some standardization and portability across institutions, or is it too early for that? Or do you see it going in that direction at some point?

Ken: John, it certainly hasn’t happened yet, but I do know that the SUNY Board of Trustees, at their last meeting, started to consider developing working groups to do just what you’re saying. So it’s not so much to standardize what badges are, but rather to standardize reporting and to explore ways to help badge earners explain and demonstrate their badges to employers, and to other schools, more easily. So I know that’s where the SUNY system is headed.

Nan: And if it is for credit, then it falls within transfer credit anyway. So really, if it has gone through the appropriate academic curriculum development processes, the governance processes, then it has the same rigor and therefore is very transferable through our policies on transfer. So really what we need to be doing is some good work around the non-credit side… work that really helps the transfer of non-credit learning.

Jill: And one way we can do that is by reinvigorating and breathing new life into a 1973 policy that SUNY has on the books for the awarding of CEUs (continuing education units). It has a recommendation and a process by which campuses can take non-credit curriculum and send it up through a faculty expert, and it has certain guidelines about how to come up with an approval process and how many CEUs could be granted for such work. So there are some skeleton pieces to how SUNY may codify that moving forward; at this point there is not a rule about how to move forward with non-credit. In fact, I think SUNY, trying to be responsive to the emergent nature of this very concept, has not tried to come in and be too prescriptive yet.

John: On the other hand, when students do receive microcredentials at multiple institutions – let’s say they start at a community college, move perhaps to Empire State, and maybe then to a four-year college or university – if they don’t finish and get a degree, they still would have some microcredentials that they could use when they go on the market, because many of them might use Credly or some other system where they can put it on their LinkedIn profile, and they still have that certification. If they just don’t get the degree, it shows them as not being a degree recipient, which actually seems to hurt people in the job market, but perhaps if they could establish that they have been acquiring skills along the way, it might be helpful for students.

Nan: John, that is a really good point. In many ways, our degrees set up a system where anyone who steps out of a degree has nothing to show for it, and therefore is at a disadvantage. Microcredentials can help demonstrate their progress and the competencies that they already have, and so they can play a very important role in people’s lives when students do need to step in and out of higher education.

John: So where do you see microcredentials going in the future? How do you see this evolving?

Ken: It’s in such an amorphous space right now, it’s hard to imagine what it’s going to evolve into. A big part of what’s happening now is what Nan has talked about: an attempt to put some boundaries on this and bring some common definitions to bear on the technology and the idea of a microcredential. But I think it’s still going to expand. What it’ll do is increase partnerships among interesting groups. In a lot of these, the universities will be at the center of the partnership, but we’ll be bringing in many more student groups, industry partners, government groups, nonprofits. I think it’s going to increase the amount of communication dramatically, and that’s very exciting, because for too many years universities have fulfilled that stereotype of the ivory tower, and this is really breaking that down in some very productive ways.

Nan: And when we look at it from a national perspective, looking to see where the direction is going with groups such as IMS Global, with Connecting Credentials and other groups, what we’re really seeing is the prediction that every student will have a comprehensive digital student record that they take with them. It becomes a digital portfolio: their badges, their microcredentials, any degrees would all be in there, and they’d have the ability to transport themselves in many different directions, because all of that information would be there. That digital student record would allow anybody to click in and see the metadata behind it – what competencies people have, how they were assessed, what it really means – so that there’s a real description of that. The prediction is also that students would be able to transfer from institution to institution; they’ll be able to stack up and build their degrees in ways that really support the student on their whole life pathway. Ken just mentioned partnerships. I think what we will see is a great deal of partnership across institutions, and between institutions and industry, that really starts to build these pathways that people can move along with their comprehensive digital student record.

Ken: Nan, can I ask you a question?

Nan: Yes.

Ken: So, a few years ago, there was a lot of talk about what were termed co-curricular transcripts, which would be the kind of transcript that would include club membership, informal learning, non-credited learning. But it sounds like we may be getting beyond that in a really positive way, in that the very idea of a transcript is being transformed, so that those other kinds of learning will actually be transcripted in the same digital format. Am I reading that right? Do you think that’s where we’re going?

Nan: Yes, I do think that’s where we’re going, Ken. We’re right at the end of a multi-year, multi-institutional project that Lumina funded, looking at these comprehensive digital student records. They go way beyond capturing things like clubs and other kinds of activities that students engage in; they’re competency-based. They start to record those competencies and the data behind the competencies, and when students are in a club or doing other kinds of activities, the competencies they’re gaining from those pieces are also recorded. So it’s not just “You are a member of a club,” but “What did you really learn, and what can you do because of that?” I think we’re going to see that evolving more and more over the next decade or so.

Ken: That’s great, thank you.

Jill: If I could add to the question about the role of microcredentials evolving: one of the things that I think is going to be happening, and part of why I’m so excited about microcredentials, is that I see this as a nice connection between the non-credit side of the house at colleges and universities and the credit side. For so many years, non-credit has been connecting with, and trying to serve, business and industry in ways that really have been limited, and this really opens up the ability to connect and collaborate with the credit expertise within the institution, to create those true pathways from start to finish – from the smallest first step along that pathway all the way through – and that’s really exciting. I think… and I hope… that’s part of this overall discussion we’re having about microcredentials moving forward. In a lot of ways this is cyclical: we talked about the CEU policy of 1973. There have been these two sides of the house, as they say, as I’ve said a number of times today, but really we’re all about education and trying to help people learn things and be able to apply them to their jobs and their lives, and having that connection be that much more seamless and clear. I think that’s one of the most exciting things, from my seat at the table.

John: Well, thank you all for joining us.

Nan: Thank you

John: Look forward to hearing more.

Jill: Thanks for having us.

Ken: Pleasure to be here.

Nan: Take care everybody, bye bye.

Jill: Bye, guys.

Rebecca: Thank you.