49. Closing the performance gap

Sometimes, as faculty, we are quick to assume that performance gaps in our courses are due to the level of preparedness of students rather than what we do or do not do in our departments. In this episode, Dr. Angela Bauer, the chair of the Biology Department at High Point University, joins us to discuss how community building activities and growth mindset messaging combined with active learning strategies can help close the gap.

Show Notes

  • “Success for all Students: TOSS workshops” – Inside UW-Green Bay News (This includes a short video clip in which Dr. Bauer describes TOSS workshops)
  • Dweck, C. S. (2008). Mindset: The new psychology of success. Random House Digital, Inc.
  • Barkley, E. F., Cross, K. P., & Major, C. H. (2014). Collaborative learning techniques: A handbook for college faculty. John Wiley & Sons.
  • Life Sciences Education
  • Steele, C. M., & Aronson, J. (1995). Stereotype threat and the intellectual test performance of African Americans. Journal of personality and social psychology, 69(5), 797.
  • Steele, C. M. (1997). A threat in the air: How stereotypes shape intellectual identity and performance. American psychologist, 52(6), 613.
  • The Teaching Lab Podcast – Angela Bauer’s new podcast series. (Coming soon to iTunes and other podcast services)

Transcript

Rebecca: Sometimes, as faculty, we are quick to assume that performance gaps in our courses are due to the level of preparedness of students rather than what we do or do not do in our departments. In this episode, we’ll discuss how community building activities and growth mindset messaging combined with active learning strategies can help close the gap.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

[MUSIC]

John: Our guest today is Dr. Angela Bauer, the chair of the biology department at High Point University. Welcome Angela!

Angela: Thank you, John! Hello, Rebecca!

Rebecca: Hi, welcome!

Angela: Thank you

Rebecca: Today’s teas are…

Angela: I have a piping hot mug of Earl Grey

Rebecca: Yes! Someone drinks tea again! [LAUGHTER]

John: I have ginger peach green tea.

Rebecca: And John, get ready for this: Dragon Oolong tea.

John: That’s actually very good.

Rebecca: We have a joke that I have my standbys and this was not one of them.

John: We’ve invited you here to talk about your work on reducing the performance gap for underrepresented students in STEM disciplines. Before we discuss this, could you tell us a little about the performance gap?

Angela: You bet! Well, I can tell you about my experiences at two different institutions, although I can say that my experiences at these two institutions with respect to the performance gap in foundation STEM courses are not unique to these institutions. It’s a national phenomenon: if we look at foundation courses that are taught with a traditional lecture format, which is still often the case with STEM courses, we see a gap in the performance between majority students and students from underrepresented groups, and this was the case when I taught biology courses at the University of Wisconsin in Green Bay. If we went back and looked at the data in a foundation course, an Introduction to Human Biology course that we had taught for decades, we could see, for decades prior to a change in pedagogy in that course, a gap in the performance between majority and underrepresented students at that institution. Likewise, the same is true at High Point University with the way that we used to teach our introductory biology courses. And this performance gap is really troubling because oftentimes what it means is that we tend to lose the students in those courses who are not performing well, and in the end we end up having a much lower percentage of students who are members of underrepresented groups choosing to major in the sciences and then, of course, choosing to pursue professions related to the sciences as well.

Rebecca: Where does this performance gap originate?

Angela: Well, that’s a really interesting question, and there are a lot of misconceptions about where that performance gap originates. I’ll go back to when I first started teaching. What I assumed, which was entirely wrong, was that the academic performance gap was an indication of a difference in the academic preparedness of students from underrepresented groups. So I, like many other faculty in my field, assumed that students were coming to our institutions not having a high school education that prepared them to perform well in these introductory science and math courses. I later came to find out that this was not at all the case. We did a study at the University of Wisconsin in Green Bay where we controlled for, let’s say, the SAT scores of these students coming in. So if we looked at white students and black students who came in within the same range of SAT scores, specifically their math scores, which is a strong indicator of how well they will perform in the course, and if we looked at how they performed in the course, even when we controlled for SAT math score we saw a performance gap. So that suggested to us (and this was the big lightbulb moment for me) that this doesn’t really have anything to do with academic preparedness, at least with respect to our student population. What it has to do with is something that we’re doing in the classroom. We really were teaching in a way that favored the academic performance of students of a particular group, or did not enhance the academic performance of our underrepresented minority students, and so that’s when we decided to make some changes.

Rebecca: How did you start to figure out what to pinpoint?

Angela: Well, it’s interesting how we stumbled upon an approach that worked in Green Bay. We had a variety of instructors teaching this introductory course, so we had multiple sections going on at a given time, and as you know, it is often hard to change what’s happening in the classroom and to get instructors to adhere to a different pedagogy. They are often swamped and busy, maybe have taught the course a certain way for a number of years, and are not comfortable with moving to, for example, a more student-centered format in their courses (which, by the way, is pretty daunting when you have lecture sections of 150 students). So what we initially decided to do, just to see if it would have any impact, was to start doing things outside of the classroom in a way that was very student-led and student-centered. Students who were enrolled in this Introduction to Human Biology course in Green Bay were provided with the opportunity to attend workshops outside of the classroom that were totally optional. We called them TOSS workshops; TOSS was an acronym that stood for Targeted Opportunities for Success in Science. And we chose students to serve as mentors for these TOSS workshops: science majors who had received grades of B or better in these introductory courses, but who also had really strong social skills. I then worked with those student mentors to train them in student-centered pedagogies and culturally inclusive classroom approaches, and they would offer these TOSS workshops at a variety of times during the week when we were sure that we would get a number of students who could attend. In other words, we tried to make sure that those TOSS workshop opportunities didn’t interfere with their academic schedules, and these weren’t required. And we went about recruiting students for these workshops in a really interesting way. We conducted this in collaboration with a man named Dr. Sean Robinson, who is now on the Whitewater campus in Wisconsin but at the time was working within our Multicultural Center, and he worked really hard to recruit underrepresented students who were his advisees in the Multicultural Center and to spread the word. Likewise, he helped us recruit a fabulous mentor for our students, Mr. Junie Lee, who was a grad of our program in Wisconsin and who was also very good at networking with these students, and we made it very widely known, and it became a popular, fun thing to do. Because the students who led the TOSS workshops were very socially engaging and planned fun activities to review the week’s content, it certainly became a destination for these students, and it created a really strong sense of community. And over the four years that we ran this TOSS programming, what we found, from the very first semester that we offered these opportunities outside of the classroom, was that we closed the gap, and that happened every single semester for eight semesters across those four years. Across multiple sections, even with different instructors and different pedagogies in these courses, having that opportunity to participate in TOSS workshops totally changed the culture, and we closed the gap. And so what we learned from that was that a sense of community was, I think, probably key here. We know this (well, we assumed this) because of an interesting analysis that Dr. W Furlong did, who had worked with our office of institutional research in Green Bay. We looked at the student mentors that led the TOSS workshops and at the activities that they had planned for their TOSS workshops in a given semester, and I also ranked the students in terms of the level of difficulty of the exercises that they performed. In other words, how academically rigorous were those workshops versus how much social stuff was going on? And we actually found an inverse correlation between the academic rigor of the workshops and the impact of the workshops themselves.
So in other words, when those mentors were just more social, creating a strong sense of community and spending more time just engaging students generally, not necessarily reviewing academic content, that’s when we saw the most profound effect. So this is what led us to now focus our efforts with respect to closing the performance gap on the more emotional components of learning, the affective domain of learning. And of course I’m not at all saying the cognitive part isn’t incredibly important, because absolutely it is; it would be silly to say that it wasn’t. But generally, what we found in Wisconsin, and now also what we’re finding at High Point University, is that we can close that gap by paying really close attention to that affective, emotional component of learning and taking really intentional steps to create a very inclusive community for students and to engage them in very intentional ways.

John: Have there been changes in classrooms as well to try to build the same type of inclusive environment within the classroom setting?

Angela: Yes. So once faculty learned about the impact of the TOSS workshops in Green Bay, more and more of them started to pay attention to what inclusive classroom approaches looked like. And at High Point University, where we have now also done some interventions that have been really successful in closing that performance gap, we are employing a model where all faculty who teach our introductory biology courses are on board with employing these techniques that intentionally address the affective domain of learning.

John: What techniques did you use at High Point to address the affective domain?

Angela: When I came to High Point University five years ago, of course we pulled up some of the institutional data, the historic data regarding the performance of students in our introductory biology courses, and of course saw that performance gap, a long-standing one. And so we really tried to build a model upon the knowledge I gained in Wisconsin about the importance of building community and addressing the affective domain of learning, and we then attempted to employ that uniformly at High Point University, this time in the actual courses themselves and not with optional outside-of-class activities. So at High Point now, we’ve got all instructors on board employing approaches in the classroom where they deliver to students our weekly messages that address the affective domain of learning, and it involves growth mindset strategies, based on the work of the Stanford psychologist Dr. Carol Dweck, who wrote a book on growth mindset. And what we are doing at High Point University is providing students with weekly growth mindset messages, maybe 5 or 10 minutes at the start of the course on Monday or Tuesday. I prepare maybe two or three PowerPoint slides with a growth mindset message that I send to all instructors before the start of the week, and then they show that message to students and reinforce it throughout the week. And just to review what it means to employ growth mindset strategies: if a person has a growth mindset, they believe that with hard work and persistence they can improve at whatever it is that they are doing.
If a person has a fixed mindset, they believe that essentially we are how we are: maybe you’re born a science person or maybe you’re not a science person, maybe you’re a math person, maybe you’re not a math person. And so students that have a fixed mindset will very much back away from things that challenge them, because if they engage in an activity that challenges them and maybe they don’t perform so well with respect to that academic opportunity, they view that as a negative commentary on their intellect, and who wants to publicly out themselves as being not smart or not good at something? So they will shy away from that, and of course that has a very negative impact on their performance in the classroom: they don’t seek out the help that they need, and if they bomb a test they don’t go in and figure out what went wrong on that test. By sending them weekly growth mindset messages, we’re telling them: we believe in you, hang in there; with hard work, with really smart use of the resources that are available to you, and with figuring out what an effective study strategy is for you, you’re going to be successful in this course. We think you can do it; now let’s get down to work. And long story short, those growth mindset strategies have been very powerful in helping us close the performance gap, now at High Point University as well, in our introductory biology courses.

John: What would be some examples of those messages?

Angela: I would send out a variety of messages depending on the week; some of them were science related and some weren’t. Maybe there was an interesting study that I read over the weekend on neuroplasticity, and of course, being biology majors, they would have an interest in the fact that you can literally change your brain with practice. For one study, I sent a couple of slides showing MRIs of the brains of people who had intentionally practiced a video game, a race car game where they practiced going around a particular track for maybe two hours’ worth of practice, and these MRIs showed that their brains were literally changing in response to those two hours of practice, so that was one example. I also chose things outside of the realm of science: maybe one week we had a couple of slides on NBA superstar Michael Jordan and the fact that he didn’t make his high school basketball team, he got cut from his basketball team, but again, with hard work and persistence and employing the right strategies, he was arguably the greatest basketball player of all time. We have a really funny clip from Nobel laureate John Gurdon’s report card from back in his grade school days, with his teacher saying you’re never going to be any good at science, you just need to give it up, and later he went on to win a Nobel Prize for his work in cloning. These are just a few examples of growth mindset messages that we sent to students on a weekly basis.

Rebecca: Have you noticed a difference between the in-class strategies of growth mindset versus community-focused TOSS workshops that you had done at Green Bay?

Angela: Both have been effective in closing the gap. They’re two very different academic settings. One thing that the TOSS workshops were really great for in Green Bay was fostering a sense of community among a group of students, many of whom were commuter students. They’d go to classes and then they’d leave the campus, they’d vacate the premises, and we would never see them hanging out and studying up by our offices or engaging in social activities that involved their peers in science. With the implementation of these TOSS workshops we saw much more of that; we saw students hanging around after hours, talking more with their professors. So I saw a significant change in our sense of community, particularly with our underrepresented minority students in Green Bay, as a result of those TOSS workshops. It’s, I would argue, much easier to address the community issues in certain respects at High Point University, because we’re a private institution: students are required to live on campus, and there are all sorts of social activities to engage them all the time. Now, as for whether or not they’re feeling a strong sense of community, developing a strong sense of trust with the faculty that teach their science courses, or feeling a sense of belonging, I think what we’ve done with growth mindset strategies is heighten all of that with our High Point University students. So they’re two very different models, and I think they are very well suited to those very specific institutional contexts.
I have to tell you that when I first got to High Point University, I tried the TOSS workshop model and nobody came, because it was optional and it was outside of class at a time when, quite frankly, there were lots of other social events happening on campus, extracurricular activities, students involved in so many different clubs. So that just was a model that wasn’t going to fly here; we needed to do something specifically in the classroom to catch those students while we had a captive audience.

Rebecca: It’s really fascinating to hear those differences, but also to recognize that you need to adjust your design based on context.

Angela: I think context is a huge deal.

John: Did you make any other changes at High Point besides adding the growth mindset prompts?

Angela: What we did within the classroom was more than simply take these intentional approaches to address the affective domain of learning through the use of growth mindset strategies. We also very intentionally changed the way we taught our foundation biology courses by making sure that we were employing best practices to address the cognitive domain of learning. Before I came to High Point University, lecture was the primary mode of teaching in our introductory biology courses, and there was a lot of turnover that happened when I arrived: there were curricular changes and the hiring of many new junior faculty, just in response to the growth of our program, and as we hired these new faculty we were very intentional in training them in approaches that would involve active learning in the classroom. And we worked very hard with the instructors that teach multiple sections of this introductory biology course to standardize our curriculum in a way that made sure students were getting exposure to active learning: best practices for addressing the cognitive domain of learning, in addition to those that address the affective domain of learning with the growth mindset strategies. And what we found one year, when we ran simply the active learning approach, where we covered at least 25% of the material in those introductory biology courses with active learning strategies but didn’t employ the growth mindset approaches, was that with some of our underrepresented groups we still saw a gap, a performance gap. So again, while some studies have shown that active learning can help to close or narrow that gap, we weren’t very successful in doing that at High Point University. Active learning alone didn’t do it for us, and I think that’s an important point to make, for whatever reason.
It could be our institutional context, it could be a lot of different things, but my point is when we combined active learning with approaches that also address the emotional or affective components of learning, that’s when we hit the sweet spot. We needed to do both of those to close the performance gap.

John: What specific active learning strategies did you introduce?

Angela: Active learning strategies are chosen by the individual instructors. All of them use student-centered discussions of the primary scientific literature; one of the outcomes of that introductory biology course is for us to introduce our first-year students right away to the primary scientific literature, so that’s one approach I know they all use. Beyond that, we primarily leave it up to instructors to choose what they want to do. When people join our program or begin teaching our introductory biology courses, we provide them with that Bible of student-centered learning techniques called Collaborative Learning Techniques, written by Elizabeth Barkley and her co-authors. They get that, and I know a lot of them use it as their go-to manual if they want to set up an active learning strategy to cover a particular topic on any given day. They’ll employ maybe just simple think-pair-shares or something more elaborate than that. That’s really up to the individual instructors, but we just ask that they spend about a quarter of their time doing that. We also revamped the laboratory sections associated with that course so that they are also much more student-centered. Now, some might argue that laboratories by nature are active learning; I would argue that they’re not, in the traditional way that they’re taught. Traditional labs are very cookbook in nature and performed in a way, I would argue, that doesn’t really actively engage the student’s brain: you come in, here’s what you do, set this up, and here’s what you should expect to see. That’s not at all what we’re now doing in these introductory biology courses. We are infusing scientific inquiry into pretty much every aspect of that course.
Yes, we’re covering the technical skills that they would need to be proficient cellular and molecular biologists, but they’re also expected to come up with hypotheses to test and then use those technical skills as they design experiments, test their hypotheses, analyze their data, etc. So the labs also are very student-centered and, by the way, much more than 25 percent active learning happens in those labs; the majority of what happens is active learning. So when all is said and done, with the implementation of those active learning strategies along with the growth mindset messages, we’ve had a lot of success in enhancing the performance of our underrepresented students and closing the gap.

Rebecca: One of the things that you mentioned in passing was training your faculty, when they come in, in active learning strategies, and I imagine in these affective strategies as well.

Angela: Yes.

Rebecca: How do you employ that? Is that through a teaching and learning center on campus? Is this through your department? How does that work?

Angela: That is a good question. So we had the addition of so many new faculty to our program: one year we had three new faculty, the next year we had five new faculty, and the year after that we added two more. That’s huge turnover and, again, it reflects the growth that was happening in our program at the time. One thing that I did was establish a journal club. Typically, science departments have journal clubs where people present the data that they’re generating in lab or maybe talk about a scientific paper. I set up a teaching and learning journal club where, every other week, we would get together and talk about a recent publication in the scientific teaching and learning literature. And the new faculty who were coming in were really eager to learn this stuff, because they wanted to improve their student outcomes and they wanted to be on board and teach in a way that was equitable within their classrooms. So that was very helpful in terms of sharing with new faculty the findings from the evidence-based literature about the impact of these teaching practices on student outcomes. That was a really effective way to do it for the first year or two, when everyone was on board, and then, as happens, once people got their research programs established and started getting on committees, it became less and less likely that we would have a good turnout at our teaching and learning journal club. So my latest trick is that I just started a podcast that’s really focused on teaching in STEM. I’m calling it The Teaching Lab, and my goal is to interview an innovator in STEM education every week or two who can share something about a recent publication in the teaching and learning literature that I can then share with my faculty.
So it’s sort of an on-the-go professional development opportunity for them when they’re out walking their dog in the morning and want to listen to a podcast, or driving home at night. Because it’s becoming harder and harder as they establish themselves, with their research programs up and running and being wanted in a million other areas on campus, it’s less likely that we will get together for this teaching and learning in the sciences journal club (we call it our TLS Journal Club). We still have lunch together at least a couple of times a week and just bat ideas around, and it’s a really wonderful community of colleagues I have here. I’m so fortunate that having that journal club early on really set the tone, but now I’ve got to come up with other tricks because of the changes that have happened.

Rebecca: Looks like we’re all looking in the same bag of tricks, right? [LAUGHTER]

Angela: Well, it makes total sense.

John: Yeah, the asynchronous nature of podcasts makes it a whole lot easier for people to fit them into their schedules.

Angela: Yes, and it’s easier then to sort of pick and choose if there’s one podcast that’s, eh, not so much of use or of interest to me. I guess it’s probably more socially acceptable to skip listening to a podcast than to just skip coming to a journal club and feel the guilt associated with that.

Rebecca: You’re probably right. [LAUGHTER] What resources might you suggest for other faculty who want to work on reducing the performance gap in their classes?

Angela: It’s a good question. Just keeping up with the teaching and learning literature in the sciences. Life Sciences Education is a great resource that publishes a lot of evidence-based work with respect to inclusivity, closing the performance gap, etc. I recommend Carol Dweck’s book on growth mindset to everybody, and also just going back and reading the classic literature by Claude Steele, the Stanford psychologist who did the original work on stereotype threat. Go back to those classic studies and look at how profound the impact of stereotype threat can be if you activate the threat of a stereotype in a student; maybe it’s pointing out that women typically don’t perform as well as men in a math class. If an instructor just makes a statement like that before handing out an exam, it can have a profound impact on the performance of students. Just going back to read that classic literature about stereotype threat published by Claude Steele and colleagues, and really thinking about how things we do in the classroom, totally unintentionally, can have such a profound impact, and how we really need to think about our messaging, and what we tell students, and the importance of building trust, and telling them that we believe in them, and also telling them you gotta work hard too, you know? And here’s how we’re gonna do that, here’s how we’re gonna progress this semester together, and here are the resources we can provide to you. Those are all wonderful resources, I think, for informing our teaching and really keeping at the forefront of our minds how incredibly important it is to address those emotional components of learning.

Rebecca: What are some of the strategies that you’ve used personally in your own classes in response to the work that you’ve done to improve inclusivity?

Angela: So I make sure, always, to be sending growth mindset messages to my students. If they don’t do well on a lab practical or don’t do well on an exam, I make sure to let them know that I believe in them and suggest alternative strategies or approaches that can help them to be successful. I work really hard, just generally, to foster a strong sense of community in my classroom, to invite students to come to my office hours even when there’s nothing wrong. Just come and hang out with me and let me know how your life is going. Fostering a sense of community is really important to help students feel like they belong. I try really hard to include in my classroom examples and discussions that are really culturally inclusive and that would reach a variety of learners. Right off the top of my head, those are things that I try to do on a regular basis. I try really hard to bring in speakers from underrepresented groups, scientific speakers from underrepresented groups, because often, still, it’s a national phenomenon: we still see underrepresentation of certain ethnic groups within the sciences for sure. It’s less the case, at least at undergraduate institutions, that women are underrepresented. We’re seeing more and more women now in STEM disciplines, especially teaching at undergraduate institutions, but still, with respect to scientists of color, it’s very hard for our underrepresented minority students to find mentors who’ve had the same experiences that they’ve had. So while we work hard at High Point University to recruit faculty from diverse backgrounds, and we’ve had some success in that regard, we could do better, but we’ve had some success. We also try really hard to bring to campus scientists from underrepresented groups to meet with our students and to serve as role models for them. That was very much the mode of operation for me in Wisconsin.
I got some grant funding to make that happen, and my students, who were able to interact with those scientists and be mentored by them, still speak years later about what a significant impact that was on them in their lives. So one example was Tyrone Hayes, a very famous researcher at Berkeley, who is the guy that discovered how the pesticide atrazine is causing malformations and sex reversal in frogs and so forth. He is an African-American scientist whom we had come and meet with our students in Wisconsin, to talk about his research, of course, but also to just spend a lot of time during his visit hanging out with students and sharing his experiences, and that really had a profound impact on them that they still talk about to this day.

Rebecca: Sometimes it’s things that seem so little to us that are so big to our students.

Angela: Absolutely.

Rebecca: You’ve had a lot of interesting and exciting things going on, but we always ask: what’s coming next?

Angela: What’s coming next? [LAUGHTER] I have a lot of data that I’m analyzing right now about the impact of growth mindset strategies on men versus women. I want to see if there’s a difference in responsiveness to the messaging in our male versus female students, so there’s that. We’ve also implemented some growth mindset messaging in our introductory chemistry courses, and we’re looking to see if it has the same impact in the chemistry courses as well, which is enough for now, truthfully.

Rebecca: It seems like plenty.

Angela: Beyond that, you know, we have now adopted growth mindset approaches as our campus Quality Enhancement Plan. When it came time to choose a new QEP topic, I made a pitch to our faculty that growth mindset strategies would really be an effective approach for addressing some of the frustrating problems that faculty and staff had identified in their work with students, problems they would like to see our institution adopt some approach to address: students shying away from challenges, or not really digging in when they were bumping up against obstacles in the classroom. Growth mindset strategies are incredibly effective for changing the way that students approach problem solving and the way that they feel about failure or challenges. They really help students to be more resilient. The messaging itself is incredibly effective. We knew that was the case in the K-12 setting based on the work of Carol Dweck and colleagues, but now we and others are finding that it has a profound effect in higher ed classrooms as well. So growth mindset strategies and messaging, with their impact on a variety of student learning outcomes, are now being employed pretty much universally on the High Point University campus, in a variety of different disciplines and even within student life.

John: Well thank you, you’ve offered some wonderful suggestions that I think would be useful not just in the STEM fields, but in all of our disciplines.

Rebecca: Definitely. Thank you so much for your time and great research that you’re doing. I look forward to hearing what else you find.

Angela: Thank you so much for having me Rebecca and John.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts, and other materials on teaforteaching.com. Theme music by Michael Gary Brewer.

37. Evidence is Trending

Faculty are increasingly looking to research on teaching and learning to make informed decisions about their practice as a teacher and the policies their institutions put into place. In today’s episode, Michelle Miller joins us to discuss recent research that will likely shape the future of higher education.

Michelle is Director of the First-Year Learning Initiative, Professor of Psychological Sciences, and President’s Distinguished Teaching Fellow at Northern Arizona University. Dr. Miller’s academic background is in cognitive psychology. Her research interests include memory, attention, and student success in the early college career. She co-created the First-Year Learning Initiative at Northern Arizona University and is active in course redesign, serving as a redesign scholar for the National Center for Academic Transformation. She is the author of Minds Online: Teaching Effectively with Technology and has written about evidence-based pedagogy in scholarly as well as general interest publications.

Show Notes

Transcript

Rebecca: Faculty are increasingly looking to research on teaching and learning to make informed decisions about their practice as a teacher and the policies their institutions put into place. In today’s episode we talk to a cognitive psychologist about recent research that will likely shape the future of higher education.
[Music]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

[Music]

John: Our guest today is Michelle Miller. Michelle is Director of the First-Year Learning Initiative, Professor of Psychological Sciences, and President’s Distinguished Teaching Fellow at Northern Arizona University. Dr. Miller’s academic background is in cognitive psychology. Her research interests include memory, attention, and student success in the early college career. She co-created the First-Year Learning Initiative at Northern Arizona University and is active in course redesign, serving as a redesign scholar for the National Center for Academic Transformation. She is the author of Minds Online: Teaching Effectively with Technology and has written about evidence-based pedagogy in scholarly as well as general interest publications.
Welcome, Michelle!

Michelle: Hi, I’m so glad to be here.

Rebecca: Thanks for joining us.
Today’s teas are:

Michelle: I’m drinking a fresh peppermint infused tea, and it’s my favorite afternoon pick-me-up.

Rebecca: …and it looks like it’s in a really wonderfully designed teapot.

Michelle: Well, thank you… and this is a thrift store find… one of my favorite things to do. Yeah, so I’m enjoying it.

John: I have Twinings Blackcurrant Breeze.

Rebecca: …and I’m drinking chai today.

Michelle: Pretty rough.

John: We invited you here to talk a little bit about what you’ve been observing in terms of new and interesting innovations that are catching on in higher education.

Michelle: Right, that’s one of the things that I really had the luxury of being able to step back and look at over this last semester and this last spring when I was on sabbatical… One of the really neat things about my book Minds Online, especially now that it’s been out for a few years, is that it does open up all these opportunities to speak with really engaged faculty and others, such as instructional designers, librarians, academic leadership, educational technology coordinators… all these individuals around the country who are really, really involved in these issues. It’s a great opportunity to see how these trends, these ideas, these innovations are rolling out… some things that have been around for quite some time and just continue to rock along and even pick up steam, and some newer things that are on the horizon.

John: You’ve been doing quite a bit of traveling. You just got back from China recently, I believe.

Michelle: I sure did. It was a short visit and I do hope to go back, both to keep getting involved in educational innovations there and, hopefully, as a tourist as well. I was not there for very long, but I had the opportunity to speak at Tsinghua University in Beijing, which is a really dynamic institution that’s been around for about a hundred years. For a while in its history it specialized in areas like polytechnic and engineering education, but now it’s really a selective, comprehensive university with very vibrant graduate and undergraduate programs that are very relatable for those of us in the United States working in similar contexts. My invitation was to be one of the featured speakers at the Future Education, Future Learning Conference, which was a very interdisciplinary gathering of doctoral students, faculty, and even others from the community, who were all interested in the intersection of things like technology, online learning, even MOOCs, and educational research (including research into the brain and cognitive psychology), and in bringing all of those together… and it was a multilingual conference. I do not speak Chinese, but much of the conference was in both English and Chinese, so I was able to really absorb a lot of these new ideas. So yes, that was a real highlight of my sabbatical semester and one that I’m going to be thinking about for quite some time.

I should say that part of what tied in there as well is that Minds Online, I’ve just learned, is going to be translated into Chinese and that’s going to come out in May 2019. So, I also got to meet with some of the people who were involved in the translation… start to put together some promotional materials such as videos and things like that.

Rebecca: Cool.

John: Excellent.

Rebecca: So, you’ve had a good opportunity, as you’ve been traveling, to almost do a scavenger hunt of what faculty are doing with evidence-based practices related to your book. Can you share some of what you’ve found or heard?

Michelle: This theme of evidence-based practice, really tying into the findings that have been coming out of cognitive psychology for quite some time, that really is one of the exciting trends, and one I was really excited to see and hear about from so many different quarters as I visited different institutions… so I would say definitely, this is a trend that is continuing and increasing. There really does continue to be a lot of wonderful interest and wonderful activity around these cognitively informed approaches to teaching, and what I think we could call scientifically based and evidence-based strategies. One form this has taken is Josh Eyler’s new book, called How Humans Learn: The Science and Stories behind Effective College Teaching. This is a brand new book by a faculty development professional, a person coming out of the humanities, actually, who’s weaving together everything from evolutionary biology to classical research in early childhood education to the latest brain-based research. He’s weaving this together into this new book for faculty. So, that’s one of the things that I’ve noticed, and then there’s another great illustration of best-known practice, which is the testing effect and retrieval practice.

John: One of the nice things is how so many branches of research are converging… testing in the classroom, brain-based research, and so forth, are all finding those same basic effects. It’s nice to see such robust results, which we don’t always see in all research in all disciplines.

Rebecca: …and just breaking down the silos in general. The things are all related and finding out what those relationships are… exploring those relationships… is really important and it’s nice to see that it’s starting to open up.

John: We should also note that when you visited here, we had a reading group and we had faculty working on trying to apply some of these concepts, and they’re still doing that… and they still keep making references back to your visit. So, it’s had quite a big impact on our campus.

Michelle: This wasn’t true, I don’t think, when I first entered the teaching profession… or even when I first started getting interested in applied work in course redesign and in faculty professional development. You would get this kind of pushback or just strange looks when you said “Oh, how about we bring in something from cognitive psychology?” and now that is just highly normalized, something that people are really speaking about across the curriculum… and taking and running with in a lasting, ongoing way. Not just “Oh, well, that was an interesting idea. I’m going to keep doing what I’m doing,” but people really making some deep changes, as you mentioned. This theme of breaking down silos… I think if there’s one umbrella trend that all of these things fit under, it’s that breakdown of boundaries. So, that’s one that I keep coming back to, I know, in my work.

So, the idea of retrieval practice, drilling down on that one key finding, which goes back a very long way in cognitive psychology. I think of that as such a good example of what we’re talking about here… how this is a very detailed effect in cognition, and yet it has these applications across disciplinary silos. Now when I go to conferences and I say “Okay, raise your hand. How many people have ever heard of retrieval practice? How many people have ever heard of the testing effect? How many people have heard of the book Make it Stick (which really places this phenomenon at its center)?” I’m seeing more hands raised.

With retrieval practice, by the way, we’re talking about the principle that taking a test on something, retrieving something from memory actively, has a huge impact on the future memorability of that information. As its proponents like to say, tests are not neutral from a memory or learning standpoint… and while some of the research has focused on very stripped-down, laboratory-style tasks like memorizing word pairs, other research projects show that the effect does carry over to more realistic learning situations.

So, more people simply know about this, and that’s really the first hurdle, oftentimes, with getting this involved, sometimes jargon-riddled disciplinary research out there to practitioners and into their hands. So, people have heard of it and they’re starting to build it into their teaching. As I’ve traveled around, I love to hear some of the specific examples, and to see it crop up in the scholarship of teaching and learning as well.

Just recently, for example, I ran across and really got into the work of Bruce Kirchhoff, who is at the University of North Carolina at Greensboro, and whose area is botany and plant identification. He has actually put together some really technology-based apps and tools that students and teachers can use in something like a botany course to rehearse and review plant identification. He says in one of his articles, for example, that there just isn’t time in class to really adequately master plant identification; it’s too complex a perceptual, cognitive, and memory task for that. So, he built in from the get-go very specific principles drawn from cognitive psychology… the testing effect is in there… there are different varieties of quizzing, and it’s all about getting students to retrieve and identify example after example. It also brings in principles such as interleaving, which we could return to in a little bit, but which has to do with the sequencing of different examples… their spacing… So, it’s planned out exactly how and when students encounter the different things that they’re studying. It’s really wonderful. For example, he and his colleagues put out a scholarship of teaching and learning article about how this approach was used effectively with veterinary medicine students, who have to learn to identify the poisonous plants that they’ll see around their practice. This is something that can be time-consuming and very tough, but they have some good data showing that this technology-enhanced, cognitively based approach really does work. That’s one example.
Coincidentally, I’ve seen some other work in the literature, also on plant identification, where the instructors went around an arboretum and tagged plants with QR codes… so students can walk up to a plant in the real environment with an iPad, hold the iPad over it, and it will immediately start producing quiz questions specific to exactly the plant they’re looking at.
So, those are some of the exciting things that people are taking and running with now that this principle is out there.

Rebecca: What I really love about the two stories that you just shared is that the faculty are really designing their curriculum and their learning experiences with the students in mind… what students need and when they need it. So, not only is it employing these cognitive science principles, it’s applying design principles as well. It’s really designing for a user experience and thinking about the idea that, if I need to identify a plant, being able to practice in the situation in which I would need to identify it makes it much more dynamic, I think, for a student… but it also really meets them where they’re at and where they need it.

John: …and there are so many apps out there now that will do plant identification just from imagery, without the QR code, that I can see it being taken one step further, so students can do it in the wild… building it in for plants that are in the region without needing to encode each one specifically for the application.

Michelle: I think you’re absolutely right. Once we put the technology in the hands of faculty who, as I said, are the ones who know “Where are my students at? Where are the weak points? Where are the gaps that they really need to bridge?”, that’s where their creativity is giving rise to all these new applications… and sometimes these can be low-tech as well… things that we can put in a face-to-face environment… and I’d like to share some experiences that I’ve had with this over the last few semesters.

In addition to teaching online with a lot of technology, I also have in my teaching rotation a small required course in research methods in psychology, which can be a real stumbling block… the big challenge course… kind of a gateway course to continued progress in our major. In this research methods course, I’ve done some things around assessment and testing to really try, again, to stretch that retrieval practice idea… to make assessments a more dynamic and central part of the course… to move away from the idea that tests are just this occasional panic-mode opportunity for me to measure and sort students and judge them… to make good on the idea that tests are part of learning. So, here are some of the things that I try to do. For one thing, I took time out of class at almost every single class meeting, as part of the routine, to have students generate quiz questions out of their textbook. We do have a certain amount of foundational material in that course, as well as a project and a whole lot of other things going on, so they need to get that foundational material.

Every Tuesday they would come in and they knew the routine: you get index cards, you crack your textbook, and you generate three quiz questions for me. Everybody does it. I’m not policing whether you read the chapter or not. It’s active… they’re generating it… and it also gives us something like frequent quizzing. That’s a great practical advantage for me, since I’m not writing everything. They would turn those in, I would select some of my favorites, turn those into a traditional-looking paper quiz, and hand that out on Thursday. I’d say “Hey, take this like a realistic quiz.” I had explained to them that quizzes can really boost their learning, so that was the justification for spending time on it, and then I said: “You know what? I’m not going to grade it either. You take it home, because this is a learning experience for you. It’s a learning activity.” So we did that every single week, and those students got into that routine.

The second thing that I did to really re-envision how assessment, testing, and quizzing worked in this particular course was inspired by different kinds of group testing and exam wrapper activities I’ve seen, particularly coming out of the STEM fields, where there’s been a lot of innovation in this area. We had high-stakes exams at a few points during the semester, but on the class day after the exam we didn’t do the traditional “Let’s go over the exam.” [LAUGHTER] That’s kind of deadly dull, it just tends to generate a lot of pushback from students… and as we know from the research, simply reviewing… passing your eyes over the information… is not going to do much to advance your learning. So, what I would do is photocopy all those exams, so I had a secure copy. They were not graded; I would not look at them before we did this… and I would pass everybody’s exams back to them, along with a blank copy of that same exam. I assigned them to small groups and I said “Okay, here’s your job. Go back over this exam and fill it out as perfectly as you can as a group,” and to make it interesting I said I would grade that group exam as well, and anything they got over 90% would be added to everybody’s grade. This time it was open book, open Google, everything except asking me questions. So, they have each other, and that’s where these great conversations started to happen… the things that we always want students to say. I would eavesdrop and hear students say “Oh, well, you know what, I think on this question she was really talking about validity, because reliability is this other thing…” and they’d have a deep conversation about it. I’m still going back through the numbers to see what the impacts on learning are… are there any trends that I can identify?
But, I will say this: in the semesters that I did this, I didn’t have a single question ever come back to me along the lines of “Well, this question was unclear. I didn’t understand it. I think I was graded unfairly.” It really did shut all that down, and again extended the learning that I feel students got out of the exam. Now, it meant a big sacrifice of class time, but I feel strongly enough about these principles that I’m always going to do this in one form or another, anytime I can, in face-to-face classes.

Rebecca: This sounds really familiar, John.

John: I’ve just done the same, or something remarkably similar, this semester in my econometrics class, which is very similar to the psych research methods class. I actually picked it up following a discussion with Doug McKee, who was doing it this semester too; he had a podcast episode on it. It sounded so exciting that I tried it, with something a little bit different. I actually graded the exam, but I didn’t give it back to them because I wanted to see what they had the most trouble with, and then I was going to have them answer in a group only the questions they struggled with… and it turned out that that was pretty much all of them anyway. So, it’s very similar to what you did, except I gave them a weighted average of their original grade and the group grade. All except one person improved, and that one person’s score went down by two points because the group grade was just slightly lower… but he did extremely well and he wasn’t that confident. The benefits to them of that peer explanation were just tremendous, and it was so much more fun for them and for me. As you said, it just completely wiped out all those comments like “Well, that was tricky,” because when they heard their peers explaining it, the students were much more likely to respond by saying “Oh yeah, I remember that now.” It was a wonderful experience and I’m going to do that everywhere I can.

In fact, I was talking about it with my TA just this morning here at Duke, and we’re planning to do something like that in our classes here at TIP this summer, which I think is somewhat familiar to you from earlier in your academic career.

Michelle: That’s right, we do have this connection. I was among, not the very first year, but I believe the second cohort of Talent Identification Program students, who came in during what I guess you would now call middle school (back then, it was called junior high)… and what a life-transforming experience. We’ve had even more opportunities to talk about the development of all these educational ideas through that experience.

John: That two-stage exam is wonderful, and it’s so much more positive… it didn’t really take much more time in my class, because I would have spent most of that class period going over the exam and the problems they had anyway. But the students who did well would have been bored and not paying much attention, and the students who did poorly would just be depressed and upset that they did so poorly… whereas here, they were actively processing the information, and it was so positive.

Michelle: That’s a big shift. We really have to step back and acknowledge, I think, that it is a huge shift in how we look at assessment and how we think about the use of class time… it’s not just “Oh my gosh, I have to use every minute to put content in front of the students.” Just the fact that more of us are making that leap, I think, really is evidence that this progress is happening… and we also see a lot of raised consciousness around issues such as learning styles. That’s another one where, when I went out and spoke to faculty audiences 10 years ago, you would get shocked looks or even very indignant commentary when you said “Okay, this idea of learning styles, in the sense that there are visual learners and auditory learners, what I call sensory learning styles (VAK is another name it sometimes goes by)… that idea just holds no water from a cognitive point of view.” People were not good with that, and now when I mention it at a conference, I get knowing nods and even a few groans… people are like “Oh, yeah, we get that.” Now, K-12, which I want to acknowledge is not my area… I’m constantly reminded by people across the spectrum that it’s a very different story in K-12. So, setting that aside… this is what I’m seeing: faculty are realizing it… they’re saying “Oh, this is what the evidence says…” and maybe they even take the time to look at some of the really great thinkers and writers who have put together the facts on this. They say “You know what? I’m not going to take my limited time and resources and spend them on this matching to styles, when the styles can’t even be accurately diagnosed and are of no use in a learning situation.” So, that’s another area of real progress.

Rebecca: What I’m hearing is not just progress in terms of cognitive science, but a real shift towards thinking about how students learn and designing for that, rather than something that sounds more like a penalty for a grade, like “Oh, did you achieve? Yes or no…” Instead, here’s an opportunity, if you didn’t achieve, to now actually learn it… and to recognize that you haven’t learned it, even though it might seem really familiar.

John: Going back to that point about learning styles: that awareness is spreading in colleges. I wish it were true in all the departments at our institution, but it’s getting there gradually… and whenever people bring it up, we generally remind them that there’s a whole body of research on this, and I’ll send them references. What’s really troubling is that in my classes the last couple of years, I’ve been using this metacognitive cafe discussion forum to focus on student learning… and one of the weekly discussions is on learning styles. Generally, about 95 percent of the students, who are typically freshmen or sophomores, come in with a strong belief in learning styles… they’ve been tested multiple times in elementary or middle school… they’ve been told what their learning styles are… they’ve been told they can only learn that way… It discourages them from trying to learn in other ways and it does a lot of damage… and I hope we eventually reach out further so that it just goes away throughout the educational system.

Rebecca: You’ve worked in your classes, Michelle, haven’t you, to help students understand the science of learning, and used that to help them understand the methods that you’re using?

Michelle: Yes, I have. I’ve done this in a couple of different ways. Partly, I get a little bit of a free pass in some of my teaching, because when I’m teaching introduction to psychology or research methods I can just sneak in, as the research example, some work on, say, attention or distraction or the testing effect. So, I get to do it covertly in those ways. I’ve also had the chance, although it’s not on my current teaching rotation, to take it on in freestanding courses. As many institutions are doing these days… it’s another trend… Northern Arizona University, where I work, has different kinds of course offerings for freshmen or first-year students, not in a specific disciplinary area, but really crossing different areas of student success or even wellbeing. So, I taught a class for a while called Maximizing Brain Power that covered a lot of these different topics. Not just the very generic study skills tips… “get a good night’s sleep…” that kind of thing… but again, more evidence-based things that we can tell students. And you can really market it… I think that we do sometimes have to play marketer and say “Hey, I’m going to give you some inside information here. This is sort of going to be your secret weapon. So, let me tell you what the research has found.”

So, those are some of the things that I share with students… as well as, when the right moment arises, say after an exam or before their first round of small-stakes assessments where they’re taking a lot of quizzes, really explaining the difference between these and the high-stakes or standardized tests they may have taken in the past. So, I do it on a continuing basis. I try to weave it into the disciplinary content, and I do it in these free-standing ways as well… and here’s another area where I’m seeing this take hold in different places: having free-standing resources that live outside of a traditional class, that people can incorporate into their courses even if, say, cognitive psychology or learning science isn’t their area… because faculty really do care about these things. We just don’t always have the means to bring them in in as many ways as we would like.

John: …and your Attention Matters project was an example of that wasn’t it? Could you tell us a little bit about that?

Michelle: Oh, I’d love to… and you know, this connects to what seems to be kind of an evergreen topic in the teaching and learning community these days, which is the problem of distracted students… and I know this past year there have just been these dueling op-eds, one after another. There have been some really good blog posts by people I really like to follow in the teaching and learning community, such as Kevin Gannon, asking “Okay, do you have laptops in the classroom? And what happens when you do?” So I don’t think that this is just a fad that’s going away. This is something that people do continue to care about, and this is where the Attention Matters project comes in.

This was something that we conceptualized and put together a couple of years ago at Northern Arizona University. Primarily, I collaborated with a wonderful instructional designer who also teaches a great deal… John Doherty. How this came about is that I was seeing all the information on distraction… really getting into this as a cognitive psychologist and going “Wow, students need to know that if they’re texting five friends and watching a video in their class, it’s not going to happen for them.” I was really concerned about what I could actually do to change students’ minds. My way of doing this was to go around giving guest presentations in any classes where people would let me burn an hour of their class time… not a very scalable model… and John Doherty respectfully sat through one of my presentations on this, and then he approached me and said “Look, you know, we could make a module and put this online… it could be an open-access module within the institution, so that anybody at our school can just click in and they’re signed up. We could put this together, use some really great instructional design principles, and just see what happens… and I bet more people would take it if it were done in that format.” We did this with no resources… no grant backing or anything. We were just passionate about the project, and we got behind it. So, what this is is a one- to two-hour module… it’s a lot like a MOOC in that there’s not a whole lot of interaction or feedback, but there are discussion forums and it’s very self-paced in that way… a one- to two-hour mini MOOC that puts demonstrations and activities at the forefront. We don’t try to convince students about the problems with distraction and multitasking by just laying a bunch of research articles on them… I think that would be great if this were a psychology course, but it’s not.
So, we come at it by linking them out to videos, for example, that we chose because we feel they really demonstrate, in some memorable ways, what gets by us when we aren’t paying attention… and we also give students some research-based tips on how to set a behavioral plan and stick to it… because, just like in so many areas of life, knowing that something is bad for you is not enough to really change your behavior and get you to stop doing that thing. So, we have students talking about their own plans and what they do when, say, they’re having a boring moment in class, or they’re really, really tempted to go online while they’re doing homework at home. What kinds of resolutions can they set, or what kinds of conditions can they create, that will help them accomplish that? Things like software blockers… you set a timer on your computer and it can lock you out of problematic sites… or we learned about a great app called Pocket Points, where you actually earn spendable coupon points for keeping your phone off during certain hours. This is students talking to students about things that really concern them, and really concern us all, because I think a lot of us struggle with that.

So, we try to do that… and the bigger frame for this is that it’s, I feel, a life skill for the 21st century… thinking about how technology is going to be an asset to you rather than detract from what you accomplish in your life. What a great time to be reflecting on that, early in your college career. So, that’s what we try to do with the project… and we’ve had over a thousand students come through. They oftentimes earn extra credit. Our faculty are great about offering small amounts of extra credit for completing this, and we’re just starting to roll out some research showing some of the impacts… and showing, in a bigger way, how you can go about setting up something like this.

Rebecca: I like that the focus seems to be on helping students with a life skill rather than using technology as just something to blame or as an excuse. We’re in control of our own behaviors, and taking ownership of our behaviors is important, rather than just blaming an object.

Michelle: So, looking at future trends, I would like to see more faculty looking at it in the way that you just described, Rebecca: this is a life skill, and it’s something that we collaborate on with our students… not lay down the law… because, after all, students are in online environments where we’re not there policing them, and they do need to go out into work environments and further study and things like that. So, that’s what I feel is the best value. For faculty who are looking at this, if they don’t want to… or don’t have the means to… do something really formal like our Attention Matters approach, just thinking about it ahead of time helps. I think nobody can afford to ignore this issue anymore, and whether you go the route of “No tech in my classroom” or “We’re going to use the technology in my classroom” or something in between… just reading over, in a very mindful way, not just the opinion pieces, but hopefully also a bit of the research, I think, can help faculty as they go in to deal with this… and, really, to look at it another way, just to be honest, we also have to consider how much of this is driven by our egos as teachers and how much of it is driven by a real concern for student learning and those student life skills. I think that’s where we can really take this on effectively and make some progress: when we de-emphasize that ego aspect and make sure that it really is about the students.

John: We should note there’s a really nice chapter in your book, Minds Online: Teaching Effectively with Technology, that deals with these types of issues. It was one of the chapters that got our faculty particularly interested in these issues… as to what extent technology should be used in the classroom… and to what extent it serves as a distraction.

Michelle: I think that really speaks to another thing which I think is an enduring trend… the emphasis on really supporting the whole student in success and what we’ve come to call academic persistence… kind of a big umbrella term that has to do with not just succeeding in a given class, but also being retained… coming back after the first year. As many leaders in higher education point out, this is also a financial issue. As someone pointed out, it costs a lot less to hang on to the students you have than to recruit new students to replace the ones who are lost. This is, of course, yet another really big shift in mindset of our own, because, after all, we did used to measure our success by “Hey, I flunked this many students out of this course” or “Look at how many people have to switch into different majors… our major is so challenging…”

So, we really have turned that thinking around, and this now includes faculty. I think that we did used to see those silos. We had that very narrow view of “I’m here to convey content. I’m here to be an expert in this discipline, and that’s what I’m gonna do…” But, sure, we want to think about things like: Do students have learning skills? Do they have metacognition? Are they happy and socially connected at the school? Are they likely to be retained, so that we can have this robust university environment?

We had people for that, right? It used to be somebody else’s job… student services or upper administration. They were the ones who dealt with that, and now I think on both sides we really are changing our vision. More and more forward-thinking faculty are saying, “You know what? Besides being a disciplinary expert, I want to become at least conversant with learning science. I want to become at least conversant with the science of academic persistence…” There is a robust early literature on this, and that’s something that we’ve been working on at NAU over this past year as well… kind of an exciting newer project that I like very much. We’ve started to engage faculty in a new faculty development program called Persistence Scholars, and this is there to really speak to people’s academic and evidence-based side, as well as to get them to engage in some perspective-taking around things like the challenges that students face and what it is like to be a student at our institution. We do some carefully selected readings in the area; we look at things like mindset… belongingness… these are really hot areas in that science of persistence… in that emerging field. But, we have to look at it in a really integrated way.

It’s easy for people to say, “Just go to a workshop on mindset,” and that’s a nice concept, but we wanted to think about it in this bigger picture… to really know: What are some of the strengths of that approach, and why? Where do these concepts come from? What’s the evidence? That’s something that I think is another real trend, and I think, as well, we will see more academic leaders and people in staff and support roles all over universities needing to know more about learning science. There are still some misconceptions that persist, as we’ve talked about. We’re making progress in getting rid of some of these myths around learning, but I will say… I’m not gonna name any names… every now and again I will hear from somebody who says, “Oh well, we need to match student learning styles” or “Digital natives think differently, don’t you know?” and I have to wonder about that. I mean, these are oftentimes individuals who have the power to set the agenda for learning all over a campus. Faculty need to be in the retention arena, and I think that leaders need to be in the learning science arena. The boundaries are breaking down, and it’s about time.

Rebecca: One of the things that I thought was really exciting with the reading groups that we’ve been having on our campus… we started with your book, but we’ve since read Make it Stick and Small Teaching… is that a lot of administrators in a lot of different kinds of roles engaged with us in those reading groups; it wasn’t just faculty. There was a mix of faculty, staff, and some administrators, and I think that was really exciting. For people who don’t have the luxury of being in your Persistence Scholars program, what would you recommend they read to get started learning more about the science of persistence?

Michelle: Even after working with this for quite some time, I love the core text that we have in that program, which is Completing College by Vincent Tinto. It’s got a great combination of a passionate and very direct writing style… so there’s no ambiguity, there’s not a whole lot of “on the one hand this and on the other hand that.” It’s got an absolutely stellar research base, which faculty of course appreciate… and it has a great many concrete examples. So, in that book, they talk about, “Okay, what does it mean to give really good support to first-semester college students? What does that look like?” and they’ll cite very specific cases: “Here’s a school and here’s what they’re doing… here’s what their program looks like… here’s another example that looks very different but gets at the same thing.” So, that’s one of the things that really spoke to our faculty… that they really appreciated and enjoyed.

I think, as well, we’ve gotten good feedback about work that’s come out of David Yeager and his research group on belongingness and lay theories. “Lay theories” is maybe a counterintuitive term for a body of ideas about what students believe about academic success: why some people are successful and others are not, and how those beliefs can sometimes be changed through relatively simple interventions. When that happens, we see great effects, such as the narrowing of achievement gaps between students from more privileged and less privileged backgrounds… and that’s something that, philosophically, many faculty really, really care about, but they’ve never had the chance to learn, “Okay, how can I actually address something like that with what I’m doing in my classroom, and how can I really know that the things that I’m choosing have that great evidence base?”

John: …and I think that whole issue is more important now, and is very much a social justice issue, because, with the rate of increase we’ve seen in college costs, people who start college and don’t finish are saddled with an awfully high burden of debt. The rate of return on a college degree is the highest that we’ve ever seen, and college graduates end up not only getting paid a lot more, but they end up with more comfortable jobs and so forth… and if we really want to move people out of poverty and try to reduce income inequality, getting more people into higher education, and successfully completing higher education, is a really important issue. I’m glad to see that your institution is pursuing this so heavily, and I know a lot of SUNY schools have been hiring student success specialists. At our institution, they’ve been very actively involved in the reading group, so that message is spreading; I think some of them started with your book and then moved on to each of the others. So, they are working with students, trying to help the students who are struggling the most, using evidence-based practices… and I think that’s becoming more and more common, and it’s a wonderful thing.

Rebecca: So, Michelle, I really liked that you were talking about faculty getting involved in retention and this idea of helping students develop persistence skills, and also administrators learning more about evidence-based practices. There are grassroots movements happening in both of these areas. Can you talk about some of the other grassroots movements or efforts that faculty are making to engage students and capture their attention and their excitement for education?

Michelle: Right, and here I think a neat thing to think about is that it’s the big, ambitious projects… the big textbook-replacement projects or the artificial-intelligence-informed adaptive learning systems… that get a lot of the press and end up in The Chronicle of Higher Education that we read about. But, outside of that, there is this very vibrant, grassroots-led scene of developing different technologies and approaches, and it goes back a while. I mean, the MERLOT database that I talk about in Minds Online has been a trove of well-hidden gems for years… things that take on one topic in a discipline and come at it in a way that’s not just great from a subject-matter perspective but brings in new, creative approaches. In the MERLOT database, for example, there’s a great tutorial on statistical significance and the interrelationship between statistical significance and issues like sample sizes. You know, that’s a tough one for students, but it has a little animation involving a horse and a rider that really turns it into something that’s very visual… very tangible… and it’s really tying into analogy, which is a well-known cognitive process that can support learning something new. There is something on fluid pressures in the body that was created for nursing students by nurses, and it’s got an analogy of a soaker hose that is really fun and is actually interactive. So, those are the kinds of things. The PhET project, P-h-E-T, which comes out of the University of Colorado, has been around for a while… again, faculty-led, and a way to have these very useful interactive simulations for concepts in physics and chemistry. So, that’s one. CogLab is an auxiliary product that I’ve used for some time in 100-level psychology courses; it simulates very famous experimental paradigms, which are notoriously difficult to describe on stage, for cognitive psychology students.
That started out many years ago as a project that very much has this flavor of “We have this need in our classroom. We need something interactive. There’s nothing out there. Let’s see what we can build.” It has since been picked up and turned into a commercial product, but that’s the type of thing that I’m seeing out there.
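
The sample-size point Michelle raises about the MERLOT tutorial can be illustrated with a tiny simulation. This is a hypothetical sketch (not code from the tutorial itself, which uses an animation): it generates two groups with the same small true difference in means and shows that the test statistic used to judge significance grows with the sample size, using only the Python standard library.

```python
# Hypothetical sketch of the statistical-significance idea:
# the same true difference in means is far easier to detect
# (larger t statistic) with a larger sample.
import math
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

def t_statistic(group_a, group_b):
    """Welch-style t statistic for two independent samples."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    se = math.sqrt(va / na + vb / nb)  # standard error shrinks as n grows
    return (statistics.mean(group_a) - statistics.mean(group_b)) / se

# Same true effect (difference in means = 0.3), two sample sizes.
for n in (10, 1000):
    a = [random.gauss(0.3, 1.0) for _ in range(n)]
    b = [random.gauss(0.0, 1.0) for _ in range(n)]
    print(f"n = {n:4d}  t = {t_statistic(a, b):5.2f}")
```

With a handful of observations per group, a small real effect is usually lost in the noise; with a thousand, the same effect comfortably clears the conventional cutoff, which is exactly the interrelationship the tutorial dramatizes.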

Another thing that you’ll definitely hear about, if you’re circulating and hearing about the latest projects, is virtual reality for education. With this, it seems like, unlike just a few years ago, almost everywhere you visit you’re going to hear, “Oh, we’ve just set up a facility. We’re trying out some new things.” This is something that I also heard about when I was talking to people in China. So, this is an international phenomenon. It’s going to pick up steam and definitely go some places.

What also strikes me about that is just how many different projects there are. Just when you’re worried that you’re going to be scooped because somebody else is going to get there first with their virtual reality project, you realize you’re doing very, very different things. So, I’ve seen it used, for example, in a medical application to increase empathy among medical students… and I took in a six- or seven-minute demonstration that was really heart-rending, simulating the patient experience with a particular set of sensory disorders… and at Northern Arizona University we have a lab that is just going full steam in coming up with educational applications, such as an interactive organic chemistry tutorial that is just fascinating. We actually completed a pilot project and are planning to gear up a much larger study next semester looking at the impacts of this. So, this is really taking off for sure.

But, I think there are some caveats here. We still really need some basic research on this… not just on what we should be setting up and what the impacts are, but on how this even works. In particular, what I would like to research in the future, or at least see some research on, is what kinds of students… what sort of student profile… really get the most out of virtual reality for education. Because, amidst all the very breathless press that’s going on about this now and all the excitement, we do have to remember this is a very, very labor-intensive type of resource to set up. You’re not just going to go home and throw something together for next week. It takes a team to build these things, and to complete them as well. If you have, say, a 300-student chemistry course (which is not atypical at all… these large courses), you’re not going to just have all of them spend hours and hours and hours doing this, even with a fairly large facility. It’s a very hands-on thing to guide them through this process, to provide the tech support, and everything else.

So, I think we really need to know how we can best target our efforts in this area, so that we can build the absolute best with the resources we have, and maybe even target the students who are most likely to benefit. I think those are some of the things that we just need to know about this. It’s exciting for somebody like me who’s in the research area; I see this as a wonderful open opportunity… but those are some of the real crossroads we’re at with virtual reality right now.

Rebecca: I can imagine there’s a big weighing that has to happen in terms of the expense, time, and resources needed to start up versus what it might save in the long run. If it’s a safety issue… say, practicing skills where lives are at stake, and you want to make sure people aren’t in danger as they practice… spending the resources on a virtual reality experience could be a really good investment… or if travel would just be way too expensive to bring a bunch of students to a particular location, but you could go there virtually… it seems like it would be worth the start-up costs. Those are just two ideas off the top of my head where it would make sense to spend all of that resource and time.

John: …and equipment will get cheaper. Right now, computers with sufficient speed and graphics processing capability are really expensive, and the headsets are expensive, but they will come down in price. But, as you said, it’s still typically one person and one device… so it doesn’t scale quite as well as a lot of other tools, at least not at this stage.

Rebecca: From what I remember, Michelle, you wrote a blog post about [a] virtual reality experience that you had. Can you share that experience, and maybe what stuck with you from that experience?

Michelle: Right. So, just as I was getting to collaborate with our incredible team at the immersive virtual reality lab at NAU, one of the things I was treated to was about an hour and a half in the virtual reality setup that they have, to explore some of the things they had. Giovanni Castillo, by the way, is the creative director of the lab, and he’s the one who was so patient with me through all this. We tried a couple of different things, and of course there’s such a huge variety of things that you can do.
There are a few things out there, like driving simulators, that are kind of educational… kind of entertainment… and he was just trying to give me, first of all, a view of those… and I had to reject a few of them, I will say, initially, because I am one of the individuals who tends to be prone to motion sickness. So, that limits what I can personally do in VR, and that is yet another thing that we’re going to have to figure out. At least informally, what we hear is that women in particular tend to experience more of this. So, I needed, first of all, to go to a very low-motion VR. I wasn’t going to be whizzing through these environments. That was not going to happen for me. So, we did something that probably sounds incredibly simplistic, but it just touched me to my core… which is getting to play with Google Earth. You can spin the globe and just pick a place at random, or, as Giovanni told me, “You know, I’ve observed that when people do this, when they have an opportunity to interact with Google Earth, they all either go to where they grew up, or they’ll go to someplace that they have visited recently or plan to visit.” So, I went to a place that is very special to me, and maybe it doesn’t fit into either one of those categories neatly, but it’s my daughter’s university… her school… and I should say that this is also a different thing for me, because my daughter goes to school in Frankfurt, Germany… an institute that is connected to a museum. So, I had only been to part of the physical facility… the museum itself… and it was a long time ago, and part of it was closed for the holiday. So, this was my opportunity to go there and explore what it looks like all over… and that was an emotional experience for me. It was a sensory experience… it was a social one… because we were talking the whole time… and he’s asking me questions: What kinds of exhibits do they have here? What’s this part of it? So, that was wonderful.
It really did give me a feel for, “Alright, what is it actually like to be in this sort of environment?”

I’m not a gamer. I don’t have that same background that many of our students have. So, it got me up to speed on that… and it did show me how just exploring something relatively simple can acquire a whole new dimension in this kind of immersive environment. Now, the postscript that I talked about in that blog post was what happened when I actually visited there earlier in the year. I had this very strange experience that human beings have never had before… I don’t know whether to call it deja vu or what… of going to the setting and walking around the same environment, seeing the same lighting and all that sort of stuff that was there in the virtual reality environment… but this time, of course, with real human beings in it, and the little subtle changes that take place over time, and so forth.

So, how does it translate into learning? What’s it going to do for our students? I just think that time is going to tell. It won’t take too long, but I think that these are things we need to know. But, sometimes just getting in and being able to explore something like this can really put you back in touch with the things you love about educational technology.

Rebecca: I think one of the things that I’m hearing in your voice is the excitement of experimenting and trying something… and that’s, I think, encouragement for faculty in general: to just put yourself out there and try something, even if you don’t have something specific in mind for what you might do with it. Experiencing it might give you some insight later on. It might take some time to have an idea of what you might do with it, but, having that experience, you understand it better… it could be really useful.

John: …and that’s something that can be experienced on a fairly low budget with just your smartphone and a Google Cardboard viewer or something similar. Basically, it’s a seven- to twelve-dollar addition to your phone, and you can have that experience… because there are a lot of 3D videos and 3D images out there on Google Earth as well as on YouTube. So, you can experience other parts of the world and other cultures before visiting… and I could see that being useful in quite a few disciplines.

Rebecca: So, we always wrap up with asking what are you going to do next?

Michelle: I continue to be really excited about getting the word out about cognitive principles and how we can flow those into teaching… face-to-face, with technology… everything in between. So, that’s what I continue to be excited about… leveraging cognitive principles with technology and rethinking our teaching techniques. I’m going to be speaking at the Magna Teaching with Technology Conference in October, so I’m continuing to develop some of these themes… and I’m very excited to be able to do that. Right now, we’re also in the early stages of another really exciting project that has to do with what we call neuromyths. That may be a term that you’ve come across in some of your reading. It’s something that we touched on a few times, I think, in our conversation today… the misconceptions that people have about teaching and learning and how those can potentially impact the choices we make in our teaching. I’ve had the opportunity to collaborate with an amazing international group of researchers headed up by Dr. Kristen Betts of Drexel University… and I won’t say too much more about it, other than that we have a very robust crop of survey responses that have come in from not just instructors, but also instructional designers and administrators from around the world. So, we’re going to be breaking those survey results down and coming up with some results to roll out, probably early in the academic year, and we’ll be speaking about that at the Accelerate conference, most likely in November. That’s put on by the Online Learning Consortium. So, we’re right in the midst of that project, and it’s going to be so interesting to see: What has the progress been? What neuromyths are still out there, and how can they be addressed by different professional development experiences? We’re also continuing to work on the Persistence Scholars Program on academic persistence.
So, we’ll be recruiting another cohort of willing faculty to take that on in the fall at Northern Arizona University. I’m going to continue to collaborate with, and hear from, John and his research group with respect to the metacognitive material that they’re flowing into foundational coursework, and ways to get students up to speed with a lot of critical metacognitive knowledge. So, we’re going to work on that too… and I like to keep up my blog and work on, shall we say, a longer writing project, but we’ll have to stay tuned for that.

Rebecca: Sounds like you need to plan some sleep in there too.

[LAUGHTER]

John: Well, it’s wonderful talking to you, and you’ve given us a lot of great things to reflect on and to share with people.

Rebecca: Yeah. Thank you for being so generous with your time.

John: Thank you.

Michelle: Oh, thank you. Thanks so much. It’s a pleasure, an absolute pleasure. Thank you.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts, and other materials on teaforteaching.com. Theme music by Michael Gary Brewer. Editing assistance from Nicky Radford.

30. Adaptive Learning

Do your students arrive in your classes with diverse educational backgrounds? Does a one-size-fits-all instructional strategy leave some students struggling and others bored? Charles Dziuban joins us in this episode to discuss how adaptive learning systems can help provide all of our students with a personalized educational path that is based on their own individual needs.

Show Notes

In order of appearance:

Transcript

John: Do your students arrive in your classes with diverse educational needs? Does a one-size-fits-all instructional strategy leave some students struggling and others bored? In this episode, we examine how adaptive learning systems can help provide all of your students with a personalized educational path that is based on their own individual needs.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

[MUSIC]

John: Our guest today is Charles Dziuban. Chuck is the Director of the Research Initiative for Teaching Effectiveness at the University of Central Florida, where he has been a faculty member since 1970 teaching research design and statistics. He’s also a founding Director of the University’s Faculty Center for Teaching and Learning. Welcome, Chuck.

Chuck: Oh, thank you both. I’m just so pleased to be here. The good feelings of Oswego continue… you know, my growing up in upstate New York and attending Oswego. I’m really honored to do this… it makes me feel good.

Rebecca: So glad to have you.

John: We’re very glad to have you.

Rebecca: Today our teas are:

Chuck: I have water from the water fountain.

John: My tea is Yorkshire Gold.

Rebecca: I have Lady Grey today.

John: We’ve invited you to join us to talk about adaptive learning. So, what is adaptive learning? Is it something new?

Chuck: Sure, I’d be happy to talk about that. Let me answer that question in reverse order. It’s anything but new. I think adaptive learning has been around for as long as we’ve been thinking about teaching and learning; if we went back to the Middle Ages, we would find this notion of adaptive learning. It is not new. In fact, where I came from in my career… I’m looking at a paper written by John Carroll called “A Model of School Learning,” where John laid out a set of equations that basically answer this kind of proposition: if how much time a student spends learning is the constant, what they learn will be the variable; if what they learn is the constant, how much time they spend in the educational enterprise will be the variable. So, that’s the basic notion. When we define it that way, it kind of makes sense. If we give students a limited amount of time in anything… a course… a university… a semester… whatever you call it… it makes sense that what they acquire in that activity will be the variable. If what they learn is a constant, how long they’re going to spend in this enterprise will be the variable, and we’ve all experienced that: we’ve all butted our heads up against something for years and years and years until we finally understood it… and this is the basic notion of adaptive learning… whatever time it takes, it takes, for you to learn something… and he went further to break that down. When we say time, it’s not just elapsed time; it’s how much time do I actually spend engaged in learning, and what is your perseverance… and what is your aptitude for this… so he kind of modified the model with those. So, that makes eminent sense. That’s what adaptive learning is… basically, it’s going to take whatever time you need as a student to learn this material to some sort of satisfaction level. That’s what it is. It’s a very simple concept.
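
For reference, the Carroll model Chuck describes is commonly summarized by the ratio formulation below. This is the standard statement of Carroll’s 1963 paper, not a quote from the episode:

```latex
% Carroll's (1963) "A Model of School Learning," in its usual summary form:
\[
  \text{degree of learning} \;=\;
  f\!\left(\frac{\text{time actually spent}}{\text{time needed}}\right)
\]
% Time actually spent is bounded by the opportunity to learn and the
% learner's perseverance; time needed depends on aptitude, quality of
% instruction, and the learner's ability to understand instruction.
```

Holding time constant makes the ratio (and hence learning) the variable; holding the learning target constant makes time the variable, exactly the two propositions Chuck lays out.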

John: …because students come in with very different backgrounds. Some have very rich backgrounds in some areas and weaker in others… and that’s going to vary quite a bit depending on what they’ve learned in prior courses or in their life experiences.

Chuck: Oh, absolutely. I mean, I can remember my own experience walking into new courses at Oswego, being virtually lost because I had no background and no experience in the subject, and having to really ramp up my energy, effort, and use of time in order to, first, reach the baseline in the course and then go on to other courses. In adaptive learning, we have students who enter courses who… don’t take this the wrong way… fundamentally don’t need to be there. They’ve mastered the material… they can move on… and I’ll give you examples of that later. So, this gives a great deal of flexibility to the learning enterprise.

Rebecca: From the learning perspective, it sounds really wonderful. From a teaching perspective, it seems like it can be a little bit challenging – especially before some of these technologies were available.

Chuck: A nightmare. Yeah, it is. Of course, the way we’ve conceptualized this whole notion of how students learn… we’ve organized learning around this entity called the class… it’s a unit, and we call it a class… and if you think about it in terms of a course sequence, say in mathematics: students who don’t pass their placement exam take intermediate algebra, then college algebra, then trig, then precalculus. You can conceive of those as discrete units, but, if you think about what we’re doing in our math department, we’re designing that as a series of skills across those things we’ve called classes, and that makes the whole notion of teaching quite different. Really, what you have to do in adaptive learning, when you begin to plan a course, is look at what you have taught, what the granularity of the course is, and your role as an instructor in all of that. It’s very, very different, and it really forces our instructors to think about what it is they’re teaching… it is much more than a syllabus. The granularity of what you teach becomes critically important.

Rebecca: How did UCF get involved in adaptive learning?

Chuck: It’s an interesting question. That was organic. You may or may not know that UCF is the fastest growing university in the country. It is one of the largest universities in the country. The good news is we’re growing; the bad news is we’re growing. [LAUGHTER] We cannot possibly build an infrastructure to house our students. So, besides adaptive learning, we have online courses, we have blended courses, we have lecture capture courses, and adaptive learning is a natural outgrowth of that sort of structure. 43% of our full-time credit hours are produced online, so we’re almost a half-online university… and we became involved in this partly because of one of the things that John said. Our students are very diverse. They come with very different backgrounds, and even putting them in this thing called a course may not make sense for some of them in terms of a pre-assessment of where they are… and the notion of adaptive learning pre-assessing them and then putting them in the proper learning sequence within a set of objectives is why we got involved. What we did is simply recruit our best faculty and say, “Look, see if you can do this. See if you can accommodate who students are, where they are, and how they’re learning, in a much more flexible environment.” It just simply makes sense for students… and given the fact that we have a diverse student body… students coming from underrepresented populations… it just makes sense at so many levels.

John: One case where we’ve done that institutionally here at Oswego is, beginning about three or four years ago, we adopted Aleks for the math requirement. It’s used for initial placement and then it’s also used for students to get up to the level needed to meet basic course requirements. So, students come in and then if they don’t meet the requirements for the courses they’d like to take, they can spend the summer working on that to move up to the level… and then test themselves again to basically get to the level they need to be to make good progress… and it’s been working very well from everything I’ve heard, but we haven’t done much more yet at an institutional level.

Chuck: Yeah, well, as you probably know, with technologies like this there’s probably a new platform coming out every day… I mean that as a metaphor. Our math department uses Aleks, and Aleks is a pre-designed course… there is some flexibility, but it’s a course that’s pre-designed. These platforms come in basically two varieties: ones where courses are embedded in them, like the offerings from McGraw-Hill or Knewton, and content-agnostic ones like Realizeit, where you have to build the course. We have an Aleks course in college algebra, and if you look at it, it kind of looks like chaos. Students are coming in, there are people lecturing at the boards, there are student assistants helping them, they’re on the platform. When they master it, they master it and they leave. So, any concept of a lecture course is completely out the window, but what’s happening is: students are being assessed, they’re being reassigned to different areas within the Aleks platform to master the skills, and then moving on. Some of them can complete a course in half a semester… and I assume you’ve seen this at Oswego, but one of the things we’re facing with adaptive learning is trying to get past the teach-then-test model for a course. We’re trying to embed the assessment within the adaptive learning platform. So, we have a test-free course where assessment essentially becomes part of the learning platform… assessment actually becomes part of the teaching… and it’s really quite exciting, but it is very daunting to do. We’re kind of wedded to tests in our environment.

John: I believe that you’ve been working with Colorado Technical University.

Chuck: Yeah!

John: How did that partnership come about?

Chuck: Organically.

[LAUGHTER]

When we came to realize that adaptive learning might offer some advantages for us, we asked several vendors to come in and make presentations to faculty… and this was basically a faculty decision. They saw several platforms and they said, “Realizeit is the one. This is the one… because we want the flexibility to build our own courses.” But, of course, there’s a Chinese proverb: “Be careful what you ask for….” They got Realizeit… but now they had to build the courses, and that’s daunting. Realizeit’s been very helpful. They have a process called ingestion where they’ll try to take whatever you have in a course… and I’ll say this with love and kindness… all of these platforms have a bit of clunk associated with them. They all work, but they all have some “problems” associated with them… I’ll say challenges, or in Provost speak, opportunities. [LAUGHTER] We worked with them on that… and we began presenting some of this research with our partner (I’ll talk about our research relationship with Realizeit in a little bit)… and Colorado Technical University has been doing something that we’re not able to do yet: scale it. Scaling it is an issue. They’re a private for-profit institution, and they’ve scaled it to a remarkable degree and thrown a lot of resources at it. So, we’re very good at research, they’re very good at scaling, and Realizeit is very good at research. When we saw them presenting and they heard us presenting, it was sort of a speed date.
They said, “Hey, you’re doing pretty cool stuff,” and we said, “Hey, you’re doing pretty cool stuff… do you want to play?” …and that’s how it started. Working with them has led to several publications and several presentations, and with Realizeit looped in, working in the background, we’ve been able to do this kind of thing. We think partnerships are really important in studying this… and of course I’m going to recruit Oswego today. You should be joining us.

John: I’ve been looking at adaptive learning platforms for a few years.

Chuck: Yeah.

John: I’ve worked with some from the publishers but it would really be nice to build something from the ground up. So, we’ll talk about that perhaps a little bit more later.

Chuck: Alright. Well, I will say this. Our philosophy is we give away everything. You can have anything we have. We’ve published several papers. We have several projects underway now with adaptive learning. Any of your audience, and certainly my brothers and sisters at Oswego, can have anything UCF has done. The idea is we need to do it as a partnership. I did a presentation on this… if you go to Google and search “the Grand Cafe”… the Grand Cafe was the first coffee house in England, and when the British discovered coffee houses, ideas really blossomed. The Brits didn’t drink water for a long time because they were afraid of the water, and so for a couple of hundred years they basically started having a beer for breakfast.

John: Right.

Chuck: A couple of beers at mid break, and then a beer and a gin. They spent a couple hundred years drunk.

John: ….and so did the founding fathers.

Chuck: Yeah, absolutely. But when they discovered coffee houses… coffee and tea (which you’re drinking)… they were in this learning space, and when you switch from a depressant to a stimulant there are many more ideas, and that’s the grand Enlightenment… but the point being that it was a partnership. When we talk about these grand ideas, they are not Eureka moments, they are not lightbulb moments; they are people working together creating ideas and, if you will, letting ideas have sex… in the sense that these ideas can grow and be developed, and that’s what happened… and that’s what happened with our partnership. Now we’ve added another partner: Petroleum Geo-Services in Oslo, Norway. It’s an exploration company looking for oil all through the world, and suddenly they have to train these people aboard ship, and they looked at this and said: “Whoa, it costs us a fortune to bring these geophysicists back to Oslo to train them. I wonder if we can do it aboard ship with adaptive learning.” So, we’re looking at that. There are a lot of possibilities, and I’ll say we worried about the students adapting to adaptive learning. No problem. Of course, it’s embedded in our LMS, so they don’t even know they’ve left Canvas. They still think they’re in Canvas.

Rebecca: So, you alluded a little bit to Realizeit’s potential, because you mentioned that it’s one of the few platforms where you can build courses from the ground up. Can you expand a little bit more about its strengths relative to some other platforms and maybe also some of the stumbling blocks associated with building something from the ground up?

Chuck: I’m not sure I know the characteristics of all the platforms. I know that Realizeit is content-agnostic, and I’m sure there are other platforms that allow you to do that… and there are probably other platforms where courses come pre-assembled but have some flexibility for faculty members to take components in and out. I would think that would be a real advantage. It’s sort of a continuum… building a course from the ground up is daunting, and one of the problems is the granularity of the course. Getting the level of granularity right so the course flows evenly… it’s always a rough start, and we’ve provided as much support as we can for faculty.

John: How can faculty get started using adaptive learning? Is this something that’s best done at an individual level, or at a departmental or an institutional level?

Chuck: It’s a big task, and whatever support you can provide… especially instructional design support… in terms of looking at a course, redesigning it, and putting it together, that’s very important… and run-throughs are very important: test beds for courses before you roll them out. In my role, having to evaluate our technology, I’ve taught online, blended, lecture capture, and adaptive, and each time I rolled one out… it’s been terrible. There are a lot of bumps along the road. In our two years working with adaptive learning, they’ve had some issues, and through our work with them we have helped them clean up a lot of issues… and they would be the first to admit that. They help out on the research side of this; they’re very, very strong. They can do research things we can’t, and we can do research things they can’t. It’s the perfect yin and yang… and this is a wonderful kind of relationship to have…. Michael Feldstein and Phil Hill of MindWires and e-Literate have built a project they call the Empirical Educator, where they’re trying to get vendors and academics to work together… to begin to look at this.

John: We will include links to the articles that you’ve mentioned in the show notes.

Chuck: Absolutely. Okay.

John: The question I had about that (in terms of individual development or other forms of development) was basically whether people are developing their own variants of the course, or whether it’s done at the departmental or program level.

Chuck: Both.

John: Okay.

Chuck: How do we begin adaptive learning? One of the things you have to do is demonstrate to faculty that it can be successful. So what we have done is cherry-pick our best faculty: the kind of faculty who, if you show them this platform, would say, “Hey, we’d like to try this.” Our initial pilot study was done in Psychology, College Algebra, and Pathophysiology. All of these faculty are the ones who would say, “I would try to do this anyway.” We built it that way and then began demonstrating it with individual faculty. Now we have a pilot project in our College of Business to make as many courses as we can adaptive. So it’s now up to the college level, and we are making presentations to all of the individual departments saying, “Look, this is the possibility. Is this a strategic initiative for your department? Do you think this would be a value add to your department, versus one-off courses?” For instance, in Psychology, the professor developed the course over three semesters, and faculty really do have the option of how much they lock down the course. You can lock down an adaptive learning course so it looks like a regular course, or you can make it go completely adaptive. It’s pretty scary stuff, and what Jeff did in Psychology on his third go-around was say, “It’s adaptive… go!” That’s all there was to it. Students could go through at any pace they wanted. In one cohort, say 20% finished the course in three weeks, and that sounds like heresy, okay? But they did it and they were finished. The next thing they said was, “Okay, we want Psychology II, we’re ready for it”… and it wasn’t ready. So what do you do with them… your students who have completed a course in three weeks? They’re done, they’re verified, they’re certified that they’ve completed the course, and they have nowhere to go.
So it has great consequences for building this out, and we work very hard at it. For Intermediate Algebra (the students who didn’t place into College Algebra), Tammy Muse, who will be featured on 60 Minutes in a couple of weeks, has allowed them, when they complete it, to go directly into the adaptive learning College Algebra course, and that cohort is now finishing the course on time. It’s really quite amazing. And the objective is: make that cohort larger. It’s just what you were saying about Aleks… (“remediate” is a bad word in Florida)… but let them acquire the skills they don’t have and then go directly to the course they need, and Realizeit will sequence them back as needed. Tammy has done an amazing thing. She has made all of her assessment items reflect the diversity of our campus through the names in the problems, and she made the disciplines in the problems reflect whatever the student’s major is, whether it be Engineering or Physics or whatever. So the problem sets they work all reflect those kinds of things. A lot of work, but it’s working beautifully.
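Tammy’s personalized item banks suggest a simple pattern: parameterized item templates rendered per student. The sketch below is an invented illustration of that idea, not her actual system; the names, context strings, and the fixed answer are all assumptions of the example.

```python
import random

# An invented sketch of templated, personalized assessment items: the
# same algebra skill rendered with varied names and a context matching
# the student's major. Names and context strings are illustrative only.

NAMES = ["Aaliyah", "Diego", "Mei", "Omar", "Priya", "Sasha"]
CONTEXTS = {
    "Engineering": "the load on a support beam",
    "Physics": "the velocity of a particle",
    "Business": "the revenue from a product line",
}

def render_item(major, seed):
    """Render one linear-equation item. By construction x = 5 solves it,
    so every student practices the same skill on different surface values."""
    rng = random.Random(seed)  # seeding per student keeps their item stable
    name = rng.choice(NAMES)
    a, b = rng.randint(2, 9), rng.randint(10, 40)
    return (f"{name} is modeling {CONTEXTS[major]}. "
            f"Solve {a}x + {b} = {a * 5 + b} for x.")
```

Seeding the generator with, say, a student ID gives each student a stable but distinct item, which is one way to get the “they all get different values” behavior Chuck describes while still assessing a single skill.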

Rebecca: Sounds really exciting.

Chuck: Well, on certain days, it is.

Rebecca: [Laughter] Exciting can be both scary and…

Chuck: Oh yeah, yeah yeah, yeah.

Rebecca: We talked a little bit about whether or not the platform allows for interleaved practice. How does it work, what does that look like? Can you describe one of these courses?

Chuck: As best I can.

Rebecca: Yeah.

Chuck: It looks like you’re in an LMS (Learning Management System) and you have exercises. Realizeit has a decision engine built in, and I know a good deal about the decision engine. I don’t know everything about it because these platforms are proprietary; these vendors do not like to give away their trade secrets. But Realizeit is Bayesian-based. It gets a prior knowledge assessment of a student, then based on that prior knowledge it assigns the student to a location in the course and begins to assess them. It looks like an LMS; the learning can be anything from videos, to simulations, to reading, to practice exercises, to discussion boards… any format that you would normally have in a course. Students can participate in any number of ways. They are assessed, and the assessment can be anything: a simulation, a practice exercise, a performance. It evaluates them, and based on the evaluation it reassesses their learning path and re-sequences them, and there can be any number of ways for them to go through the learning path. So this platform is always thinking about them; it is constantly reassigning them to learning paths that go all the way through. It’s kind of scary to look at what’s happening, and it reports back: “you’ve mastered these, you have not mastered these.” I wish I could tell you it works as smoothly as I just described it…
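Chuck describes the engine only at a high level, since the details are proprietary, but a Bayesian mastery update of the general kind he alludes to can be sketched in a few lines. This is a minimal illustration in the style of Bayesian Knowledge Tracing, not Realizeit’s actual model; the parameter values and the routing thresholds are invented for the example.

```python
# A minimal sketch of a Bayesian mastery update, in the spirit of the
# "Bayesian-based" decision engine described above. Parameter values
# and routing thresholds are illustrative assumptions, not Realizeit's.

def update_mastery(p_mastery, correct,
                   p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Posterior probability the student has mastered the skill,
    given one observed response (Bayesian Knowledge Tracing style)."""
    if correct:
        evidence = p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess
        posterior = p_mastery * (1 - p_slip) / evidence
    else:
        evidence = p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess)
        posterior = p_mastery * p_slip / evidence
    # Allow for learning between practice opportunities.
    return posterior + (1 - posterior) * p_learn

def route(p_mastery):
    """Illustrative routing decision based on estimated mastery."""
    if p_mastery >= 0.95:
        return "advance to next objective"
    elif p_mastery >= 0.6:
        return "more practice on this objective"
    return "remediate prerequisite material"

# Start from a prior set by the pre-assessment, then fold in responses.
p = 0.3
for correct in [True, True, False, True, True]:
    p = update_mastery(p, correct)
decision = route(p)
```

The cycle Chuck describes (assess, update the estimate, re-sequence, assess again) is exactly this loop run continuously over every objective in the course.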

John: [LAUGHTER] But it’s getting there.

Chuck: It’s getting there. We tend to overestimate the short-term impact of these things and underestimate the long-term impact… that’s something we do with all technologies. If you look at the latest MIT list of ten breakthrough technologies, one of them is Babel-Fish earbuds that instantly translate languages for you. Well, they can’t quite do that yet; they will eventually get very good at it. But that’s what it looks like: it looks like a regular LMS, and when you experience it, you have this sort of seamless feeling that you’re moving through it with no real impediments, at your own pace. What it also does is learn how you best learn. If you learn best by reading, it’ll steer you toward reading. If you’re best with simulations, it’ll steer you to simulations. But faculty have to prepare all of those things, so “therein lies the rub,” as a friend of ours from the round theatres in England would have said.

John: I looked at the Acrobatiq platform a couple of years ago and met with some of their representatives. I haven’t looked at Realizeit very much yet, but these platforms are really good at giving students lots of retrieval practice, assessing where they have weaknesses, and doing that type of adaptation. But one of the things we know is that interleaved practice, as Rebecca mentioned, is really helpful in increasing recall; going back and testing students on things they learned earlier in the course is really helpful in encouraging deeper long-term learning. I asked about Acrobatiq’s ability to do that and there wasn’t any, and that’s why we were a little bit curious about whether Realizeit had that ability to go back and bring in questions from earlier. I know Aleks does that a little bit, but most of the platforms I’ve looked at so far haven’t.

Chuck: Well, with Realizeit, students have that option… I guess if we could make “interleave” a verb, they have the option to interleave. I’ll give you an example. Pathophysiology: in the state of Florida, most hospitals now require their RNs to become BSNs, and that’s causing some angst. Nursing is stressful enough, but nurses are given a certain timeframe to achieve their BSNs, so nurses are coming back, some of them unhappily. RNs are getting their BSNs in online and adaptive learning platforms, and Pathophysiology was adaptive. Julie Hinkle, who taught the course, said, “I’ve got 30-year RNs who worked in Cardiology their whole career coming into my Pathophysiology course, taking the Cardio unit. Well, they know more about cardio than I ever will.” So within the adaptive learning platform, they simply go to the Cardio unit and test out. They’re done, okay, because they know everything. But nurses are funny. If they don’t get, quote, “a hundred,” they’re not satisfied. So that’s their interleaving: they will go back and test themselves again and again and again until they get that satisfaction. High pressure, high pressure in that field. It’s not like beginning Psychology… “if I pass, I pass.” And what Julie’s done is create incredible adaptive learning measurement devices. She’ll give them a series of bloodwork and blood gas values that they all have to look at. It’s hard, and they all get different values, and they all have to assess the patient and the protocol for the patient based on those values… and then she’s got them in discussion boards, and if the values don’t make sense, the nurses, because of who they are, will go back until they get it. They have the option to do that. In this case, the interleaving you talked about is student-driven; you don’t have to force them. They have the option to do it… some will, some won’t.
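The interleaving Chuck describes here is student-driven; the platform-driven version John asked about, where each new quiz automatically mixes in items from earlier units, can be sketched simply. This is a hypothetical illustration rather than a feature of any particular platform; the 30% mix ratio and the item-bank layout are assumptions.

```python
import random

# A sketch of platform-driven interleaved retrieval practice: each quiz
# mixes a fraction of items drawn from earlier units back in with the
# current unit's items. The 30% mix ratio is an illustrative assumption.

def build_quiz(item_bank, current_unit, n_items=10, mix_ratio=0.3):
    """item_bank maps unit number -> list of question ids."""
    earlier = [q for unit, qs in item_bank.items()
               if unit < current_unit for q in qs]
    n_old = min(len(earlier), int(n_items * mix_ratio))
    quiz = random.sample(item_bank[current_unit], n_items - n_old)
    quiz += random.sample(earlier, n_old)
    random.shuffle(quiz)  # mix old and new items rather than blocking them
    return quiz

# Three units of ten questions each: a unit-3 quiz pulls 7 current
# items plus 3 review items drawn from units 1 and 2.
bank = {u: [f"u{u}q{i}" for i in range(10)] for u in (1, 2, 3)}
quiz = build_quiz(bank, current_unit=3)
```

Because the review fraction is drawn from the whole pool of earlier units, material keeps resurfacing throughout the course, which is the retrieval-practice effect John is after.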

Rebecca: Have you been using any open educational resources as part of the content for the adaptive courses, or are these all closed-system materials that faculty have created?

Chuck: With our faculty, when they begin looking at it, you have this sort of adoption curve: we got the early adopters, who will do anything. When we look at our online courses… we’re very good at online teaching, we are very, very good at it. We probably have 2,000 courses in our vault, and faculty will ask me, “Will I be any good teaching online?” Well, my response is, “Are you any good face-to-face? Because if you’re not, you’re not going to be very good online.” We range from faculty who do things that are very text-based to faculty members who will not stop putting bells and whistles in their courses… and we say to them, “Stop it, you’re being annoying. Stop with the gizmos in the course.” Somewhere there’s a balance. Obviously, this kind of course preparation is a lot of work. I don’t know about you at Oswego… I assume everybody teaches one course and has a lot of free time. [LAUGHTER] We’re not that way at UCF. Our courses are very large, with heavy teaching loads, and right away faculty say, “This is too daunting.” So what Realizeit is doing, and what a lot of platforms like it are doing, is looking at OER (Open Educational Resources) and asking, “What of this can we ingest and make available for a course? Can we take some of the right stuff, put it in there, and load it up so it’s ready to go?” Then faculty can adjust it as they go along. Obviously those courses are good, and if we can do that for faculty, it’s a great service, because building a course from the ground up is daunting no matter what. Unless your institution provides faculty support, it’s probably too daunting for an individual faculty member to do by themselves, in my estimation.

John: Does Realizeit provide any packaged materials to help get people started… for example, including OER?

Chuck: Yeah, they do… they provide a great deal of resources for it. They have sent representatives down when things were not working as well as we’d hoped. Basically, all of these vendors will tell you they have the Veg-O-Matic… then we try the platform and it doesn’t chop vegetables equally well. That’s why the partnership is so critical, and these vendors are not just vendors anymore. They have to be active partners in this educational enterprise. They’re not just selling us stuff; they have to come and help us, and Realizeit is very good about that. And others are, too. I’m not pitching Realizeit by any stretch of the imagination.

John: But it sounds like it’s worked pretty well for you guys.

Chuck: Yeah, it’s worked pretty well for us. Our issues are, “how do we scale it, how do we scale it?”

John: What sort of resistance have you seen from faculty? Is it mostly just a time issue, or are there other issues that faculty are concerned about?

Chuck: I can probably give you a list of them. One: it’s a lot of work. For all faculty, I think one of the questions is… I’m not sure of the culture at Oswego, but at UCF it still remains that getting ahead equals teaching, service, and research, and we know which of those three carries the greatest weight.

John: Yes, that’s an issue that’s discussed all the time.

Chuck: So if I’m going to spend all of my time in preparation for this kind of enterprise, where’s the reward system within the culture of the University for it? And I don’t mean to be crass, but what’s in it for me? Why should I do this? We have to provide some sort of reward. We’ve done a lot: we have awards that reward faculty for this kind of thing, and we’ve tried giving them course releases and all of the help we can. We have a large staff that supports faculty in the online environment, and we have a bank of instructional designers that help faculty members load up their adaptive learning courses. Basically, our philosophy is that if faculty can feel better about their teaching, they’ll follow you anywhere. We all want to be the best teachers we can be, and it feels so good when it goes well and so awful when it goes poorly… and we’ve all had that experience.

John: Oh yeah.

Chuck: I’m very fond of the quote attributed to Augustine: “I thought I understood it until I tried to teach it.” There’s not one of us who hasn’t experienced that at some point in our position. [LAUGHTER] I’ve talked about things that I had no business talking about. I knew them, but I didn’t know how to teach them.

John: But you learn that as you go. [LAUGHTER]

Chuck: Yeah, oh yeah yeah, yeah.

John: In your recent EDUCAUSE paper on adaptive learning, you describe different types of student interaction with a platform. Could you tell us a little bit about that? I remember tortoises and hares and a few other animals in there. [LAUGHTER]

Chuck: Yes tortoise, hares…

Rebecca: Frogs and kangaroos.

Chuck: Frogs and kangaroos. [LAUGHTER] I’ve got to tell you, Realizeit is headquartered in Dublin, Ireland, so one, it’s fun to listen to them talk, and two, they’re very bright. In our research partnership we’ve done this a lot, and you probably know this: if you begin presenting this stuff in tabular form, your audience glazes over. In about two minutes, they stop listening. So the idea was, “How can we portray this data in a way that’s more engaging to audiences?” We worked with Realizeit and Colm Howlin, their Director of Research, who is brilliant. We’re both big fans of Hans Rosling in Sweden, who passed away recently… his approach to showing data in motion…

John: Oh, it’s wonderful, yes.

Chuck: It’s just absolutely wonderful. So we thought, “Can we look at this?” and what we were able to do was portray, in the Hans Rosling Gapminder way, students traversing through an adaptive learning course in Psychology… showing them in motion along two dimensions: what they’re achieving versus the number of activities. That’s available from us in animated form, and looking at students as they traverse the course, we have found that they exhibit certain behavioral styles… we found four originally. The people in Ireland named them… they’re metaphors, and they turned out to be animals. The first one was the hare… and remember what we’re plotting: number of learning activities against achievement level. The hare just goes zip. They start the course and they finish it in a very few days. That’s what they do… they’re finished. That’s what happened in the Psychology course I described… they’re finished, and it sounds like heresy, but they’re done, ’kay? The second one was the tortoise. The tortoise goes step by step by step through the course. They progress through the course in little increments, and they get there, but they get there slowly. Then the frog. The frog did exactly what they would do in a regular course. There were eight modules in the Psychology course, and they completed one a week, week by week by week, the way they would do it in a traditional course. Interestingly enough, when Jeff Cassisi turned the course completely open, there were still a lot of students who went week by week by week, because that’s what they’d been taught. For them, that’s what a course was: you do it in these increments. And then the kangaroo… didn’t do a damn thing until the end of the course and then did it all in three weeks. Just zipped to the end of it, and it’s fun to watch them in the animation.
They’re just dead in the water until the end, and then they go zip! And they’re at the end of the course. So there are four different ways students approach it, and seeing that in motion is very compelling. What it tells you is that, at least in this course, there are various behavioral styles, and you have to be comfortable with them, because they all reached mastery… and they reached mastery in some ways we wouldn’t approve of. We wouldn’t approve of that kangaroo, would we? Doing nothing until the end of the course? That’s not right… but they finished, so we have to learn how to deal with that.
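The four trajectories can be made concrete with a toy classifier over weekly activity counts. This is not UCF’s or Realizeit’s actual analysis; the weekly-count representation and every threshold here are invented for illustration.

```python
# A toy sketch of classifying the four behavioral styles described
# above (hare, tortoise, frog, kangaroo) from a student's weekly counts
# of completed learning activities. All thresholds are illustrative.

def classify(weekly_activity):
    n = len(weekly_activity)
    total = sum(weekly_activity)
    if total == 0:
        return "inactive"  # outside the four styles
    third = max(1, n // 3)
    first_share = sum(weekly_activity[:third]) / total
    last_share = sum(weekly_activity[-third:]) / total
    if first_share >= 0.8:
        return "hare"       # nearly everything right at the start
    if last_share >= 0.8:
        return "kangaroo"   # nothing until the end, then everything
    spread = max(weekly_activity) - min(weekly_activity)
    if spread <= 0.25 * max(weekly_activity):
        return "frog"       # steady module-a-week pace
    return "tortoise"       # small, uneven increments throughout
```

On an eight-week course like the eight-module Psychology example, a front-loaded trajectory such as `[40, 35, 5, 0, 0, 0, 0, 0]` comes out as a hare, a back-loaded one as a kangaroo, and a perfectly even one as a frog.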

Rebecca: Despite the fact that many academics exhibit that exact behavior… [LAUGHTER]

John: That’s why we have deadlines. [LAUGHTER]

Chuck: Yeah, we have deadlines. Nothing is quite as motivating as fear, right? [LAUGHTER] But one of the things I’ll say is there’s a lot of research that’s affecting us. A really compelling book is called Scarcity… I don’t know if you’ve ever seen it… by Mullainathan, and it’s completely compelling to us at UCF for students coming from underrepresented populations. Think of students living close to the poverty line and what they’re dealing with. They’re dealing with so many things in their lives: money, time, danger, single parenthood, finances, family, two part-time jobs, borrowing money for college. Fundamentally, when these students come to us, they’re expending so much cognitive energy just living life. By the time they get to campus, they’ve burned up most of their cognitive bandwidth, and then we put them in these lockstep courses where, if for some reason they miss a class, they’re behind the power curve. We don’t design courses for these students. We firmly believe they have as much intellectual capacity as any other students, but we have not designed our University to accommodate them. Adaptive learning is perfect for these students… it really is… so we’re trying to accommodate these kinds of things. No wonder they drop out. In some cases, dropping out of a course becomes the optimal decision: their superstructure of life is so tenuous that if any one thing fails, the whole house of cards comes tumbling down. So they get behind in a couple of classes and they’re done, and we have to find a way to accommodate this. We’ve got to begin adjusting the way we organize ourselves.

John: What implication does adaptive learning have for the structure of the University?

Chuck: It has a lot. Our students, now that they’re learning this way, are asking really interesting questions like, “Why do we need semesters?” and the only answer we have is “because we have semesters.” [LAUGHTER] We don’t really have a good answer for that. We’ve designed adaptive learning so that if you finish early, you go on to the next course. The other end of it is: what if you need an extra three weeks to complete the material and the semester is over? That’s a nightmare for Financial Aid, and it’s a nightmare for the Registrar, but we’re working on it. It has the potential to turn the University structure on its head, and I’m not sure we’re ready for that at the moment.

Rebecca: When a faculty member creates one of these courses, obviously they’re very involved in that content structure, but what’s their role in facilitating the course?

Chuck: Mm-hmm.

Rebecca: So if the class was to extend three weeks past for a particular student, does that mean then that faculty member is also engaged for that three weeks or what does that look like?

Chuck: Absolutely. In for a penny, in for a pound. If you're involved with this kind of thing, if a student is still working within the platform, and we've made agreements with the Registrar that they would get credit for the semester in which they did this even though they went past all of the due dates, the faculty member is still involved. So it changes the role of the faculty member immensely. Some students (this is gonna come out wrong) don't need the faculty member. I mean, they need the faculty member, of course; I need the faculty member. This by no means abrogates the role of the faculty member, but it certainly does change it. Faculty members have to know when to intervene and when not to intervene; it changes us. It's like teaching online: you can have difficulty adapting to it. When I taught an online course, I did have difficulty adapting to my role, not being the center of attention all of the time. It was hard. I have a big ego and it was difficult for me, but I got over it.

Rebecca: Are the assessments related to the adaptive learning automated, or are they something the faculty member is manually grading using rubrics or things like that?

Chuck: Both, they are both. It's up to the faculty member. We have a department that is only now getting over the notion of letting the assessments run within the adaptive learning platform and believing those assessments: that it is competency-based and the students exhibited competency. That's a hard sell for departments; say, a math department that says, fine, but they still have to take the test. You know what I'm saying? It's a slow-moving thing. What we have done now is an A/B study demonstrating that students who were assessed within the adaptive platform do just as well: they took the departmental exam and did equally well as students who were in other courses. We have to demonstrate that. It's a hard sell. It's very scary. It is a very scary phenomenon. Yes, you can design all kinds of platforms and you can build all kinds of intermediate testing devices. You can give tests within the platform, and if students do well on the test, it can be (automated is the wrong word) a Bayesian decision to advance them, or to cycle them back to where they need to exhibit their skills, and you can retest them. So it's this continual cycling kind of thing.
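The testing-and-cycling loop Chuck describes can be sketched in a few lines. This is a toy illustration only: the module name, the four-out-of-five pass rule, and the response format are invented for the example, and a real platform would use a Bayesian mastery model rather than a fixed threshold.

```python
# Toy mastery-decision loop: if a learner answers enough items correctly,
# they advance; otherwise they are cycled back for another attempt, and
# eventually sent to prerequisite material.

def assess(responses, threshold=4):
    """Heuristic competency check: pass if >= threshold items are correct."""
    return sum(responses) >= threshold

def run_module(module, attempts):
    """Cycle the learner until an attempt demonstrates competency.

    `attempts` is a list of response lists (1 = correct, 0 = incorrect),
    one list per testing cycle.
    """
    for cycle, responses in enumerate(attempts, start=1):
        if assess(responses):
            return f"{module}: competency shown on cycle {cycle}"
    return f"{module}: recycle to prerequisite material"

# First attempt misses the threshold, second passes:
print(run_module("Limits", [[1, 0, 1, 0, 1], [1, 1, 1, 0, 1]]))
```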

Rebecca: I have a really easy time imagining how this would work for a knowledge-gaining sort of class, maybe a lower-level course, but a harder time envisioning what it might be like in an upper-division class or one that might be more project-based or application-based. What's the experience been on your campus with introductory-level classes versus upper-level classes giving adaptive learning a chance?

Chuck: That's a good question. The question posed is: how would you teach Macbeth in adaptive learning? How would you do that? How would you teach clinical psychology in an adaptive learning course? The answer is that adaptive learning is probably not equally well suited for all disciplines or all levels. Adaptive learning is really very well suited for hierarchically structured courses, where achieving something at one level depends on achieving something at a slightly lower level, like the sequencing in math, or chemistry, or physics, or computer science. You can do it, and we've done it in beginning psychology, but is it really necessary? When you have eight modules in beginning psychology, there is a natural organic order, but they don't necessarily depend on each other. Now, Jung came after Freud, but is one really dependent on the other? That's the kind of thing. I think there are some areas where it's much better suited, and your question is well taken. I think we have to do a lot of exploration in terms of where adaptive learning is most suited and fits into our curriculum, and it may not fit equally well across all disciplines. It's a question we've got to do a lot of work on. Hopefully SUNY Oswego will answer most of those questions. [LAUGHTER]

John: Next semester. It’ll take a little while.

Chuck: Yeah, right. Right.

Rebecca: Do any of the classes that you've been highlighting on your campus have writing as a key component? I'm just curious.

Chuck: Yeah. I think probably in psychology there would be some writing involved; they would have to do some reaction papers. So I think I can comfortably answer that question as, yes, there have been. The question is, can you teach creative writing in an adaptive learning course? In some ways it's equally suited to it, because I can imagine you could build a pretty good workshop in a creative writing course. With a lot of work, you can do it.

John: I looked at RealizeIt’s website and it said they create unique formative assessment items based on instructor provided question templates. How does that work?

Chuck: It works very well. I'll tell you one of the things: Ryan Baker wrote a really good paper. The technology is developed really well, but the assessment in general is still kind of heuristic, if you know what I mean by that: we're going to assess your competency by whether you get four or five items right. That's the heuristic part of it. You have to design better assessment devices. What we are doing now is transforming the assessment paradigm in terms of what the assessments look like: are they authentic, are they reflective, and are they contextually relevant? Students respond much better to questions that are related to the disciplines they're going through, and we have to develop that. We're nowhere near that, but we're working on it. The vendor is very good about helping there. You give them a template, say, we want to do this in a simulation platform, and they'll help you work with it. They're very good about doing that, but they can't do it all. It is a partnership.

Rebecca: We usually hear about adaptive learning in an online context. Have you had any experience in a hybrid environment or an in-person environment?

Chuck: Oh, absolutely. We've taught some blended adaptive learning courses, and it makes absolute sense. What's a blended course? The first time faculty members think about blended, they think about it incorrectly: what, from my face-to-face course, can I offload to the online environment? That's about the worst way to go about teaching a blended course, right? The thing is to look at what the appropriate kinds of things are for these two different formats. We have this all of the time: we've had blended nursing courses where students do the content material online and come in to class to essentially do problem solving. It is very, very appropriate for this notion of a flipped blended course. We're actually having a faculty member do it in statics, in engineering, which is really exciting. It's the whole notion that some of the courses are flipped and some of the courses are flopped. [LAUGHTER] But we sort of look at that and it makes sense. But again, back to your question: where is this modality most appropriate? Where does it fit? That's the kind of question, and that's really a departmental, disciplinary sort of thing. Every discipline believes they are unique, right? Their pedagogical issues are unique. Yeah, right. [LAUGHTER]

John: Actually, along those lines, we've had a number of reading groups here where faculty from different departments get together, and the faculty who are new to this type of thing expressed surprise at how common their concerns were… to find out that people in other, very different, disciplines faced exactly the same problems and sometimes had some really good solutions.

Chuck: Yeah, we do this thing… I'm very fond of a concept from the sociologist Susan Leigh Star; she called it a boundary object. A boundary object is something like this; we do it all the time at UCF: critical thinking. We're really into active learning at the moment, and active learning is another boundary object. We'll do this, say, with the Faculty Senate: who's in favor of critical thinking? Of course every hand goes up. [LAUGHTER] You wouldn't dare say you're not in favor of critical thinking. But when you get into a large community of practice, nobody agrees on what it is, right?

John: Exactly.

Rebecca: Yeah.

Chuck: Exactly. So, Susan Leigh Star's boundary object is something like that: it holds a community of practice together, but it's very weak. It's not strong enough to be really functional in a large community of practice. But go back to an individual constituency, go back to physics, or rhetoric, or creative writing, or education, and they damn well know what critical thinking is in their discipline, right?

John: Right.

Chuck: They do. They absolutely can do it and they’re very powerful. When you bring it back to the community you’re back into the same dog fight and that’s a very powerful concept. I can name literally dozens of them: active learning, critical thinking, online learning, you go on. Very, very powerful.

Rebecca: I find writing is one that bubbles up. What do we mean by writing? What does that look like?

Chuck: Oh yeah. What does writing look like when you're tweeting? Are you writing?

Rebecca: Yeah, exactly.

Chuck: When you’re blogging, are you writing? How do you workshop writing now? Are workshops necessary? It’s all fascinating to me, but I guess that’s because we’re academics. If it weren’t for boundary objects we wouldn’t have anything to do. [LAUGHTER].

Rebecca: You started talking a little bit earlier about the implications of adaptive learning for a university. Can you expand upon that a little bit? [LAUGHTER]

Chuck: Sure. I mean, the implication is: what does this say for the structure of the university? What does this say for the way we've organized this enterprise called learning? There's another great boundary object: student learning… student learning outcomes, one of my favorites. [LAUGHTER] Go ahead, define that. My friend Anders Norberg, from Sweden, and I have written a paper called "A Time-Based Blended Learning Model," where time becomes a fundamental design structure of a university, and that's very, very different from the way we organize learning at the moment, which is in discrete units called classes, called units, called semesters, called years, and called matriculation time. It has tremendous implications for that kind of structure. We are employed by all of these kinds of things and we're organized by all of them, so sooner or later we're going to have to re-examine all of that if we're going to adopt these things. There are three things I think are associated with good ideas. One is the adjacent possible: what's the next reasonable step? What can we reasonably accomplish next? That's the adjacent possible. Outside of that is the adjacent impossible: you can't do it, you just can't do it. Secondly, you have to have the slow hunch; you have to stay the course. How many times have we said we're gonna try something, it didn't work right away, and then we dumped it? I'm sure nobody at SUNY Oswego has done that, but we certainly have done that at UCF. You've got to stay the course! Darwin said in his autobiography that he had a Eureka moment about natural selection, but if you carefully look at his notes, he had been forming the idea for months and years. It was in his notes; he just didn't know it. There was no Eureka moment. That's staying the course.
If you know this, you've got to stay the course, because there will be bad times; everything's not going to work, and it's just not going to go the way you expect. Again, we over-expect in the short term and under-expect in the long term. Then the third thing is you've got to have this liquid network: you've got to work with the vendors, you've got to work with faculty, you've got to work with administrators. SUNY has to work with UCF, which has to work with CTU, which has to work with Carnegie Mellon, which has to work with Oleum Geo Services. You have to have this liquid network where we can share ideas. We can't do it alone; frankly, you can't do it alone. We've got to work that way, and if we work that way it's going to change the whole way we do business. I guess the question for us is: how much do we want to change? It's way above my pay grade to change a university, and I expect it is for you too, but sooner or later we're gonna have to accomplish it. There are a lot of implications for the University.

Rebecca: There's a lot I'm hearing you talk about that reminds me of agile design practice, which has made a big boom in technology and design: the idea of small sprints, breaking a big problem into smaller problems that you can work towards. And then it's iterative; you keep going back. It's circular, not a straight line. That's what I'm hearing you say: that's the way it needs to be tackled.

Chuck: I think that's a good metaphor; we really should be in an agile scrum and keep iterating until we get it right. That's frankly not been our history, right? Part of our history is, let's declare victory and move on to something else. I mean that in the kindest way, but we have some serious issues, in my judgment. We have tremendous educational inequality in this country; unless we crack that, we've got some issues that we're never going to solve.

John: And you guys are doing quite a bit on all those fronts and I hope we’ll have you back for some future podcasts to talk about some of those things.

Chuck: We've made some amazing breakthroughs in communities that you'd think wouldn't achieve. The talent pool is as deep as anywhere else, but we've got to figure out how to tap it.

Rebecca: I think the other thing I was hearing in what you're talking about is the idea of micro-credentials, which we had a prior podcast on, also surfacing. I can really see adaptive learning moving at the same pace as micro-credentialing, because I think they're directly related to one another in how the system might have to shift.

Chuck: Kahneman wrote that great book, Thinking, Fast and Slow, and pointed out that one of our habits is this: when we attempt to solve a hard problem, like measuring outcomes, and we can't figure out what to measure, we measure something else that we do know how to measure. I'll give you a great example: student learning outcomes. We're not too good at that, so we measure grades, we measure course success. Course success is not learning outcomes; we do the easier thing because we don't know how to do the hard thing. I do that all the time! [LAUGHTER]

John: We all do that… I think.

Chuck: Yeah, I really love Oswego.

John: We enjoy it too, I’ve been here since 1983.

Rebecca: I came back.

John: That’s right! Rebecca was a student as well.

Chuck: Rebecca, we knew you'd come crawling back… [LAUGHTER] …Where does the time go, John? What happened?

John: I know; it feels like I just got here.

Chuck: Remember John Lennon? That song "Beautiful Boy"? Life is what happens when you're busy making other plans.

John: Yes.

Rebecca: All this talk of time leads to the question of, Chuck, what does the future hold?

John: What are you doing next?

Chuck: What are we doing next? We're reinventing the University, that's what we're doing. We have a new president, President-designate Dale Whittaker, who has been our Provost. Dale is a very big thinker, and he's developing ideas like zero probation; a hub for faculty, where the faculty center and all faculty services are located in one place; and students becoming an active part of the instructional process. They're no longer receptacles; they become teachers as well, and we have to do that. They have a lot to teach us; it's a different kind of world. We have to understand better how our students acquire knowledge, and we have to better understand the many generations existing on our campuses. I kind of like to get my news from The Onion.

Rebecca: [LAUGHTER]

John: That’s the best place these days and it’s not that far off.

Chuck: My favorite headline in The Onion is "Eccentric Student Reads Entire Book." We have to begin to accommodate the way students learn and use their learning devices. My graduate students at times will send me off to get a cup of coffee while they fix my technology problem. Technology for them is what Luciano Floridi called living in the infosphere, where information and communication technologies talk to each other and we're no longer in the loop, we're on the loop. I think you can understand that when you look at your Facebook page and what you were looking at on Amazon pops up on your Facebook page; you know these technologies are talking to each other.

Rebecca: I thought they were reading my mind. [LAUGHTER]

Chuck: But the cover of last week's Economist was "Epic Fail," and it was the Facebook F fallen over, lying on its back. The cover this week is "AI-Spy": artificial intelligence spying on you in the workplace. We have lots going on. I think the recent things that have happened over the last weeks with Facebook confront us with the fact that we have some serious examination of our culture and our information to do in the decades to come. I think the future is being defined for us by forces outside of our realm. That's pretty scary stuff for me. Thank you both; you guys are great. I really enjoyed it. Thanks for having me.

John: We’ve enjoyed this tremendously.

Rebecca: Yeah, it’s really interesting to hear what you’re doing.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts, and other materials on teaforteaching.com. Music by Michael Gary Brewer.

[MUSIC]

26. Assessment

Dr. David Eubanks created a bit of a stir in the higher ed assessment community with a November 2017 Intersection article critiquing common higher education assessment practices. This prompted a discussion that moved beyond the assessment community to a broader audience as a result of articles in The New York Times, The Chronicle of Higher Education, and Inside Higher Ed. In today’s podcast, Dr. Eubanks joins us to discuss how assessment can help improve student learning and how to be more efficient and productive in our assessment activities.

Dr. Eubanks is the Assistant Vice President for Assessment and Institutional Effectiveness at Furman University and Board Member of the Association for the Assessment of Learning and Higher Education.

Show Notes

  • Association for the Assessment of Learning in Higher Education (AALHE)
  • Eubanks, David (2017). “A Guide for the Perplexed.” Intersection. (Fall) pp. 14-13.
  • Eubanks, David (2009). “Authentic Assessment” in Schreiner, C. S. (Ed.). (2009). Handbook of research on assessment technologies, methods, and applications in higher education. IGI Global.
  • Eubanks, David (2008). “Assessing the General Education Elephant.” Assessment Update. (July/August)
  • Eubanks, David (2007). “An Overview of General Education and Coker College.” in Bresciani, M. J. (2007). Assessing student learning in general education: Good practice case studies (Vol. 105). Jossey-Bass.
  • Eubanks, David (2012). “Some Uncertainties Exist.” in Maki, P. (Ed.). (2012). Coming to terms with student outcomes assessment: Faculty and administrators’ journeys to integrating assessment in their work and institutional culture. Stylus Publishing, LLC.
  • Gilbert, Erik (2018). “An Insider’s Take on Assessment.” The Chronicle of Higher Education. January 12.
  • Email address for David Eubanks: david.eubanks@furman.edu

Transcript

Rebecca: When faculty hear the word “assessment,” do they: (a) cheer, (b) volunteer, (c) cry, or (d) run away?

In this episode, we’ll review the range of assessment activities from busy work to valuable research.

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

Rebecca: Today’s guest is David Eubanks, the Assistant Vice President for Assessment and Institutional Effectiveness at Furman and Board Member of the Association for the Assessment of Learning and Higher Education. Welcome, David.

John: Welcome.

David: Thank you. It’s great to be here. Thanks for inviting me.

John: Today’s teas are… Are you drinking tea?

David: No, I’ve been drinking coffee all day.

John: Ok, that’s viable.

Rebecca: We’ll go with that. We’ve stopped fighting it at this point.

David: Was I supposed to?

John: Well, it’s the name of the podcast…

David: Oh, oh, of course! No, I’m sorry. I’ve been drinking coffee all day… did not do my homework.

Rebecca: I’m having a mix of Jasmine green tea and black tea.

John: I’m drinking blackberry green tea.

David: I do have some spearmint tea waiting for me at home if that counts.

John: Okay. That works.

Rebecca: That sounds good. It’s a good way to end the day.

John: How did you get interested in and involved with assessment?

David: I wasn’t interested, I wanted nothing to do with it. So I was in the Math department at Coker College… started in 1991… and then the accreditation cycle rolls around every 10 years. So, I got involved in sort of the department level version of it, and I remember being read the rules of assessment as they existed then… and we wrote up these plans…. and I could sort of get the idea… but I really didn’t want much to do with it. This is probably my own character flaw. I’m not advocating this, I’m just saying this is the way it was. So I wrote this really nice report, and the last line of the report was something like: “it’s not clear who’s going to do all this work.” [LAUGHS] Because it sure wasn’t gonna be me… at least that was my attitude. But as the time went on …

Rebecca: I think that’s an attitude that many people share.

David: Right, yeah. As time went on, and I began to imbibe from the atmosphere of the faculty and began to complain about things, I got more involved in the data work of the university, because some of the things I wanted to complain about had to do with numbers, like financial aid awards and stuff like that. So I ended up getting into institutional research, which was kind of a natural match for my training in math… and I found that work really interesting… gathering numbers and trying to prognosticate about the future. But the thing is, at a small college, institutional research is strongly associated with assessment, just because of the way things work… and so the next time accreditation rolled around, guess who got put in charge of accreditation and assessment. [LAUGHS] So, I remember taking the manual home with all these policies that we were supposed to be adhering to… and spreading everything out and taking notes and reading through this stuff and becoming more and more horrified. If it was a cartoon, my hair would have been standing up… and writing to the President saying: “You know… we’re not doing a lot of this… or if we are, I don’t know about it.” So that was sort of my introduction to assessment. And it was really at that point that I had to fulfill some responsibility to the administration for the whole college, making sure we were trying to follow the rules. So it evolved from being faculty and not wanting anything to do with it, to turning to the dark side and being an administrator and suddenly having to convince other faculty that they really needed to get things done. So that’s sort of the origin myth.

Rebecca: So, sort of a panic attack followed by…. [LAUGHTER]

David: Well yeah… multiple panic attacks. [LAUGHTER]

Rebecca: Yeah.

David: And then, over the years, as I got more involved with the assessment community, I started going to conferences and doing presentations and writing papers, and eventually I got on the board of the AALHE, which is the national professional association for people who work in assessment… and started up a quarterly publication for them, which is still going… and so I think I have a pretty good network now within the assessment world… and have a reasonably good understanding of what goes on nationwide, but a particularly good understanding in the South, because I also participate in accreditation reviews and so forth.

Rebecca: So, like you, I think many other faculty cringe when they hear “assessment” when it’s introduced to them. Why do you think assessment has such a bad rep?

David: Yeah, that’s the thing I’d like to talk about most. Part of the problem when we talk about it, and I think you’ll see this when you look at the articles in The Chronicle, The New York Times, and Inside Higher Ed, is that it means different things, and people can very easily start talking across each other rather than to each other. In the big picture, imagine the Venn diagram from high school math class with three circles. One circle is the teaching and learning stuff that individual faculty members get interested in at the course level, or maybe at a short course-sequence level… their cluster of stuff. Another circle is the curriculum level, where we want to make sure that the curriculum makes sense and sort of adds up to something… that the courses, if they’re calculus one, two, three, actually act like a cohesive set. And then there’s the third circle in the diagram, and that’s where the problem is, I think. In the best world, we can do research… real educational research on how students develop over time and how we affect them with teaching. But if we dilute that too much… if we back off of actual research standards and water it down to the point where it’s just very, very casual data collection… it’s still okay if we treat it like that… but the rub becomes, because of accreditation expectations for many of us, that we collect this really informal data and then have to treat it as if it’s really meaningful, rather than using our innate intuition and experience as teachers who have experience with students. So I think the particular… the rock in the shoe, if you will… is the sort of forced and artificial piece of assessment that is most associated with the accreditation exercises.

John: Why does it break down that way? Why do we end up getting such informal data?

David: Well, educational research is hard, for one thing. It’s a big fuzzy blob. Think about what happens in order for a student to become a senior and write that senior thesis… just imagine that scenario for a minute… and we’re going to try to imagine that the quality of that senior thesis tells us something about the program the student’s in. Well, the student had a different sequence of courses than other students, and in many cases… this wouldn’t apply to a highly structured program… for many of us, the students could have taken any number of courses… could have maybe double majored in something else… and even within the course selections could have had different professors, at different times of day, in different combinations… and so forth. So it’s very unstandardized… and the student then brings his or her own characteristics… interests and time limitations, for example… maybe the student’s got a job, or maybe the student’s not a native English speaker. There are all sorts of traits of the individual student. Anyway, the point is that none of this is standardized. So when we just look at that final paper the student’s written, there are so many factors involved that we can’t really say, especially with very small amounts of data, what actually caused what. And my argument is that the professors in that discipline, if they put their heads together and talk about what’s the product we’re getting out and what are its likely limitations or strengths, are in a really good position to make some informed subjective judgments that are probably much higher quality than some of the forced, limited assessments… which are usually forced onto a numerical scale, like rubric ratings or maybe test scores. So I’m giving you kind of a long-winded answer, but I think the ambition of the assessment program is fine. It’s just that the execution within many, many programs doesn’t allow that philosophy to actually be realized.

Rebecca: If our accreditation requirements require us to do certain kinds of assessment and we do the fluffy version, what’s the solution for more rigorous assessment? Or is it that we treat fluffy data as fluffy data and do what we can with it?

David: Right. Well, as always, it’s easier, I think, to point out a problem than it is to solve it. But I do have some ideas… some thoughts about what we could do that would give us better results than what we’re getting now. One of those is: if we’re going to do research, let’s do research. Let’s make sure that we have large enough samples… that we understand the variables… and really make a good effort to try to make this thing work as research. And even when we do that, probably the majority of the time it’s going to fail somehow or another, because it’s difficult. But at least we’ll learn stuff that way.

Rebecca: Right.

David: Another way to think of it is if I’ve got a hundred projects with ten students in each one and we’re trying to learn something in these hundred projects, that’s not the same thing as one project with a thousand students in it, right?

Rebecca: Right.

David: It’s why we don’t all try to invent our own pharmaceuticals in our backyards; we let the pharmaceutical companies do that. It’s the same kind of principle. And so we can learn from people… maybe institutions who have the resources and the numbers… we could learn things about how students learn in the curriculum that are generalizable. So that’s one idea: if we’re going to do research, let’s actually do it. Let’s not pretend that something that isn’t research actually is. Another is a real oddity. Somehow, way back when, somebody decided that grades don’t measure learning, and this has become a dogmatic item of belief within much of the assessment community, in my experience. It’s not a hundred percent true, but at least in action… for example, I think there’s some standard advice you would get if you were preparing your accreditation report: “Oh, don’t use grades as the assessment data, because you’ll just be marked down for that.” But in fact, we can learn an awful lot from just using the grades that we automatically generate. We can learn a lot about who completes courses and when they complete them. A real example that’s in that “Perplexed” paper: looking at the data, it became obvious that waiting to study a foreign language is a bad idea. The students who don’t take the foreign language requirement the first year they arrive at Furman look, from the data, like they’re disadvantaged. They get lower scores if they wait even a year. And this is exacerbated, I believe, by the weaker students being the ones who wait. Those two things in combination are sort of the kiss of death. And this has really nothing to do with how the course is being taught; it’s really an advising process problem… and if we misconstrue it as a teaching problem, we could actually do harm, right? If we took two weeks to do remedial Spanish or whatever when we don’t really need to be doing that, we’re sort of going backwards.
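The kind of grade-data comparison David describes can be sketched in a few lines. The records below are invented purely for illustration (not Furman's actual data): each pair is the year a student first took the language requirement and their final score.

```python
# Hypothetical records: (year the student first took the language, final score).
records = [(1, 88), (1, 91), (1, 85), (2, 79), (2, 74), (2, 81)]

def mean_by_group(rows):
    """Average the second element of each pair, grouped by the first."""
    totals = {}
    for group, score in rows:
        totals.setdefault(group, []).append(score)
    return {g: sum(v) / len(v) for g, v in totals.items()}

means = mean_by_group(records)
print(means)  # in this made-up sample, first-year takers average higher
```

Even a comparison this crude surfaces the advising-versus-teaching question: the gap tracks *when* students took the course, not who taught it.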

Rebecca: We are blaming faculty members for things that aren’t necessarily a faculty member’s fault.

David: Exactly, right. What you just said is a huge problem, because much of the assessment… these little pots of data that are then analyzed are very often analyzed in a very superficial way… where, for example, they don’t take into account the expressed academic ability of the students who are in that class, or whatever it is you’re measuring. So if one year you just happen to have students who were C students in high school, instead of A students in high school, you’re going to notice a big dip in all the assessment ratings just because of that. It has nothing to do with teaching, necessarily. And at the very least, we should be taking that into account, because it explains a huge amount of the variance that we’re going to get in the assessment ratings. Better students get better assessment ratings, it’s not a mystery.

John: So, should there be more controls for student quality in studies over time of student performance? Or should there be some value-added type approaches used for assessment, where you give students pre-tests and then measure the post-test scores later? Would that help?

David: Right, so I think there are two things going on that are really nice in combination. One is the kind of information we get from grades, which mostly tells us how hard the student worked, how well they were prepared, how intelligent they are… however you want to describe it. It’s kind of persistent. At my university, the first-year grade average of students correlates with their subsequent years’ grade average at 0.79. So it’s a pretty persistent trait. But one disadvantage is that, let’s say Tatiana comes in as an A+ student as a freshman, she’s probably going to be an A+ student as a senior. So we don’t see any growth, right? If we’re trying to understand how students develop, the grades aren’t going to tell us that.

John: Right.

David: So we need some other kind of information that tells us about development. And I’ve got some thoughts on that and some data on that if you want to talk about it, but it’s a more specialized conversation maybe than you want to have here.

John: Well, if you can give us an overview of that argument.

Rebecca: That sounds really interesting, and I’d like to hear.

David: Okay. Well, the basic idea is a “wisdom of the crowds” approach, in that when things are really simple… if we want to know if the nursing student can take a blood pressure reading… then (I assume, I’m not an expert on this, but I assume) that’s fairly cut and dried, and we could have the student do it in front of us, watch them, check the box and say, “Yeah, Sally can do that.” But for many of the things we care about, like textual analysis or quantitative literacy or something, it’s much more complicated and very difficult to reduce to a set of checkboxes and rubrics. So, my argument is that for these more complex skills and things we care about, the subjective judgment of the faculty is a really valuable piece of information. So what I do is ask the faculty at the end of the semester, for something like student writing (because there’s a lot of writing across the curriculum): “How well is your student writing?” and I ask them to respond on a scale that’s developmental. At the bottom of the scale is “not really doing college-level work yet.” That’s the lowest rating… the student’s not writing at a college level yet. We hope not to see any of that. And then at the upper end of the scale is “student’s ready to graduate.” “I’m the professor. According to my internal metric of what a college student ought to be able to do, this student has achieved that.” The professors in practice are kind of stingy with that rating… but what it does is create another data set that does show growth over time. In fact, I had a faculty meeting yesterday… I showed them the growth over time in the average ratings on that writing effectiveness scale over four years. If I break it up by the students’ entering high school grades, those are three parallel lines, stacked from high grades to medium to low.
So those two pieces together, grade-earning ability and professional subjective judgment after a semester of observation, seem to be a pretty powerful combination. I can send you the paper on that if you’re interested.

John: Yes.

Rebecca: Yeah, that will be good. Do you do anything to kind of norm how faculty are interpreting that scale?

John: Inter-rater reliability.

David: Right, exactly. That’s a really good question, and reliability is one of the first things I look at… and that question by itself turns out to be really interesting. When I read research papers, it seems like a lot of people think of reliability as this checkbox they have to get through in order to talk about the stuff they really want to talk about… because if it’s not reliable then they don’t have anything to talk about… and I think that’s unfortunate, because just the question of “what’s reliable and what’s not” generates lots of interesting questions by itself. So, I can send you some stuff on this too, if you like. But, for example, I got this wine rating data set where judges blind taste flights of wine and then have to rate each on a 1 to 4 scale. A guy published a paper on it and I asked for his data, and I was able to replicate his findings, which were that what the wine tasters most agreed on was when wine tastes bad. If it’s yucky, we all know it’s yucky. It’s at the upper end, when it starts to become an aesthetic judgment, that we have trouble agreeing. The reason this is interesting is because usually reliability is just one number: you ask how reliable the judges’ ratings are and you get .5. That’s it. That’s all the information you get, it’s .5. So what this does is break it down into more detail. So when I do that with the writing ratings, what I find is that our faculty, at this moment in time, are agreeing more about “what’s ready to graduate…” and not so much about that crucial distinction between not doing college-level writing and intro college-level writing.
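The level-by-level breakdown David describes can be sketched in a few lines. This is a toy illustration with invented ratings (not the wine data or the writing ratings from the episode): two raters score eight essays on a 1-4 developmental scale, and we compute agreement separately around each scale level instead of reporting one overall number.

```python
# Toy illustration of per-level rater agreement (invented data).
# Two raters score eight essays on a 1-4 developmental scale.
rater_a = [1, 1, 2, 2, 3, 3, 4, 4]
rater_b = [1, 1, 3, 2, 2, 4, 4, 4]

# Overall percent agreement: the single number a typical reliability
# report would stop at.
overall = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"overall agreement: {overall:.2f}")

# Per-level agreement: for each scale level, look at the essays where
# either rater used that level and ask how often the two raters matched.
for level in sorted(set(rater_a) | set(rater_b)):
    pairs = [(a, b) for a, b in zip(rater_a, rater_b) if level in (a, b)]
    agree = sum(a == b for a, b in pairs) / len(pairs)
    print(f"level {level}: agreement {agree:.2f} over {len(pairs)} essays")
```

With this invented data the raters agree perfectly at the bottom of the scale and disagree more in the middle, which is the kind of pattern a single reliability coefficient would hide.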

Rebecca: That’s really fascinating. You would almost think it’d be the opposite.

David: I was astounded by this. Yes. And so I got some faculty members together and asked some other faculty members to contribute writing samples, some they thought were good and some bad, so that I’d have a clean set to test this with while watching them do it.

Rebecca: Right.

David: So yeah, we got in the room and we talked about this, and what I discovered was not at all what I expected. I expected that students would get marked down on the writing if they had lots of grammar and spelling errors and stuff like that. But we didn’t have any papers like that… even the ones that were submitted as the bad papers didn’t have a lot of grammatical errors. So I think that the standard for what professors expect of entry-level writers is really high. And because it’s high, we’re not necessarily agreeing on where those lines are… and that’s where the conversation needs to be, for the students’ sake, right? It’s never going to be completely uniform, but just knowing that this disagreement exists is really advantageous, because now we can have more conversations about it.

Rebecca: Yeah, it seems like a great way to involve a teaching and learning center… to have conversations with faculty about what is good writing… what students should come in with… and what those expectations are… so that they start to generate a consensus. The assessment tool creates the opportunity for developing that consensus.

David: Yes, exactly, and I think the best use for assessment is when it can generate really substantive conversations among the faculty who are doing the work of giving the assignments and giving the grades and talking to students.

Rebecca: So, how do we get the rest of the accreditation crowd to be on board with this idea?

David: That’s a really interesting question. I’ve spent some time thinking about that. I think it’s possible. I’m optimistic that we can get some movement in that direction. I don’t think a lot of people are really happy with the current system, because there are so many citations for non-compliance that it’s a big headache for everybody. There are these standards saying every academic program is supposed to set goals… assess whether or not those are being achieved… and then make improvements based on the data you get back. That all seems very reasonable, except that when you get into it with this really reductive, positivist approach, it implies that the data is really meaningful when in many cases it’s not, so you get stuck. And that’s where the frustration is. So I think one approach is if we can get people to reconsider the value of grades, first of all. And if you can imagine the architecture we’ve set up, it’s ridiculous. So imagine these two parallel lines: on the top we’ve got grades, and then there’s an arrow that leads into course completion… because you have to get at least a D usually… and then another arrow that leads into retention (because if you fail out of enough classes you can’t come back or you get discouraged), and that leads to graduation, which leads to outcomes after graduation — like grad school or a career or something. So, that’s one whole line, and that’s been there for a long time. Then under that, what we’ve done is construct this parallel grading system with the assessment stuff that explicitly disavows any association with any of the stuff on the first line. That seems crazy. What we should have done to begin with is say, “oh, we want to make assessment about understanding how we can assign better grades and give better feedback to students, so they’ll be more successful, so they’ll graduate and have outcomes,” right? That all makes sense.
So I think the argument there is to turn the kind of work we’re doing now into a more productive form that feeds into the natural epistemology of the institution, rather than trying to create this parallel system, which doesn’t really work very well in a lot of cases.

Rebecca: It sounds to me like what you’re describing is… right now a lot of assessment is decentralized into individual departments… but I think what you’re advocating for is that it becomes a little more centralized, so that you can start looking at these big-picture issues rather than these minuscule little things that you don’t have enough of a data set to study. Is that true?

David: Absolutely, yes, absolutely. Some things we just can’t know without more data, partly because the data that we do get is going to be so noisy that it takes a lot of samples to average out the noise. So yes, in fact, that’s what I try to do here… generate reports based on the data that I have that are going to be useful for the whole university, as well as reports that are individualized to particular programs.

Rebecca: Do you work with individual faculty members on the scholarship of teaching and learning? Maybe there’s something in particular that they’re interested in studying and, given your role in institutional research and assessment, do you help them develop studies and collect the data they would need to find those answers?

David: Yes, I do, when they request it or I discover it. It’s not something I have an easy way to inventory, because there’s a lot of it going on that I don’t know about.

Rebecca: Right.

David: I’d say more of my work is really at the department level, and this part of assessment is really easy. If you’re in an academic department, so much of the time the faculty spend together gets sucked up with stuff like hiring people, scheduling courses, setting the budget for next year and figuring out how to spend it, selecting your award students… all that can easily consume all the time of all the faculty meetings. So, really just carving out a couple of hours a semester, or even a year, to talk about what it is we’re all trying to achieve, and what we know about it… however imperfect that information is… can pay big dividends. I think a lot of times that’s not how assessment is seen. It’s seen as, “oh, it’s Joe’s job this year to go take those papers and regrade them with a rubric, and then stare at it long enough until he has an epiphany about how to change the syllabus.” That’s a bit of a caricature, but there is a lot of that that goes on.

Rebecca: I think it’s my job this year to… [LAUGHS]

David: Oh, really?

John: In the Art department, yeah. [LAUGHS]

Rebecca: I’m liking what you’re saying because there’s a lot of things that I’m hearing you say that would be so much more productive than some of the things that we’re doing, but I’m not sure how to implement them in a situation that doesn’t necessarily structurally buy into the same philosophy.

John: And I think faculty tend to see assessment as something imposed on them that they have to do, and they don’t have a lot of incentives to improve the process of data collection or data analysis, to close the loop, and so forth. But perhaps if this were more closely integrated into the coursework and into the program, so it wasn’t seen as (as you mentioned) this parallel track, it might be much more productive.

David: Right, and one thing I think we could do is ask for reports on grades. Grades, completions… there are all sorts of interesting things latent in grades and also in course registration. For example, I created these reports… imagine a graph with 100, 200, 300, 400 along the bottom axis… those are the course levels. I wanted to find out when students are taking these courses. What you’d expect is that the freshmen are taking 100-level courses and the sophomores are taking 200-level on average, and so forth, right? But when I created these reports for each major program, I discovered some oddities… there were cases where 400-level courses were being taken by students who were nowhere near seniors. So I followed up and asked the faculty member what was going on, and it turned out to just be a sort of weird registration situation that doesn’t normally happen, but there were students in that class who probably shouldn’t have been in there. And she said, “Thanks for looking into this, because I’m not sure what to do.” So that sort of thing could be done routinely with the computing power we have now. I think there’s a lot you could ask for that would be meaningful without having to do any extra work, if somebody in the IR or assessment offices is willing to do that.
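A report like the one David describes (average class year by course level, built from routine registration data) takes only a few lines. The records and the flagging rule below are invented for illustration; a real report would pull from the registrar’s database.

```python
# Sketch of a course-level vs. class-year report from registration data.
# The records and the flagging threshold are hypothetical.
from collections import defaultdict

# (course_level, student_class_year) pairs from registration records
registrations = [
    (100, 1), (100, 1), (200, 2), (200, 1), (300, 3),
    (400, 4), (400, 4), (400, 1),  # note: a freshman in a 400-level course
]

by_level = defaultdict(list)
for level, year in registrations:
    by_level[level].append(year)

for level in sorted(by_level):
    years = by_level[level]
    avg = sum(years) / len(years)
    # Crude flag: anyone at least two class years below the nominal level.
    early = min(years) <= level // 100 - 2
    flag = " <- unusually early registrants" if early else ""
    print(f"{level}-level courses: avg class year {avg:.1f}{flag}")
```

With these invented records, only the 400-level row gets flagged, surfacing the kind of registration oddity David followed up on without anyone doing extra manual work.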

Rebecca: That’s a good suggestion.

David: And so, in the big picture, how do we actually change the accreditors’ minds? It’s not so much really the accreditors; the accreditors do us a great service, I think, by creating this peer-review system. In my experience it works pretty well. The issue, I think, within the assessment community is that there are a lot of misunderstandings about how this kind of data, these little small pools of data, can be used and what they’re good for. And so what I’ve seen is a lot of attention to the language around assessment during an accreditation review: are the goals clearly stated… it’s almost like “did you use the right verb tense,” though I’ve never seen that literally. [LAUGHTER] No, there’s pages of words: are there rubrics? do the rubrics look right? …and all this stuff, and then there’s a few numbers, and then there’s supposed to be some grand conclusion to that. It’s not all like that, but there’s an awful lot of it like this. So if you’re a faculty member stuck in the middle of it, you’re probably the one grading the papers with a rubric that you already graded once. And you tally up those things and then you’re supposed to figure out something to do with those numbers. So, this culture persists because the reviewers have that mindset that all these boxes have to be checked off. There’s a box for everything except data quality. [LAUGHS] No, literally… if there were a box for data quality, everything would fall apart immediately. So we have to change that culture. We have to change the reviewer culture, and I think one step in doing that is to create a professional organization, or use one that exists. Accounting and librarianship have professional organizations that set their standards, right? We don’t have anything like that in assessment. We have professional organizations, but they don’t set the standards. The accreditors have grown (accidentally, I think) into the role of being like a professional organization for assessment.
They’re not really very well suited for that. And so, if we had a professional organization setting standards for review that acknowledged that the central limit theorem exists, for example, then I think we could have a more rational, self-sustaining, self-governing system, and hopefully get away from making faculty members do unnecessary work.

John: I don’t think any faculty members would object to that.

David: Well, of course not. I mean, everybody’s busy… you want to do your research… you’ve got students knocking on the door… you’ve got to prepare for class. And really, it’s not just that we’re wasting faculty members’ time if the assessment numbers that result aren’t good for anything. It’s also the opportunity cost. What could we have done researching course completion that would have, by now, in the twenty years we’ve been doing this, saved how many thousands of students? There’s a real impact to this, so I think we need to fix it.

John: How have other people in the assessment community reacted to your paper and talks?

David: Yeah, that’s a very interesting question. What has not happened is that nobody’s written me saying, “No, Dave, you’re wrong. Those samples, those numbers we get from rating our students, are actually really high-quality data.” Now, in fact, probably every institution has some great examples where they’re doing really excellent work trying to measure student learning. Like maybe they’re doing a general education study with thousands of students or something. But down at the department level, if you’ve only got ten students, like some of our majors might have, you really can’t do that kind of work. So nobody has even addressed the question in the response articles by saying, “no, you’re wrong because the data is really good”… because the other conclusion, if you believe the data is good, is that the faculty are just not using it, right? Or somebody’s not using it. So I guess the rest of the answer to the question is that the assessment community, I think, naturally feels threatened by this, and undoubtedly there are faculty members making their lives harder in some cases. That’s unfortunate. It wasn’t my intention. The assessment director is caught in the middle, because they are ultimately responsible for what happens when the accreditor comes and reviews them. The peer review team, right? So it’s like a very public job performance evaluation when that happens, and depending on what region you’re in there are different levels of severity, but it can be a very, very unpleasant experience to have one of those reviews done by somebody who’s got a very checkboxy sort of attitude… not really looking at the big picture and what’s possible, but looking instead at the status of idealistic requirements.

Rebecca: So the way to get the culture shift, in part, requires the accreditation process to see a different perspective around assessment… otherwise the culture shift probably won’t really happen.

David: Right, we have to change the reviewers’ mindset, and that’s going to have to involve the accreditors to the extent that they’re training those reviewers. That’s my opinion.

Rebecca: What role, if any, do you see teaching and learning centers having in assessment and in the research around assessment?

David: Well, that’s one of those circles in my Venn diagram, you’ll recall, and I think it’s absolutely critical for the kind of work that has an impact on students, because it’s more focused than, say, program assessment, which is very often trying to assess the whole program… which, as I noted, has many dimensions to it. Whereas a project that’s like a scholarship of teaching and learning project, or just a course-based project, may have a much more limited scope and therefore has a higher chance of seeing a result that seems meaningful. I don’t think our goal in assessment in that case is to try to prove mathematically that something happened, but to reach a level of belief on the part of those involved that “yes, this is probably a good program that we want to keep doing.” So I think the assessment office can help by producing generalizable information, or just background information that would be useful in that context, like “here’s the kind of students we’re recruiting,” “here’s how they perform in the classroom,” or some other characteristic. For example, we have very few women going into economics. Why is that? Is that interesting to you economists? Those kinds of questions can be brought from the bigger data set down to the program level.

Rebecca: You got my wheels turning, for sure.

David: [LAUGHS] Great!

Rebecca: Well, thank you so much for spending some of your afternoon with us, David. I really appreciate the time that you spent and all the great ideas that you’re sharing.

John: Thank you.

David: Well, it was delightful to talk to you both. I really appreciate this invitation, and I’ll send you a couple of things that I mentioned. And if you have any other follow-up questions don’t hesitate to be in touch.

Rebecca: Great. I hope your revolution expands.

David: [LAUGHS] Thank you. I appreciate that. A revolution is not a tea party, right?

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts, and other materials on teaforteaching.com. Music by Michael Gary Brewer.

24. Gender bias in course evaluations

Have you ever received comments in student evaluations that focus on your appearance, your personality, or competence? Do students refer to you as teacher or an inappropriate title, like Mr. or Mrs., rather than professor? For some, this may sound all too familiar. In this episode, Kristina Mitchell, a Political Science Professor from Texas Tech University, joins us to discuss her research exploring gender bias in student course evaluations.

Show Notes

  • Fox, R. L., & Lawless, J. L. (2010). If only they’d ask: Gender, recruitment, and political ambition. The Journal of Politics, 72(2), 310-326.
  • MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291-303.
  • Miller, Michelle (2018). “Forget Mentors — What We Really Need are Fans.” Chronicle of Higher Education. February 22, 2018.
  • Mitchell, Kristina (2018). “Student Evaluations Can’t Be Used to Assess Professors.” Salon. March 19, 2018.
  • Mitchell, Kristina (2017). “It’s a Dangerous Business, Being a Female Professor.” Chronicle of Higher Education. June 15, 2017.
  • Mitchell, Kristina M.W. and Jonathan Martin. “Gender Bias in Student Evaluations.” Forthcoming at PS: Political Science & Politics.

Transcript

Rebecca: Have you ever received comments in student evaluations that focus on your appearance, your personality, or competence? Do students refer to you as teacher or an inappropriate title, like Mr. or Mrs., rather than Professor? For some, this may sound all too familiar. In this episode, we’ll discuss one study that explores bias in course evaluations.

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.
Today our guest is Kristina Mitchell, a faculty member and director of the online education program for the Political Science Department at Texas Tech. In addition to research in international trade and globalization, Kristina has been investigating bias in student evaluations, motherhood and academia, women in leadership and academia, among other teaching and learning subjects. Welcome Kristina.

Kristina: Thank you.

John: Today our teas are?

Kristina: Diet coke. Yes, I’ve got a diet coke today.

[LAUGHTER]

Rebecca: At least you have something to drink. I have Prince of Wales tea.

John: …and I have pineapple ginger green tea.

John: Could you tell us a little bit about your instructional role at Texas Tech?

Kristina: Sure, so when I started at Texas Tech six years ago, I was just a Visiting Assistant Professor teaching a standard 2-2 load… so, two face-to-face courses every semester. But our department was struggling with some issues in making sure that we could address the need for general education courses. In the state of Texas, every student graduating from a public university is required to take two semesters of government (we lovingly call it the “Political Science Professor Full Employment Act”), and what ends up happening at a university like Texas Tech, with almost forty thousand students, is that we have about five thousand students every semester who need to take these courses… and, unless we’re going to teach them in the football stadium, it became really challenging to meet this demand. Students were struggling to even graduate on time, because they weren’t able to get into these courses. So I was brought in, and my role was to oversee an online program in which students take their courses online asynchronously. They log in and complete the coursework on their own time (provided they meet the deadlines), and I’m in a supervisory role. My first semester doing this, I was the instructor of record, I was managing all of the TAs, and I was writing all the content, so I stayed really busy working all by myself with that many students. But now we have a team of people: a co-instructor, two course assistants, and lots of graduate students. So I just kind of sit at the top of the umbrella, if you will, and handle the high-level supervisory issues in these big courses.

John: Is it self-paced?

Kristina: It’s self-paced with deadlines, so the students can complete their work in the middle of the night, or in the daytime or whenever is most convenient for them, provided they meet the deadlines.

Rebecca: So, you’ve been working on some research on bias in faculty evaluations. What prompted this interest?

Kristina: What prompted this was my co-instructor, a couple of years ago, was a PhD student here at Texas Tech University and he was helping instruct these courses and handle some of those five thousand students… and as we were just anecdotally discussing our experiences in interacting with the students, we were just noticing that the kinds of emails he received were different. The kinds of things that students said or asked of him were different. They seemed to be a lot more likely to ask me for exceptions… to ask me to be sympathetic…. to be understanding of the student situation… and he just didn’t really seem to find that to be the case. So of course, as political scientists, our initial thought was: “we could test this.” We could actually look and see if this stands up to some more rigorous empirical evaluation, and so that’s what made us decide to dig into this a little deeper.

John: …and you had a nice sized sample there.

Kristina: We did. Right now we have about 5,000 students this semester. We looked at a set of those courses. We tried to choose course sections that wouldn’t be characteristically different from the others… so not the first one and not the last one, because we thought students who register first might be characteristically different from students who register later. So we chose a pretty good-sized sample out of our 5,000 students.

John: …and what did you find?

Kristina: So, we did our research in two parts. The first thing we looked at was the comments that we received. As I said, our anecdotal evidence really stemmed from the way students interacted with us and the way they talked to us. We wanted to be able to measure and do some content analysis of what the students said about us in their course evaluations. So, we looked at the formal in-class university-sponsored evaluations, where the students are asked to give a comment on their professors… and we looked at this for both our face-to-face courses and the online courses as well. And what we were looking for wasn’t whether they think he’s a good professor or a bad professor, because obviously if we were teaching different courses, there’s not really a way to compare a stats course that I was teaching to a comparative Western Europe course that he was teaching. All we were looking at was: what are the themes? What kinds of things do they talk about when they’re talking about him versus talking about me? What kind of language do they use? We also did the same thing for informal comments and evaluations. So, you have probably heard of the website “Rate My Professors”?

John: Yes.

[LAUGHTER]

Kristina: Yes, everyone’s heard of that website and none of us like it very much… and let me tell you, reading through my “Rate My Professors” comments was probably one of the worst experiences that I’ve had as a faculty member, but it was really enlightening in the sense of seeing what kinds of things they were saying about me… and the way they were talking about me versus the way they were talking about him. So again, maybe he’s just a better professor than I am… so we weren’t looking for positive or negative. We were just looking at the content themes… and so the kinds of themes we looked at were: Does the student mention the professor’s personality? Do they say nice… or rude… or funny? Do they mention the professor’s appearance? Do they say ugly… pretty? Do they comment on what he or she is wearing? Do they talk about competence, like how well-qualified their professor is to teach this course? And how do they refer to their professor? Do they call their professor a teacher? Or do they call their professor, rightfully, a professor? And these are the categories in which we really noticed some statistically significant differences. So we found that my male co-author was more likely to get comments that talked about his competence and his qualification, and he was much more likely to be called professor… which is interesting because at the time he was a graduate student. So he didn’t have a doctorate yet… he wouldn’t really technically be considered a professor… and on the other hand, when we looked at comments that students wrote about me, whether they were positive or negative… nice or mean comments… they talked about my personality. They talked about my appearance and they called me a teacher. So whether they were saying she’s a good teacher or a bad teacher… that’s how they chose to describe me.

Rebecca: That’s really fascinating. I also noticed, not just students having these conversations, but in the Chronicle article that you published, there was quite a discussion that followed on this topic as well. In it, there were a number of comments where women responded empathetically and also encouraged some strategies to deal with the issues. But then there was at least one very persistent person who kept saying things like “males also are victimized.” How do we make these conversations more productive, and is there something about the anonymity of these environments that makes these comments more prevalent?

Kristina: I think that’s a really great question. I wish I had a full answer for you on how we could make conversations like this more productive. I definitely think that there’s a temptation for men who hear these experiences to almost take it personally… as though when I write this article, I’m telling men: “You have done something wrong…” when that’s not really the case… and my co-author, as we were looking at these results and reading each other’s comments so we could code them for the themes we were observing… he was almost apologetic. He was like: “Wow, I haven’t done anything to deserve these different kinds of comments that I’m getting. You’re a perfectly nice woman, I don’t know why they’re saying things like this about you.” So, I think it helps to frame the conversation in terms of what steps we can take, because if I’m just talking about how terrible it is to get mean reviews on Rate My Professors, that’s not really offering a positive: “Here’s a thing that you can do to help me…” or “Here’s something that you can do to advocate for me.” A lot of times what men who are listening need… maybe they’re feeling helpless… maybe they’re feeling defensive… what they need is a strategy. Something they can do going forward to help women who are experiencing these things.

Rebecca: I noticed that some of the comments in relationship to your Chronicle article indicated ways that minimize your authoritative role to avoid certain kinds of comments and I wonder if you had a response to that… and I think we don’t want to diminish our authoritative roles as faculty members, but I think that sometimes those are the strategies that we’re often encouraged to take.

Kristina: I agree. I definitely noticed that a lot of the response to how we can prevent this from happening got into “How can we shelter me from these students,” as opposed to “How can we teach these students to behave differently?” I definitely think the anonymous nature of student evaluation comments, Rate My Professors, and internet comments in general plays a role. You definitely notice when you go to an internet comment section that anonymous comments tend to be the worst ones… and so the idea is that it’s not that an anonymous platform causes people to behave in sexist ways, it’s that there’s underlying sexism, and the anonymous nature of these platforms just gives us a way to observe the underlying sexism that was already there. So the important thing is not to take away my role as the person in charge. The important thing is to teach students, both men and women, that women are in positions of authority and that there’s a certain way to communicate professionally. Student evaluations can be helpful. I’ve had helpful comments that helped me restructure my course. So, it’s a way to practice engaging professionally and learning to work with women. My students are going to work for women and with women for the rest of their lives. They need to learn, as college students, how to go about doing that.

John: Do you have any suggestions on how we could encourage that? These attitudes are part of the culture, and in individual courses the impact we have is somewhat limited. What can we do to try to improve this?

Kristina: Well, I’ve definitely made the case previously to others on my campus and at other campuses that the sort of lip-service approach to compliance with things like Title IX isn’t enough. So, I don’t know if at your institution there’s some sort of online Title IX training, where you know…

John: Oh, yeah…

Kristina: …you watch a video

Rebecca: Yeah…

Kristina: … you watch a video… you click through the answers… it tells you: “are you a mandatory reporter?” and “what should you do in this situation?” …and I think a lot of people don’t really take that very seriously; it’s just viewed as something to get through so that the university cannot be sued in case something happens. So, I don’t think that that’s enough. I think that cultural change and widespread buy-in are a lot more important than making sure everyone takes their Title IX training. So, in our work, I mentioned that we did this in two parts, and the second part just looked at the ordinal evaluations. The 1 to 5 scale, 5 being the best… rank your professor on how effective he or she is… and not only are students perhaps not very well qualified to evaluate pedagogical practices, but once again we found that even in these identical online courses, a man received higher ordinal evaluations than a woman did. So what this tells me is that, as a campus culture, we should stop focusing on using student evaluations in promotion and tenure, because they’re biased against women… and we should stop encouraging students to write anonymous comments on their evaluations. We should either make them non-anonymous or we should eliminate the comment section altogether. Because if we’re providing a platform, it’s almost sanctioning this behavior. If we’re saying, “we value what you write in this comment,” then we’re almost telling students your sexist comment is okay and it’s valued and we’re going to read it… and that’s not a culture that’s going to foster a positive environment for women.

John: Especially when the administration and department review committees use those evaluations as part of the promotion and tenure review process.

Kristina: Exactly. I mean when I think about the prospect of my department chair or my Dean reading through all the comments that I had to read through when I did this research, I’m pretty sure that he would get an idea of who I am as a faculty member that, to me…maybe I’m biased… but to me, is not very consistent with actually what happens in my classroom.

Rebecca: It’s interesting… we talk about anonymity providing more of a platform for this to become present. But I’ve also had a number of colleagues share their own examples of hate speech and inappropriate sexual language when anonymity wasn’t a veil that could be hidden behind, increasingly so recently. So I wonder if your research shows any increase in this behavior, and why?

Kristina: We haven’t really looked at this phenomenon over time. That’s just not something that we’ve been able to look at in our data, but I would like to continue to update this study. I definitely think that the current political climate is creating an atmosphere where perhaps people don’t feel that saying things that are racist or sexist is as shameful as they once perceived it to be. There’s definitely a big stigma against identifying yourself as a Nazi or even Nazi-adjacent, but while that stigma is still there, it seems to be lessening a little bit. I don’t know necessarily that I’ve seen an increase in what kinds of behavior I’m observing from my students, but I definitely will say that a student… an undergraduate student… gave me his number on his final exam this last semester, like I was going to call him over the summer. So, it definitely happens in non-anonymous settings too.

John: Now, there have been a lot of studies that have looked at the effect of gender on course evaluations, and all that I’ve seen so far find exactly the same type of results: that there’s a significant penalty for being female. One of those, if I remember correctly (and I think you referred to it in your paper), was a study using a large collection of online classes, where they changed the gender identity of the presenters randomly in different sections of the course, and they found very different types of responses and evaluations.

Kristina: Yes, that was definitely a study that… I hate to say we tried to emulate, because we were limited in what we could do in terms of manipulating the gender identity of the professor… but I think that their model is just one of the most airtight ways to test this. I agree, this is definitely something that’s been tested before. We’re not the first ones to come to this conclusion… I think our research design is really strong in terms of the identical nature of the online courses. When I was talking about this research with a woman in political science who’s a colleague of mine, the question was: how many times do we have to publish this before people are going to just believe us… that it’s the case? The response tends to be: “Well, maybe women are just worse professors, or maybe there’s some artifact in the data that is causing this statistically significant difference.” I don’t know how many times we have to publish it before administrations and universities at large take notice… that this is a real phenomenon… that it’s not just a random artifact of one institution or one discipline.

John: It seems to be remarkably robust across studies. So, what could institutions do to get around this problem? You mentioned the problem with relying on these for review. Would peer evaluation be better, or might there even be a similar bias there?

Kristina: I definitely think peer evaluation is an alternative that’s often presented when we’re thinking of other ways to evaluate teaching effectiveness. Peer evaluation may be subject to the same biases. I don’t know that literature well enough off the top of my head, but I imagine that it could suffer from the same problems, in that faculty members who are women… faculty members of color… faculty members with thick accents, with English that’s difficult to understand… might still be dinged on their peer evaluations. Although we would hope that people who are trained in pedagogy and who’ve been teaching would be less subject to those biases. We could also think about self-evaluation. Faculty members can generate portfolios that highlight their own experiences, and say here’s what I’m doing in the classroom that makes me a good teacher… here are the undergraduate research projects I’ve sponsored… here are the graduate students who’ve completed their doctoral degrees under my supervision… and that’s a way to let the faculty member take the lead in describing his or her own teaching. We could also just weight student evaluations. If we know that women receive 0.4 points lower on a five-point scale, then we could just bump them up by 0.4. None of these solutions are ideal. But I think some of the really sexist and misogynist problems, in terms of receiving commentary that truly sexually objectifies female professors… that could be eliminated with almost any of these solutions. Peer evaluation… removing anonymous comments… self-evaluation… and that’s really the piece that is the most dramatically effective in women being able to experience higher education in the same way that men do.

Rebecca: So, obviously, if there’s this bias in evaluations, then there’s likely to be the same bias within the classroom experience as well. We just don’t necessarily have an easy way of measuring that. But if you’re using teaching strategies that use dialogue and interactions with students rather than a “sage on the stage” methodology, I think that in some cases we make ourselves vulnerable, and that does help teaching and learning, because it helps our students understand that we’re not perfect experts in everything… that we have to ask questions and investigate and learn things too… and that can be really valuable for students to see. But we also want to make sure that we don’t undermine our own authority in the classroom either. Do you have any strategies or ideas around that kind of in-class issue?

Kristina: Yeah, I think that the bias against women continues to exist in a standard face-to-face class. One time, when I was teaching a game theory course, I was writing an equation on the board, and it was the last three minutes of class and we were trying to rush through the first-order conditions and all sorts of things… and I had written the equation wrong, and as soon as my students left the classroom I looked at it and I went, “oh my gosh, I’ve written that incorrectly,” and so the next day when they came back to class, I felt like I had two choices: we could either just move on and I could pretend like it never happened, or I could admit to them that I taught this wrong… I wrote this wrong. So I did. I told them, “Rip out the page from yesterday’s notes because that formula is wrong,” and I rewrote it on the board… and I got a specific comment in my evaluation saying she doesn’t know what she’s talking about… that she got this thing wrong… and while I don’t have experimental evidence that says that a man who does the same thing won’t get penalized in the same way, to me it very much wrapped into that idea that women are perceived as less qualified than men. So whether it’s because we’re referred to as teachers, or whether it’s because the student evaluations focused more on men’s competence, women are just seen as less likely to be qualified. How many times have you had a male TA and the students go up to the TA to ask questions about the course instead of you? So, I definitely think it’s difficult for women in the classroom to maintain that authority while still acknowledging that they don’t know everything about everything. No professor could. I mean, we all think we do, of course… So, I think owning the fact that there are things you don’t know is important, no matter what your gender is, but I also try to prime my students. I tell them about the research that I do.
I tell them about the consistent studies in the literature showing that students are more likely to perceive and talk about women differently, because I hope that just making them aware that this is a potential issue might adjust their thinking. So that if they start thinking, “wow, my professor doesn’t know what she’s talking about,” they might take a moment and think, “would I feel the same way if my professor were a man?”

Rebecca: I think that’s an interesting strategy. We’ve found that a similar kind of priming of students about evidence-based practices in the classroom works really well… and gets students to think differently about things that they might be resistant to… So, I could see how that might work, but I wonder how often men do the same kind of priming on this particular topic.

Kristina: I don’t know. That would be an interesting next experiment to run: do a treatment in two face-to-face classes, with the priming done in one by a woman teaching the course and in the other by a man, and see if it had any kind of different effect. I think a lot of times men perhaps aren’t even aware that these issues exist. So, talking about the way that women experience teaching college differently… if men aren’t having this conversation in their classroom, it’s probably not because they’re thinking, “oh man, I really hope my female colleagues get bad evaluations so that they don’t get tenure.” It’s probably just because they aren’t really thinking about this as an issue… just because, as a white man in higher education, you very much look like what professors have looked like for hundreds of years… and so it’s just a different experience, and perhaps something that men aren’t thinking about… and that’s why getting the message out there is so important, because so many men want to help. They want to make things more equitable for women, and I think when they’re made aware of it, and given some strategies to overcome it, they will. I’ve definitely found a lot of support in a lot of areas in my discipline.

John: …and things like your Chronicle article are a good place to start too… just making this more visible more frequently and making it harder for people to ignore.

Kristina: I agree. I think being able to speak out is really important, and I know sometimes women don’t want to speak out, either because they’re not in a position where they can or because they’re fearing backlash from speaking out. So, I think it’s on those of us who are in positions where we can speak up. I think it falls on us to try and say these things out loud, so that women who can’t… their voices are still heard.

John: Going back to the issue of creating teaching portfolios for faculty… that’s a good solution. Might it help if faculty can document the achievement of learning outcomes and so forth, so that would free you from the potential of both student bias and perhaps peer bias? If you can show that your students are doing well compared to national norms or compared to others in the department, might that be a way of getting past some of these issues?

Kristina: I definitely think that’s a great place to start, especially in demonstrating what your strategies are to try and help your students achieve these learning outcomes. I always still worry about student level characteristics that are going to affect whether students can achieve learning outcomes or not. Students from disadvantaged backgrounds… students from underrepresented groups… students who don’t come to class or who don’t really care about being in class… these are all students who aren’t going to achieve the learning outcomes at the same rate as students who come to class… who are from privileged backgrounds… and so putting it on a professor alone to make sure students achieve those learning outcomes, still can suffer from some things that aren’t attributable to the professor’s behavior.

John: As long as that’s not correlated across sections, though, that should get swept out. As long as the classes are large enough to get reasonable power.

Kristina: Yeah, absolutely. I think it’s definitely time for more evaluation of how these measures are useful. I know there have been a lot of op-ed articles in the New York Times, and I think there was one in Inside Higher Ed, really questioning some of these assessment metrics. So, I think the time is now to really dig into these and figure out what they’re really measuring.

Rebecca: You’ve also been studying bias related to race and language, can you talk a little bit about this research?

Kristina: Yes, so this is a piggyback project. After I finished the gender bias paper, what I really wanted to do was get into race, gender, and accented English. Because I think it’s not only women who are suffering when we rely on student evaluations, it’s people of different racial and ethnic groups… it’s people whose English might be more difficult to understand. What we were able to do in this work is control for everything. So, we taught completely identical online courses, and I didn’t even allow the professors to interact with the students via email. I was… like Cyrano de Bergerac… writing all of their emails for them over a summer course, so they were handling the course-level stuff, just not the student-facing things. They were teaching their online course, but they weren’t directly interacting with the students in a way that wasn’t controlled… and the faculty members recorded these welcome videos, which had their face… their English, whether it was accented or not… and I asked some students who weren’t enrolled in the course to identify whether these faculty members were minorities and what their gender was. Because what’s important isn’t necessarily whether the faculty member identifies as a minority or not, but whether the students perceive them as a minority… and even after controlling for all of that… controlling for everything… when everything was identical, I thought there was no way I was going to get any statistically significant results, and yet we did. We controlled even for the final grades in the course… we even controlled for how well students performed… and the only significant predictors for those ordinal evaluation scores were whether the professor was a woman and whether the professor was a minority. We didn’t see accented English come up as significant, probably because it’s an online course.
They’re just not listening to the faculty members beyond these introductory welcome videos. But we did see something when we asked students to identify the gender and the race of the professors based on a picture. We asked the students: “Do you think you would have a difficult time understanding this person’s English?” and we found that for Asian faculty members, without even hearing them speak, students very much thought that they would have difficulty understanding their English… and then we have a faculty member here who has blonde hair and blue eyes… but speaks with a very thick Hispanic accent, and of the students who looked at his picture… none of them perceived that they would have a difficult time understanding his English. So, I think there are a lot of biases on the part of students just based on what their professors look like and how they sound.

John: Can you think of any ways of redesigning course evaluations to get around this? Would it help if the evaluations were focused more on the specific activities that were done in class… in terms of providing frequent feedback… in terms of giving students multiple opportunities for expression? My guess is it probably wouldn’t make much of a difference.

Kristina: I think, as of now, the way our course evaluations here at Texas Tech University work is that students are asked to rate their professors on a 1 to 5 scale on things like “did the professor provide adequate feedback?” and “was this course a valuable experience?” and “was the professor effective?” and that gives an opportunity for a lot of: “I’m going to give fives to this professor, but only fours to this professor,” even when the behaviors in class might not have been dramatically different. Now, this is also speculation, but maybe if there was more of a “yes/no”: “Did the professor provide feedback?” “Were there different kinds of assignments?” “Was class valuable?” Maybe that would be a way to get rid of those small nuances. Like I said, when we did our study, the difference was .4 on a five-point scale, and so these differences aren’t maybe substantively huge. Maybe it’s the difference between a 4 and a 4.5. Substantively, that’s not very different. So, maybe if we offered students just a “yes/no”… “Were these basic expectations satisfied?”… maybe that could help, and that might be something that’s worth exploring. I definitely think that either removing the comment section altogether, or providing some very specific how-to guidelines on what kinds of comments should be provided, is the way to address these open-ended “say whatever you want” comments… “are you mad?” …“are you trying to ask your professor out?” Trying to eliminate those comments would be the best way to make evaluations more useful.

John: You’re also working on a study of women in academic leadership. What are you finding?

Kristina: A very famous political science study, done by a woman named Jennifer Lawless, looked at the reasons why women choose not to run for office. We know that women are underrepresented in elective office… the country is over half women, but we’re definitely not seeing half of our legislative bodies filled with women. What the Lawless and Fox study finds is not that women can’t win when they run, it’s that women don’t perceive that they’re qualified to run at all. So, when you ask men, “do you think you’re qualified to run for office?”, men are a lot more likely to say: “oh yeah, totally… I could be a Congressman,” whereas women, even with the same kind of qualifications, are less likely to perceive themselves as qualified. So, what my co-author Jared Perkins at Cal State Long Beach and I decided to do is see whether this phenomenon is the same in higher education leadership positions. One thing that’s often stated is that the best way to ensure that women are treated equally in higher education is just to put more women in positions of leadership… that we can do all the Title IX trainings in the world, but until more women are in positions of leadership, we’re not going to see real change… and we wanted to find out why we haven’t seen that. So, 56 percent of college students right now are women, but when we’re looking at R1 institutions, only about 25% of university presidents are women, and the numbers can definitely get worse depending on what subset of universities you’re looking at. We did a very small pilot study of three different institutions across the country. We looked at an R1, an R2, and an R3 Carnegie classification institution. Our pilot study was small, but our initial findings seem to show that women are not being encouraged to hold these offices at the same rate as men are.
So what we saw was that… we asked men, “have you ever held an administrative position at a university?” About 60% of the men reported that they had, and about 27% of women reported that they had. We also asked, “Did you ever apply for an administrative position?” and only 21% of the men said that they had applied for an administrative position, while 27% of women said they had applied. Of course, it could be that they misunderstood the question… maybe they thought we meant “Did you apply and not get it?”… but we also think that there may be something to explore here: when women apply for these positions, they get them. There are qualified women ready to go and ready to apply, but men may be asked to take positions… encouraged to take positions… or appointed to positions where there might be opportunities to say: “There’s a qualified woman. Let’s ask her to serve in this position instead.”

John: That’s not an uncommon result. I know in studies of labor markets, starting salaries are often comparable, but women are less likely to be promoted, and some studies have suggested that one factor is that women are less likely to apply for higher-level positions. Actually, there’s even more evidence suggesting that women are less likely to apply for promotions, higher pay, and so on, and that may be a common factor that we’re seeing in lots of areas.

Kristina: Absolutely. I definitely think that university administrations need to place a priority on encouraging women to apply for grants, awards, and leadership positions, because there are plenty of qualified women out there; we just need to make sure that they’re actively being encouraged to take these roles.

Rebecca: Which leads us nicely to the motherhood penalty. I know you’re also doing some research in this area about being a mother and in academia, can you talk a little bit about how this impacts some of the other things that you’ve been looking at?

Kristina: Absolutely. The idea to study the motherhood penalty in academia stemmed from reading some of those “Rate My Professors” comments. At my institution, we didn’t have a maternity leave policy in place… so I came back to work two weeks after having my child, and I brought him to work. My department was supportive. I just brought him into my office and worked with the baby for the whole semester… and it was difficult, it was definitely a challenge to try and do any kind of work with a baby in a sling in front of your chest… but one of my “Rate My Professors” evaluations from the semester that I had my son mentioned that I was on pregnancy leave the whole semester and was no help. And this offended me to my core, having been a woman who took two weeks of maternity leave before coming back to work… because I wasn’t on maternity leave the whole semester, and in addition… if I had been, what kind of reason is that to ding a professor on her evaluation? She birthed a human child and is having to take care of that child… that shouldn’t ever be something that comes up in a student comment about whether the professor was effective or not.

So what we want to look at are the ways in which women are penalized when they have children. Our data collection is very much in its initial stages on this project… but as we think through our anecdotal experiences: when departments schedule meetings at 3:30 or 4:00 p.m., if women are acting as the primary caregiver for their children (which they often are), this disadvantages them because they’re not able to be there. You have to choose whether to meet your child at the bus stop or to go to this department meeting… or networking opportunities, which are often difficult for women to attend if they’re responsible for childcare. Conferences have explored the idea of having childcare available for parents because, a lot of times, new mothers are just not able to attend these academic conferences… which are an important part of networking in most disciplines… because they can’t get childcare. At the Southern Political Science Association meeting that I went to in January, a woman brought her baby and was on a panel with her baby. So, I think we’re making good strides in making sure mothers are included, but what we want to explore is whether student evaluations will reflect differences depending on whether students know that their professor is a mother or not. So, how would students react if in one class I just said I was cancelling office hours without giving a reason, and then in another class I said it was because I had a sick child or had to take my child to an event? That’s where we’re going with this project, and we really, really hope to dig into the relationship between the motherhood penalty and student evaluations.

Rebecca: Given all of the research that you’re doing and the things that you’re looking at, how do we start to change the culture of institutions?

Kristina: Well, I’m thinking that we’re headed in the right direction. Like I said, I see a lot more opportunities at conferences for childcare and for women to just bring their children. I see a lot of men who are standing up and saying, “hey, I can help, I’m in a position of power and I can help with this”… and without our male allies helping us… I mean, men had to give women the right to vote, we didn’t just get that on our own. So, we really count on allies to put us forward for awards. One important distinction that I learned about from a keynote speaker is the difference between mentoring and sponsoring. Mentoring is a great activity; we all need a mentor, someone we can go to for advice, someone we can ask for help, someone who can guide us through our professional lives. But what women really need is a sponsor, someone who will publicly advocate for a woman, whether that’s putting her in front of the Dean and saying, “Look at the great work she’s doing,” or writing a letter of recommendation saying, “This woman needs to be considered for this promotion or for this grant.” Sponsorship, I think, is the next step in making sure that women are supported. A mentor might advise a woman on whether she should miss that meeting or that networking opportunity to be with her child. A sponsor would email and say, “we need to change the time, because the women in our department can’t come; they have events that they need to be at with their children.”

John: A similar article appeared in a Chronicle post in late February or maybe the first week in March by Michelle Miller, where she made a slightly different argument. Mentoring is really good… and we need mentors, but she suggested that sometimes having fans would be helpful. People who would just help share information… so when you do something good, people who will post it on social networks and share it widely, in addition to the usual mentoring role. So, having those types of connections can be helpful, and certainly sponsors would be a good way of doing this.

Rebecca: I’ve been seeing the same kind of research and strategies being promoted in the tech industry, which I’m a part of as well. So, I think it’s a strategy that a lot of women are advocating for and their allies are advocating for it as well. So hopefully we’ll see more of that.

Kristina: I think the idea of fans and someone to just share your work is hugely important. I have to put in a plug for the amazing group: “Women Also Know Stuff.”

Rebecca: Awesome.

Kristina: It’s a political science specific website, but there are many offshoots in many different disciplines, and really it’s just the chance that, if you say, “I need to figure out somebody who knows something about international trade wars,” well, you can go to this website and find a woman who knows something about this, so that you’re not stuck with the same faces… the same male faces… that are telling you about current events. So “Women Also Know Stuff” is a great place. They share all kinds of research and they just provide a place where you can look for an expert in a field who is a woman. I promise they exist.

Rebecca: I’ve been using Twitter to do some of the same kind of collection. There might be topics that I teach that I’m not necessarily familiar with… scholars who are not white men… And so, I put a plug out like, “Hey, I need information on this particular subject. Who are the people you turn to who are not?”

John: You just did that not too long ago.

Rebecca: Yeah, and, you know, I got a giant list and it was really helpful.

John: One thing that may help alleviate this a little bit is now we have so many better tools for virtual participation. So, if there are events in departments that have to be later, there’s no reason why someone couldn’t participate virtually from home while taking care of a child, whether it’s a male or female. Disproportionately, it tends to be females doing that but you could be sitting there with a child on your lap, participating in the meeting, turning a microphone on and off, depending on the noise level at home, and that should help… or at least potentially, it offers a capability of reducing this.

Rebecca: I know someone who did a workshop like that this winter.

John: Just this winter, Rebecca was doing some workshops where she had to be home with her daughter who wasn’t feeling well and she still came in, virtually, and gave the workshops and it worked really well.

Kristina: Yeah, I definitely think that that’s a great way to make sure that everyone’s included, whether it’s because they’re mothers or fathers or just unavailable… and I think that’s where we look to sponsors… the department chairs… department leadership to say, “This is how we’re going to include this person in this activity,” rather than it being left up to the woman herself to try and find a way to be included. We need to look to put people in positions of leadership to actively find ways to include people regardless of their family status or their gender.

Rebecca: This has been a really great discussion, some really helpful resources and great information to share with our colleagues across all the places that…

John: …everywhere that people happen to listen… and you’re doing some fascinating research and I’m going to keep following it as these things come out.

Rebecca: …and, of course, we always end asking what are you gonna do next. You have so many things already on the agenda but what’s next?

Kristina: So next up on my list is an article that’s currently under review that looks at the “leaky pipeline.” So the leaky pipeline is a phenomenon in which women, like we were saying, start at the same position as men do, but then they fall out of the tenure track, they fall out of academia more generally… they end up with lower salaries and lower positions. So, we’re looking at what factors, what administrative responsibilities, might lead women to fall off the tenure track. We already know that women do a lot more service work and a lot more committee work than men do, so we’re specifically looking at some other administrative responsibilities that we think might contribute to that leaky pipeline.

Rebecca: Sounds great. Keep everyone posted when that comes out and we’ll share it out when it’s available.

Kristina: Thanks.

John: …and we will share in the show notes links to papers that you published and working papers and anything else you’d like us to share related to this. Okay, well thank you.

Kristina: Thank you.
[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts, and other materials on teaforteaching.com. Music by Michael Gary Brewer.

14. Microcredentials

In this episode, we discuss the growing role of microcredentials in higher education with Jill Pippin (Dean of Extended Learning at SUNY-Oswego), Nan Travers (Director of the Center for Leadership in Credentialing Learning at Empire State College), and Ken Lindblom (Dean of the School of Professional Development at the State University of New York at Stony Brook). Jill, Nan, and Ken are members of a State University of New York task force on microcredentials.

Transcript

Rebecca: Our guests today are: Jill Pippin, the Dean of Extended Learning at SUNY-Oswego; Nan Travers, the Director of the Center for Leadership in Credentialing Learning at Empire State College; and Ken Lindblom, the Dean of the School of Professional Development at the State University of New York at Stony Brook.

John: Welcome, everyone!

Nan: Thank you. Hello.

Jill: Thank you.

Ken: It’s good to be here.

John: Our teas today are:

Rebecca: Jasmine green tea.

John: Jill?

Jill: I actually don’t drink tea.

John: Oh… here we go again…. Okay, Nan?

Nan: I’m drinking Celestial Seasonings Bengal Spice.

John: …and, Ken?

Ken: My tea today is coffee.
[laughter]

John: …we get a lot of that…. Ok.
…and I have black raspberry green tea from Tea Republic.
So, today we’re going to be talking about microcredentials. Would someone like to tell us a little bit about what microcredentials are?

Ken: Sure, I’d be happy to tell you a bit about what microcredentials are. There are traditional microcredentials that most people know all about, such as minors and certificates… either credit or non-credit certificates. So they’re pieces of larger degrees, but there are now new digital microcredentials that are having a bigger impact on the field, and that internet technology has allowed us to take more advantage of. So there are internet certificates, and there are also digital badges, which are icons that can be put on a LinkedIn resume or shared through somebody’s website or on a Twitter feed… and they indicate that the earner of the microcredential has developed particular skills or abilities that will be useful in the workplace.

Nan: …and just to add to what Ken has said, with the open digital badges that are out there, they actually hold on to all of the information around the assessed learning…. the different competencies that an individual has, and the ways in which they’ve assessed it. So if they’re used, let’s say, in the workplace, an employer could actually click into the badge and be able to see exactly how the person has been assessed… which gives a lot of information that a traditional transcript does not give, because it does have that background information in there.

John: Who can issue microcredentials? or who does issue microcredentials?

Jill: …really industry, colleges, various and sundry types of organizations.

Ken: Yeah, in fact, Jill’s right. There’s no real regulation of microcredentials right now. So they can be given by any group that simply creates a microcredential and awards it to someone… and then they say what it is. So the microcredential’s value is really based on the reputation of the issuer.
Honestly, universities and colleges are pretty slow to get to this kind of technology, as we often are. So it’s new for us, but there are private companies that have been issuing them, and there have been individual instructors at the college level, and especially at the K-12 level, who have been using badge technology to motivate and to assess student work for quite a few years… but for the university level, this is exciting new territory that we’re really jumping into now.

Jill: Yeah, microcredentials are shorter… they’re more flexible… and they’re very skill-based… and so they’re new for colleges, I think, in a lot of ways… maybe not so much for the non-credit side of the house… those that have been doing training programs and things that are very practical… skill-based pieces… but in terms of having ladders to credit, and having credit courses seen through the lens of a smaller chunk of time, topic area, and focus… I think that’s the real change, or the real difference, in microcredentialing from a traditional environment…

Nan: …and what’s really important here is that the demand for these, in many ways, is coming from industry, where they really need better signals as to what people know and what they can do and, as Jill just mentioned, they’re very skills-based. This enables somebody to get a good idea about what a potential employee is able to do. So the demand for microcredentials is really increasing as industry uses them more and more, and there are many different groups that are really focused on using either microcredentials generally or specifically badges (which are really a type of microcredential). There are some projects right now where whole cities have come together and developed microcredentials and badging systems to make sure that all people in the community have the ability to show those skills as they go for employment. There are also some companies that are starting to come out. For example, there’s a company called “Degreed,” which is degreed.com. It’s a company that enables people to get their skills assessed and microcredentialed, while at the same time working with companies… there are some big companies, such as Bank of America… there are many other ones listed on their website… and they work with the companies to identify the different skills that people need… and then credential the people who are trying to apply with those skills… so that there’s a real matching. It becomes a competency-based employment matching system, in many ways.

Ken: Some of the ways that badges have been useful are exactly what Nan and Jill are saying: it’s come from the employers, who are asking for specific information about what students will come to them with. We are also able to develop badges in concert with specific employers, if there’s particular training or education or sets of skills or abilities that they’d like their applicants to have… but there’s also another great advantage to microcredentials, particularly badges, in that they allow us to show the in-depth learning that goes on in classes. My other hat, other than Dean, is that I’m a Professor of English, and in a lot of humanities courses the direct connection to skills isn’t as obvious to people as it is in an area like, say, teacher education. So what we can do with a badge is point out the specific skills that students are developing in a class on rhetorical theory, or on Shakespearean plays, or whatever. We can point out the analytical learning that they’re doing, the kind of critical thinking, the kind of communicative writing, so that those courses translate into the kinds of skills that people are looking for… and of course, our students are picking those things up, but now we can make it more visible as a result of the technology of digital badges.

Jill: It’s an exciting time in higher education. I mean it really is, in terms of microcredentials, because higher ed has the opportunity to validate those credentials. A lot of them, as we said before, have been out there… non-credit skill-based smaller chunks of learning… but the idea of having them all kind of on the same playing field… and almost apples-to-apples in terms of validating learning outcomes… and making sure they’re part of a longer pathway toward higher education. It’s really exciting.

John: When someone sees a transcript and sees English 101 or English 373 or Eco 101, it doesn’t really tell the employer that much about what the students actually learned, but the microcredentials provide information about specific skills that would be relevant. Is there much evidence of the impact this has on employability or in terms of career placement?

Nan: There has been some work being done on that and, as I mentioned, there are some companies that are even starting to get into the field, because there is such a high demand for companies to be able to do competency-based hiring. There’s an initiative that the Lumina Foundation has been funding called Connecting Credentials and, in that initiative, they’ve been looking at microcredentials as a piece of that. That initiative has brought many different businesses, organizations, and higher education institutions together at the table to really discuss ways in which credentials can better serve all of those different sectors… and so some of the work they have been doing, which can be viewed at connectingcredentials.org, has really been looking at the impact of microcredentials on employability.

John: Based on that, I would think that when colleges are coming up with microcredential programs, it might be useful to work with businesses and to get feedback from them on what types of skills they’re looking for… for guidance or some help in designing microcredential programs?

Jill: Absolutely.

Ken: Yeah, I can talk a little bit about some experience we’ve had at Stony Brook on that. We’ve been working with an organization called FREE, which is Family Residences and Essential Enterprises. They’re a large agency that supports students, children, and adults with disabilities… and we worked with them to create several badges that align directly with their national standards and the certification needs of their employees. So now we’ve got a system where one of the things that their employees need is food literacy: if they’re running a house for people with disabilities, people who need assistance, they have to be able to demonstrate that they can produce healthy, nutritious meals… and so once they’ve gone through this training, which is specifically aligned with their curriculum, having earned the badge will demonstrate that the employee has developed that set of skills. We’ve also got one for them on leadership among their managers, and we’re developing more… and because we developed those with the employer, the employer is now actually contracting with us to deliver that instruction to their employees. It’s gone really well, and we’ve issued well over a hundred badges to that agency in just about a year.

John: Excellent.

Nan: There’s also, as we think about it from an employability perspective… there is another important area that’s happening with microcredentials and badges in higher education… which is to really be looking at some of those more liberal arts kinds of skills: being able to be a good communicator… having good resiliency… these are also very important pieces that go into being a good worker… and so there are many institutions, as we look across the United States, that are really looking at some of these broader skills. There’s also some work being done on the student services side, which is really looking at how students have been engaging and being involved within the institution. So, there are these other pieces that also help to build that whole person… how somebody really is involved in higher education… what they know… what they can do… the kinds of different volunteer pieces… as well as the different kinds of things that they have engaged in while they are there: working in teams, doing different projects. So, there are lots of different ways of using those badges. There are also some institutions that are using these badges as a beginning point for students. For some people, it’s scary to start higher ed again, and being able to take a smaller program that actually has a credential at the end of it is a really motivating thing. Students come away saying: “Well, I did that. I can do more…” and so it becomes a really good recruitment tool… but it is also a really good student support tool to help people start the path of education as well.

Ken: …and you know, Nan, that’s an important point too… and it works the other way for people who are in, let’s say, a master’s degree program… it’s not that they don’t learn anything new until the very end when they’re issued the degree… they’re actually building skills and developing abilities all along the way. So, what the digital badge or a microcredential can do is make visible the learning that they’re doing along the way. So after three or four courses, they’ve earned a credential that demonstrates that value. They don’t have to wait until they finish 10 or 11 courses.

John: So, it lets them have small goals along the way, and they’re able to achieve success, and perhaps help build a growth mindset for those students who might not have done that otherwise.

Ken: Yes.

Nan: Yes.

Ken: Well put, John.

John: How does this integrate with traditional courses? Are there badges that are offered through multiple courses? Or do individual courses offer multiple badges or microcredentials?

Ken: It can go in lots of different ways. There are instructors who build badging into their own classes. Those aren’t really microcredentials the way we’re talking about them. We’re talking about microcredentials that are somewhere between a course and a degree. So, at Stony Brook, for example, we have what we call a university badge program, and in order for a university badge to exist, it must require between two and four 3-credit courses… so a total of 6 to 12 credits. That’s the point at which students can earn a university badge at Stony Brook University. Those courses work together. So, for example, we have a badge in design thinking, and in order to earn that badge, students must get at least a “B” in two courses that we have on design thinking. We also have a badge in employer-employee relations within our Human Resources program… and in order to earn that badge, there are three specific classes that students have to take, earning at least a “B” in each of those classes.

Nan: So, there is also another approach in terms of thinking about how microcredentials can intersect and interface with the traditional credentials, the traditional degrees, and that’s through different forms of prior learning assessment. What we also see is that students come with licenses, certifications, different kinds of these smaller credentials that represent verifiable college-level learning… and through either an individualized portfolio assessment process or, at our institution, SUNY Empire State College, a process called professional learning evaluations… we go in and evaluate training, licenses, and certifications, and those are evaluated for college credit. Those are then also integrated within the curriculum and treated as… really, transfer credit… they’re advanced standing credit. So, students also have the ability to bring knowledge with them through the microcredentials… they’ve been verified by another organization, and then we re-verify that learning at a college level to make sure that it is valid learning for a degree… and then integrate it within the curriculum.

John: In Ken’s case, it sounds like the microcredential is more than a course; in other cases it might be roughly equivalent to a course… or might it sometimes be less than a course? Where a course might provide individuals with specific skills, some of which they might also gain in other courses? Or is that less common?

Jill: You’re right, there’s a spectrum. So, for instance, if you look at it from a traditional standpoint, a technology course might already have an embedded microcredential in the form of OSHA training, for example. That’s a microcredential, in that particular example, and so we have the opportunity to look at the skill-based smaller chunks that may be very specific to an occupation, or to an employer’s need for someone to have those skills, and to be able to put some framework around it so that it can be understood and communicated to an employer.

Ken: One of the exciting things about badging and microcredentials right now, which Jill alluded to earlier, is that there really isn’t any regulation regarding them yet. So when you say a college degree, that has a standardized meaning, but when you say a microcredential or a digital badge, there’s no standardized meaning whatsoever. So what we’re doing is creating different versions of microcredentials, and their meaning is dependent on that specific situation. One of the things that’s exciting about being a university or a college is that we can really bring academic rigor to these, no matter how many skills and what level of learning the digital badge represents… because it comes from a university, particularly a SUNY, it’s going to be a high-quality badge. But it’s incumbent upon the one who’s reading the badge to understand what that badge actually means. Depending on where it comes from, the size of the badge, and the number of skills and abilities aligned to it, the badge means different things, and that’s why it’s so important that the badge includes the metadata… all that in-depth information that pops up when you click on the digital badge icon.

Nan: In addition, nationally, the IMS Global Learning Consortium has been developing standards, and hopefully there’ll be national standards around the data, how it’s reported, and allowing people to really understand and compare the attributes and criteria of how it’s been assessed. So there’s a great deal of work being done at a national level to really think about how we can have some good standardization and guidelines around what we mean by certain things in digital badging. So I think that’s something to pay attention to in terms of what’s coming about.

Ken: Yes, it’s an exciting space before the standardization has been done, because there’s a lot of innovative potential there, but as we standardize there’ll be more comparability, and that’ll be easier to do. So, we may lose some of that innovation later, but we’ll just have to see. It’s very interesting to be at the beginning of a process like this, because degrees were really kind of finalized at the end of the 19th century, and now, at the beginning of the 21st century, we’re reinventing that kind of work.

John: Now, earlier it was suggested that other groups in industry and private firms have been creating microcredentials. One of the advantages, I would think, that colleges and universities would have is a reputation for certifying skills. Does the reputation of colleges and universities give us a bit of an edge in creating microcredentials compared to industry?

Jill: One would hope. However, there are examples of all sorts of industry entities out there that are offering microcredentials… think of the coding academies, which are prolific. They’re very skill-based, very specific to an industry and its needs; the employers understand what the outcome of that training is, and they’re therefore able to value it, and the employee is able to communicate it very effectively. But where I think the colleges and universities have an opportunity to really shine here is that this is where we have the experts: we have people who are very well versed and researched in their area of scholarship, and they’re able to really look at curriculum and validate it, and make sure that it is expressed in terms of college-level learning outcomes.

Nan: In addition, I think that higher ed has the opportunity to really integrate industry certifications with curriculum through the stacking process… bringing in those microcredentials from industry, or having them right within the higher ed curriculum, and then being able to roll that in and build it into the curriculum. So I can imagine, as we evolve higher education over the next decade or so, that as people graduate, they’re graduating with a college degree and they’re also graduating with microcredentials, and together those really indicate what a student knows and what a student can do, which can help the student a great deal more than a degree alone, which doesn’t really spell out the details of what somebody knows.

Rebecca: I’m curious whether there are any conversations happening with accreditation organizations about microcredentialing and how they might be involved in the conversation.

Nan: So, at this point, there are conversations happening at the accreditation level. For example, every regional accreditation agency has policy around the assessment of learning… sometimes specifically around prior learning assessment, sometimes around transfer credit… and within those policies, they’re really starting to look at how those learning pieces can come in. When it’s on the for-credit side, there really needs to be a demonstration by the institution that those microcredentials are meeting the same academic standards as the courses do. So, using the accreditation standards and making sure that all policies and procedures are of the same quality and integrity ensures that it all fits together.

Ken: I think it’s not only an opportunity for universities to develop microcredentials; I think it’s our responsibility to do so, because the idea of digital badges, for example, was popularized in the corporate sector before universities got on board, and they ran the gamut in terms of quality and value. Frankly, there are some predatory institutions that award badges that may not have much value at all to students, and yet they can be quite costly. So I think it is incumbent upon the university to create valuable microcredentials that have real academic rigor and support behind them. In addition to that, some of these institutions were also using their badge programs to undercut the value of the degree and say, “Well, you don’t actually need a college degree with all that fluff, you just need the skills training that you’ll get from a badge.” And we know that a college degree delivers far more than just a set of discrete skills: it gives better ways of seeing the fuller world, of understanding the integration of knowledge, of being able to employ social skills along with technical ability… and digital badges at the university level allow us to make those connections more visible. But it also can help us prevent attacks against the university, which are sometimes made purely from a profiteering perspective.

Jill: If we can provide some validity and some academic integrity to the smaller microcredential world, then I think higher ed, as Ken says, has a responsibility to do so.

Nan: It also shows a shift in the role of higher education, where it becomes even more important that we take the lead in helping to integrate people’s skills and their knowledge, and how that relates to work and life. In many ways, in the older higher ed, we had much more of a role of just delivering information and making sure people had information. Now I think our role has really shifted, where we need to take the leadership in the integration of knowledge and learning.

Rebecca: I’m hearing a lot of conversation focusing on skills and the lower levels of Bloom’s taxonomy, so it would be interesting to hear of examples at higher levels of thinking and working.

Ken: Well, Bloom’s taxonomy actually is a taxonomy of skills and domains of knowledge and abilities, so there are certainly skills involved with synthesis and evaluation, which are at the top of Bloom’s taxonomy. So digital badges can work with that. With digital badges… the skills can involve being able to examine a great deal of knowledge and solve specific problems in an industry, and these are the highest levels of application of knowledge and learning.

Nan: In higher ed, they’re also being looked at both at the undergraduate and graduate level, so it’s not just that entry-level piece. Again, we keep talking about licenses and certifications as a type of microcredential, and there are many out there that you cannot acquire until you have reached certain levels of knowledge and ability. I know we have focused a great deal of this conversation on being skills-based, but in industry they’re really talking about it more as competencies, and the definition of a competency is what you know and what you can do… so it’s both knowledge- and skills-based; it is not just skills-based.

Ken: In fact, one of the issues that some faculty have with microcredentials, particularly digital badges, is a sense that they focus too heavily on utilitarian skills, and not heavily enough on the larger and higher levels of learning that Rebecca is talking about. So I think Nan’s bringing in the idea of competency-based learning is really very helpful that way.

John: So, basically those skills could be at any level.
What are some of the other concerns that faculty might have that might lead to some resistance to adopting microcredentials at a given institution?

Nan: So, one of the areas that they may talk about is the concern about integrity… the academic integrity of the microcredential, or of the badge. What’s important is that each institution really look at its own process for reviewing and approving microcredentials, especially if they are on the credit side and are going to be integrated within the curriculum. They need to follow the same standards that any course would follow, and that should really help relieve that concern about academic integrity.

Ken: Yeah, in fact the SUNY microcredentials group, which all of us on this podcast are involved with, specifically points out that faculty governance has to be heavily involved in the creation of any digital badge or microcredential program. That’s the whole point of bringing the university level to this: that faculty governance, that academic input, is going to be behind every microcredential that we create. One of the other things that my faculty colleagues have had trouble with is the very name of digital badges; they think it sounds a little silly, a little juvenile. They always say, “Oh, well, this is just Boy Scouts and Girl Scouts,” and so to them it can feel a little silly. It actually doesn’t come from Boy Scouts and Girl Scouts. Digital badges come from gamification. Motivational psychologists looked at why people were willing to do so many rote tasks in an online game, even though they weren’t being paid to do so and the tasks didn’t seem very exciting on their own… and what they found is that people were willing to do that because they would earn a badge, or they would level up, or earn special privileges along the way, and that was very motivating for people. That’s where this technology really came from, and then we built more academic rigor into it. The metaphor that I like to use with my faculty colleagues was suggested to me by one of my English department colleagues, Peter Manning. He pointed out that in the medieval period in England, archers would learn different skills, and when they developed a new skill, they would be given a feather of a different color, and that feather would be put in the cap. So literally a badge is like a feather in the cap, and when you see somebody coming with 8 or 10 feathers of different colors, this is going to be a formidable adversary… just like people with a few digital badges from the SUNY system are gonna be formidable employees.

Jill: The other thing I'd like to jump in and say, too, is that with the Girl Scout and Boy Scout badging system, if you really know what the badges represent, you know that there are very stringent rules, learning outcomes, and so on involved in attaining the badge. The badge is a way of just demarcating that they attained it. The quality is inherent in the group that's setting up the equation by which you earn the badge.

John: So it’s still certifying skill.

Jill: It’s still certifying something and again the institution has the ability to determine what that something is, and to make sure that it is of quality.

John: One other thing I was thinking is that if an institution instituted a badging system, it might actually force faculty to reflect a little bit on what types of skills they're teaching in the class, and that could be an interesting part of a curriculum redesign process in a department, because we haven't always used backward design, where we thought about our learning objectives. Quite often faculty will say, "I'd like to teach a course in this because it's really interesting to me," but perhaps more focus on skills development in our regular curriculum would be a useful process in general.

Jill: I agree.

Ken: I think that's a great idea, John. We haven't used the badging system in my school that way yet, but I think it's a great idea. Honestly, there are faculty who bristle at the notion that they're teaching skills, and digital badging really strikes at the heart of that, in my view, elitist attitude about education. We do want to open up students' minds, we do want to expose them to more of the aesthetic pleasures of life, but we also want to help students improve their own lives in material ways as well, and badging can help us make visible, and strengthen, the ways in which we do that in higher education. I think we should be very proud of that.

Nan: So again, one of the reasons I like to use the word competency is because it brings the knowledge and skills together. We're talking about skills as though they are isolated from the knowledge pieces, but you can't have skill without knowledge, and to develop good knowledge, you need certain skills. So I think it's important to think about this not as two separate things, as though somehow we are all of a sudden going to be just skills-based, but rather that we're developing people's competencies to be highly educated people.

Jill: Very symbiotic, really, and I think this is also where you get at the idea of how non-credit and credit can work together, if you're thinking about them in terms of the outcomes and developing your class in that way. One of those pieces by itself might be something that's non-credit, and then if you build them all together, you get a course. Or you couple a few graduate courses together, and you get a credential that is something on the way to a graduate degree.

John: This brings us to the concept of stackable credentials, where some microcredentials are designed to be stackable, to build toward higher-level credentials.

Ken: Really, a microcredentialing system should always be stackable. That's one of the bedrocks of the whole idea. It's not required that a student go beyond one microcredential, but microcredentials should always be applicable to some larger credential of some sort. So, for example, all of the university badges at Stony Brook University stack toward a master's degree. In fact, we've tried to create what's called a constellation of badges, so that students can wind their way to a master's degree by using badges, or on their way to a master's degree they can pick particular badges to help highlight specialties among the electives that they choose. So it's a way for them to say, "Yes, I have a Master of Arts in Liberal Studies, and as part of that I have a particular specialization in financial literacy, or in teacher leadership, or an area such as that." But yes, microcredentials should always be able to stack to something larger. And if we do it right, eventually we'll have a system that works from high school into retirement, because there can be lifelong learning involved in microcredentials as well. There's always more to learn, so there should always be new microcredentials to earn.

Nan: I totally agree with Ken. If we provide different microcredentials but don't show how they stack and build a pathway, then we really have not helped our students. In many ways we have traditionally, historically, left it up to the individual to figure out how their bits and pieces of learning all fit together, and we kind of expect that they have the ability to put it all together and apply it in many different ways. I think the role that microcredentials are really playing here is helping us start to talk about these discrete pieces, and then also about how they build together and stack, which gives the person the ability to think about how it all fits into the whole. I think what microcredentialing is doing is opening up higher education to really think about how to better serve our students, and to give them those abilities to take what they know, package it in different ways, apply it in many different ways, and build toward lifelong goals, seeing how it all fits together.

Rebecca: Just to follow up a little bit: I think a lot of the examples that we see are often in tech or in business, and those are the ones that seem very concrete to many of us. But for those of you who have instituted some of these microcredentials already, how do they fit into a liberal arts context, which might not be so obvious to some folks?

Nan: There are actually quite a few examples of microcredentials and badges that are more on the liberal arts side. There have been some initiatives across the United States where different institutions have been developing what we can think of as the 21st-century skills: communication, problem solving, applying learning, being resilient. These are some of the kinds of badges that are starting to really evolve out of higher education, which brings in those different pieces of a liberal arts education and lifts them up, giving students the ability to say, "I've got some good problem-solving skills, and here are some examples, and I can show it through this badge." When we look at the research in terms of what employers need in the 21st-century employee, we're really looking at a very strong liberal arts education that is then integrated into a workplace situation. So I'm seeing a lot more badges being grown in that liberal arts arena.

Ken: Yeah, at Stony Brook University, we have a number of badges that are in the liberal arts. For example, we have a badge in diverse literatures. There may be people who wish to earn that just for personal enrichment, but it's something that might be really interesting to English teachers as well, because by earning a badge in diverse literatures, which requires a minimum of three classes in different areas, different national literatures, teachers will be able to go on to select pieces of literature more appropriate for diverse audiences. They'll be able to explore more world literatures because of the background that they've had in exploring different literatures in their classes. That's just one example, but of our roughly 30 badges, about a third are in those humanities areas. That said, I will acknowledge that they are not anywhere near as popular as the more business-oriented and professionally oriented badges, where the link to skills simply seems more obvious. With the liberal studies… the liberal arts… the humanities badges, the connection is not quite as clear, and so there's still a lot of potential there.

Jill: It's so important for the employers and for the students themselves, but I think almost most importantly the employers, to understand what that means. They have to understand that you have a microcredential or a badge in problem solving, and they have to have some kind of trust that it's truly a skill that translates to their workplace situation. That's where the online systems, where you can actually delve into what's behind the microcredential, are so important. You can really sit there, look at it, and verify the competencies and the skills that the individual has attained through earning this badge.

John: So the definition in the metadata is really important in establishing exactly what the badge certifies. Now that brings us to another question. At this point, each institution that's using badges is developing its own set of badges and competencies. Has there been any effort to get some standardization and portability across institutions, or is it too early for that? Or do you see it going in that direction at some point?

Ken: John, it certainly hasn't happened yet, but I do know that the SUNY Board of Trustees, at their last meeting, started to consider developing working groups to do just what you're saying. It's not so much to standardize what badges are, but rather to standardize reporting and to explore ways to help badge earners explain and demonstrate their badges to employers and to other schools more easily. So I know that's where the SUNY system is headed.

Nan: And if it is for credit, then it falls within transfer credit anyway. So really, if it has gone through the appropriate academic curriculum development processes, the governance processes, then it has the same rigor and therefore is very transferable through our policies on transfer. What we really need to be doing is some good work around the non-credit side, work that really helps the transfer of non-credit learning.

Jill: And one way we can do that is by reinvigorating and breathing new life into a 1973 policy that SUNY has on the books for the awarding of CEUs (continuing education units). It offers a recommendation and a process by which campuses can take non-credit curriculum and send it up through a faculty expert, and it has guidelines about how to come up with an approval process and how many CEUs could be granted for such work. So there are some skeleton pieces of how SUNY may codify that moving forward; at this point there is not a rule about how to move forward with non-credit. In fact, I think SUNY, trying to be responsive to the emergent nature of this very concept, has not tried to come in and be too prescriptive yet.

John: On the other hand, students may receive microcredentials at multiple institutions. Let's say they start at a community college, they move perhaps to Empire State, and maybe they move on to a four-year college or university. If they don't finish and get a degree, they still would have some microcredentials that they could use when they go on the market, because many of them might use Credly or some other system where they can put the credentials on their LinkedIn profile, and they still have that certification. If they just don't get the degree, it shows them as not being a degree recipient, which actually seems to hurt people in the job market, but if they can establish that they have been acquiring skills along the way, that might be helpful for students.

Nan: John, that is a really good point. In many ways, our degrees set up a system where anyone who steps out of a degree has nothing to show for it, and therefore is at a disadvantage. Microcredentials can help demonstrate their progress and the competencies that they already have, and so they can play a very important role in people's lives when students do need to step in and out of higher education.

John: So where do you see microcredentials going in the future? How do you see this evolving?

Ken: It's in such an amorphous space right now, it's hard to imagine what it's going to evolve into. A big part of what's happening now is what Nan has talked about: an attempt to put some boundaries on this and bring some common definitions to bear on the technology and the idea of a microcredential. But I think it's still going to expand. What it'll do is increase partnerships among interesting groups. In a lot of these, the universities will be at the center of the partnership, but we'll be bringing in many more student groups, industry partners, government groups, and nonprofits. I think it's going to increase the amount of communication dramatically, and that's very exciting, because for too many years universities have fulfilled that stereotype of the ivory tower, and this is really breaking that down in some very productive ways.

Nan: And when we look at it from a national perspective, looking to see where some of the direction is going with groups such as IMS Global, Connecting Credentials, and others, what we're really seeing is the prediction that every student would have a comprehensive digital student record that they would take with them. It becomes a digital portfolio, and in it would be their badges, their microcredentials, any degrees. They'd have the ability to move in many different directions, because all of that information would be there, and that digital student record would allow anybody to click in and see the metadata behind it, to know what competencies people have, how they were assessed, and what they really mean, so that there's a real description of that. The prediction is also that students would be able to transfer from institution to institution. They'd be able to stack up and build their degrees in ways that would really support them in their whole life pathway. Ken just mentioned partnerships. I think what we would see is a great deal of partnership across institutions, and between institutions and industry, that really starts to build these pathways that people can move along with their comprehensive digital student record.

Ken: Nan, can I ask you a question?

Nan: Yes.

Ken: So a few years ago, there was a lot of talk about what they termed co-curricular transcripts, which would be the kind of transcript that would include club membership, informal learning, non-credited learning. But it sounds like we may be getting beyond that in a really positive way, and that the very idea of a transcript is becoming transformed, so that those other kinds of learning will actually be transcripted in the same digital format. Am I reading that right? Do you think that's where we're going?

Nan: Yes, I do think that's where we're going, Ken. We're right at the end of a multi-year, multi-institutional project that Lumina funded, looking at these comprehensive digital student records. They go way beyond capturing things like clubs and other activities that students engage in; they're really competency-based. They start to record those competencies and the data behind the competencies, and when students are in a club, or when they're doing other kinds of activities, the competencies that they're gaining from those pieces are also being recorded. So it's not just "you are a member of a club," but "what did you really learn, and what can you do because of that?" I think we're going to see that evolving more and more over the next decade or so.

Ken: That’s great, thank you.

Jill: If I could add to the question about the role of microcredentials evolving: one of the things that I think is going to be happening, and part of why I'm so excited about microcredentials, is that I see this as making a nice connection between the non-credit side of the house at colleges and universities and the credit side. For so many years, non-credit has been connecting with, and trying to serve, business and industry in ways that really have been limited, and this really opens up the ability to connect and collaborate with credit expertise within the institution, to create those true pathways from start to finish, from the smallest first step along that pathway all the way through. That's really exciting, and I think, and I hope, that's part of this overall discussion we're having about microcredentials moving forward. In a lot of ways this is cyclical. We talked about the CEU policy of 1973. There have been these two sides of the house, as they say, as I've said a number of times today, but really we're all about education, about trying to help people learn things and apply them to their jobs and their lives, and having that connection be that much more seamless and clear. I think that's one of the most exciting things, from my seat at the table.

John: Well, thank you all for joining us.

Nan: Thank you.

John: Look forward to hearing more.

Jill: Thanks for having us.

Ken: Pleasure to be here.

Nan: Take care everybody, bye bye.

Jill: Bye, guys.

Rebecca: Thank you.