75. Concourse Syllabus Platform

Syllabi are important resources for students, faculty and institutions. Syllabi that are readily available, consistent, accessible, and up to date can provide important scaffolding for students. In this episode, Jeffrey Riman joins us to discuss a tool that can help both faculty and institutions accomplish all of those things while keeping faculty focused on learning outcomes and course design.

Jeffrey is a coordinator of the Center for Excellence in Teaching at the Fashion Institute of Technology (FIT). He’s also a consultant and educator at Parsons The New School University. Jeffrey is a chair of the State University of New York Faculty Advisory Council on Teaching and Technology at FIT, and he is also the chair of FIT’s Faculty Senate Committee on Instructional Technology.

Show Notes

Transcript

John: Syllabi are important resources for students, faculty and institutions. Syllabi that are readily available, consistent, accessible, and up to date can provide important scaffolding for students. In this episode, we’ll talk about a tool that can help both faculty and institutions accomplish all of those things while keeping faculty focused on learning outcomes and course design.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

John: Today our guest is Jeffrey Riman. Jeffrey is a coordinator of the Center for Excellence in Teaching at the Fashion Institute of Technology (FIT). He’s also a consultant and educator at Parsons The New School University. Jeffrey is a chair of the State University of New York Faculty Advisory Council on Teaching and Technology at FIT, and he is also the chair of FIT’s Faculty Senate Committee on Instructional Technology. Welcome back, Jeffrey.

Jeffrey: Nice to be back.

Rebecca: Glad to have you.

Jeffrey: [LAUGHTER] You make me sound so busy that I think I have to cancel, though. So we’ll do this another time. [LAUGHTER]

Rebecca: I think you are just as busy as it sounds from what I know.

Jeffrey: And I’m very happy to set the time aside to be with you guys. This is a great topic, so thank you.

Rebecca: Today’s teas are…

Jeffrey: Russian Caravan.

John: Yorkshire Gold.

Rebecca: …and Jasmine Green tea.

John: We’ve invited you here to talk about the use of the Concourse syllabus management platform that’s been adopted at FIT. Could you tell us a little bit about this service?

Jeffrey: Yes, thanks. The product itself—and I like to preface whenever I talk about this product, or VoiceThread for that matter—is that we’re always looking to have choices in the products that we use and it’s really always a needs-based issue. What do you want to do and how are you going to get it done? In the case of Concourse, which is a product made by a company called Intellidemia, we knew that we had some challenges with the use of syllabi at FIT and we wanted to find a way to manage them. In most cases, what we found were products that were distributing PDF files or text files, but they weren’t truly interactive living syllabi, and let me explain what that means. First of all, like many colleges we have a very high percentage of part-time faculty, which means that many of them are only in the college to teach their courses, which means that they are not necessarily exposed to the curricular process or all of the syllabi that represent the curriculum they’re teaching. These syllabi tend to be handed down from semester to semester, in some cases, generation to generation. And, as a result, we were finding that many syllabi either had no learning outcomes or the course description was out of date. We’re all teaching in a technology-driven world and a lot of what we teach at FIT is really based on practice, so products change, textbooks change very frequently. This is not unusual for any college but at FIT, because we have about 70% part-timers, it’s a bigger challenge. So in doing a search we found Concourse and what Concourse allows us to do are a few things. First of all, it allows us to synchronize how all syllabi are formatted so students going to each class where a Concourse syllabus appears are seeing a very similar appearance and the sequence is very similar. Now I want to stress that it is editable, it allows for full academic freedom, except in certain areas where the college—and I’m talking about the Faculty Senate Curriculum Committee—and the faculty as a whole feel certain things should not be edited. For example, the course description and the learning outcomes that the course is predicated upon. Those should be uniform, so if all three of us are taking the same course from different instructors, we should not have different outcomes. And many faculty were taking it upon themselves—with all sincerity—to amend the outcomes to better fit their practice or to better fit the way they think the course should be taught. And this, by the way, was not just the part-timers, but the full-timers too. In one department, I was pelted with tomatoes when they found out that they were no longer able to edit the outcomes. [LAUGHTER] However, we need to put the students front and center in this situation. They sign up for a course and their friends are taking the same course; they should have certain unifying elements. So there is the synchronization of format: the format is fully digital but you can make a very nice looking PDF. It is completely compliant for screen readers in terms of font size and color contrast, and they have a VPAT that is easy to access so you can see basically their compliance levels, which have improved. When you look at it, it’s not like a beautiful piece of graphic design, but it has a pleasing appearance. So many syllabi are in Times Roman, and people are editing Word documents when they don’t know how to manage the formatting. The formatted syllabi allow you to input all the things you’re allowed to input, and let me give you examples of what faculty will input.
They’ll input their own unique course policies. They’ll input their absence policy. At FIT we do not regulate the grading scale from course to course, so there are some people who have an A that starts at 94 and others have an A that starts at 95. Those things are permitted. In addition, you can put down the materials needed for the course, the office hours, and really your whole calendar of how you have the course unfold. So if all three of us, using this scenario, are taking the same course in different sections, the syllabi can be quite different with respect to how each professor teaches to their strengths, but the pillars that support that course and the way assessment is achieved are still unified. So how do we do this? First of all, like many colleges in the SUNY network I can speak to, we use Banner. And Banner feeds will synchronize through the Concourse product. So we actually input all of our learning outcomes and all of our course descriptions into feeds that are updated on a daily basis. I’ll explain how the feeds work a little bit more in a minute. But what that means is that everybody in each course is getting the same thing. Now that is on a course level. When you look at an institution-wide level, there are many policies and services that are available to everybody in the college, and we found that many syllabi either were out of date or did not even have them. So here are examples of things that will go into what we call school policies and resources. Where is the center that helps students with special needs? Where is the learning center? The tutoring center? Advisement? Counseling? Where is the policy on academic integrity? All of these things are put into a separate feed that’s updated each semester so that they represent the current state of affairs. We’re never more than 14 weeks out of date.
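
To make the feed arrangement Jeffrey describes a little more concrete, here is a minimal sketch of how a daily course feed (with locked descriptions and outcomes) and a once-a-semester institutional policies feed might be merged into a syllabus template. The field names, file layout, and helper functions are illustrative assumptions, not Concourse’s or Banner’s actual formats.

```python
"""Illustrative sketch of a daily "course feed" plus a per-semester
"institutional policies" feed. Field names and layout are assumptions
for illustration only, not Concourse's or Banner's real formats."""

import csv
from datetime import date

def load_course_feed(path):
    """Return {course_id: locked curriculum fields} from a hypothetical CSV export."""
    courses = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            courses[row["course_id"]] = {
                "description": row["description"],                # locked: not editable by faculty
                "learning_outcomes": row["outcomes"].split("|"),  # locked: set by the curriculum committee
            }
    return courses

# Institution-wide policies and resources refresh once per semester.
SEMESTER_POLICIES = {
    "academic_integrity": "Link to the current academic integrity policy",
    "disability_services": "Location and contact for accommodation services",
    "tutoring_center": "Hours and location of the tutoring center",
    "last_updated": date.today().isoformat(),
}

def build_syllabus_template(course_id, courses):
    """Merge locked course fields, semester policies, and empty faculty-owned fields."""
    return {
        **courses[course_id],
        "institutional_policies": SEMESTER_POLICIES,
        # Everything below is left for the instructor to fill in freely.
        "grading_scale": None,
        "absence_policy": None,
        "weekly_calendar": [],
    }
```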

Rebecca: From a student point of view, I can imagine how useful it would be to have this consistency that you’re describing from class to class, so that you know that the information that you get is accurate, but also where in the document it is; the heading structures and things look familiar so it’s easy to skim and look at it. I don’t think faculty always stop to think about the fact that a student might have five classes, and for each class it’s almost like having five different employers, where each place has its own set of policies and like, “This information is located here, and that information is located there.” So I can imagine for consistency purposes how useful that could actually be for a student.

Jeffrey: It does help the students and it helps the students and the faculty. And I’ll give you two examples. If there is a concern about plagiarism, both the professor and the student can go to the syllabi and click on the latest policy and procedure for dealing with plagiarism. Because we have different schools of thought on plagiarism, right? Some people think they should be tarred, feathered, and you know, left outside on a cold day. And other people treat it more like it’s a learning process. The school has very, very clear prescriptives on how to deal with plagiarism and when everybody’s using the same tool, it means that the students are informed of their rights and their ability to defend themselves and to deal with the issue. And faculty are prescribed a step-by-step process as well. That’s a great unification of process. It also is reassuring to a student who doesn’t have to go looking for that information. Another example is the academic integrity policies in general are very prescriptive to the students and the faculty alike in terms of best practices when it comes to the proper attribution of content and also the rights of use. And actually we have a significant issue with visual plagiarism.

Rebecca: Yup.

Jeffrey: … I knew you’d say “yup” to that one. [LAUGHTER] And so, this works. So without beating it to death too much, in simple terms, what we do is this: if you visualize your college, your college usually has several schools, and within those schools are departments, and within those departments are the courses. Taken as a whole—for instance, FIT runs around 2100 sections per semester—we have five schools and many departments within. The way we use it is that, from the highest level, course policies and resources are shared throughout the school. Course descriptions and learning outcomes are curriculum specific. And then the rest of it is left up to the faculty. The first time they create a syllabus, it could take them a good hour and a half because you need to input each thing individually and on a week-by-week basis. Building your calendar the very first time will take more time. But then once you’ve done it, it is easily transportable from section to section, semester to semester. So the import process is not unlike importing a Blackboard course—well, with Blackboard you push from the old to the new, and with Concourse you pull—so you go into the new course. So let me talk a little bit about the integration. Not that anybody’s stopping me here. [LAUGHTER] We are a Blackboard school, at least at the present time. We are fully integrated with Blackboard. We use a product called API Adapter which helps to manage the connection between Concourse and our Banner system. And it’s very simple, it’s just like middleware, it’s open-source, it’s easily used. On a larger scale there might be a fee but for our size college it’s not really a big deal. So, as the courses are run into Blackboard, and we do run three times a day early in the semester, there are several things that happen in every school that’s using an LMS. The teacher assignments are updated, the course sections are updated, and course shells are generated. And Concourse will run right behind those runs, so that if John is teaching a course and he has three sections and one section was cancelled, that update will be evident in Blackboard and you will not be able to create a syllabus in the course that was cancelled. Or conversely, if a time has changed, that’s updated too. All of this goes through Banner. So if you just basically think of it as a one-two kick, we do our feeds for courses through Blackboard and then syllabi into Blackboard. Every single course that is a credit-bearing course has a syllabus template associated with it. Now let’s just say it’s an old course and it doesn’t have outcomes: Concourse understands when there’s something missing and will allow you to edit it. So if for some reason you got a course that didn’t have outcomes at all—and in the early days we did—we have some courses we’ve been teaching here for 45 years, like shoemaking. [LAUGHTER] The outcomes are not that different now than then, except maybe now you’re using a 3D printer to make some of these pieces and back then it was all nails and leather and hands, you know? So, nonetheless, when something is missing, the door is open to paste it in. Now, they’re supposed to notify us when things are missing and we do get notified. Now the faculty who use it like it, but it is an adoption process. One of the challenges is most part-time faculty are not notified they’re teaching a course until anywhere from three weeks or a month to 48 hours beforehand. And so a lot of times they have to use what they have and that means that the Concourse syllabus is less likely to get used.
Now, I’ve explained to you how we use it, and before we move on, maybe I should give you guys a little oxygen to ask me questions.
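
The ordering Jeffrey describes, where Banner updates flow into Blackboard several times a day and the syllabus platform runs right behind those runs, could be sketched roughly as follows. The function names and data shapes are assumptions made for illustration; they are not the actual API Adapter or Intellidemia implementation.

```python
"""Rough sketch of the sync order described above: SIS (Banner) feeds update
the LMS first, then the syllabus platform runs "right behind" so cancelled or
re-timed sections are already reflected. All names here are hypothetical."""

def run_sis_to_lms_sync(sections):
    """Step 1: update teacher assignments, section times, and course shells
    in the LMS from the SIS feed (runs several times a day early in the term)."""
    active = {s["crn"]: s for s in sections if s["status"] != "cancelled"}
    return active  # cancelled sections simply drop out of the active set

def run_syllabus_sync(active_sections, syllabus_templates):
    """Step 2: create or retire syllabus templates so they always mirror the LMS.
    A cancelled section's template disappears; a new section gets one."""
    for crn in list(syllabus_templates):
        if crn not in active_sections:
            del syllabus_templates[crn]          # section cancelled -> no syllabus
    for crn, section in active_sections.items():
        syllabus_templates.setdefault(crn, {"header": section, "body": {}})
    return syllabus_templates

if __name__ == "__main__":
    sections = [
        {"crn": "12345", "status": "active", "meets": "Mon 9:00"},
        {"crn": "67890", "status": "cancelled", "meets": "Tue 9:00"},
    ]
    templates = {"67890": {"header": {}, "body": {}}}
    active = run_sis_to_lms_sync(sections)
    print(run_syllabus_sync(active, templates))  # only CRN 12345 keeps a template
```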

John: One question is, can students access this outside of Blackboard or can they only do it within their course?

Jeffrey: Okay, so the answer is yes. When you create a syllabus there is a link that you can get that’s called a public link, and that link can be shared just like a Google Doc that is view only. So they can use that link. They can print a syllabus, it makes a very nice looking PDF file if you generate a PDF file from it.

John: And is that persistent? If someone, say, wants to transfer a course to another institution, can they go back and use that same link to get a copy?

Jeffrey: To another institution?

John: Well, for example, I’ll have a student who two or three years ago took an online class here at Oswego from me, and then they want a copy of that because they’re transferring from one school to another. Is there a persistent link?

Jeffrey: Yeah they’re better off making a PDF because the students only access Concourse through either public links that are deliberately shared by the faculty or through Blackboard and we close our Blackboard courses about a month after the end of the semester. So they would not have access to their course. And the link, I don’t think the link would work, but I’ve never tried it. You’ve given me something to add to my list. [LAUGHTER]

John: I was just thinking if they could get it, it could save faculty a lot of work because I keep getting emails from past students who want a copy of the syllabus.

Jeffrey: Wow, I’ve never had that happen. But you know, it’s an interesting thought. I will definitely look into that.

John: Well it’s partly because I have 340 to 420 students in my large class every fall and some of them transfer or some of them were online students who are doing it for some other institution.

Jeffrey: And at Parsons it would take me nine years to have that many. [LAUGHTER]

John: Although that’s just one class but…

Jeffrey: Okay, okay… [LAUGHTER]

John: I still have several other…

Rebecca: He always wins that one. [LAUGHTER]

John: I’m not sure that’s winning. [LAUGHTER]
In a recent podcast, Christine Harrington talked about her book on creating syllabi, and one of the things she noted is that she runs syllabus workshops that are really professional development workshops, and Rebecca does the same thing here: building a syllabus can be really useful in terms of guiding faculty towards backwards design or better instructional practices. Is that being used to some extent, or have you seen it being used in that way at FIT?

Jeffrey: Personally, when I work with faculty for the first time, that’s exactly what I talk to them about: the process of building syllabi from different sources, meaning how you have historically taught the course, what your unique policies are, but the normal natural constraint of abiding by what the curriculum committee approved. So, in some ways you’re constrained, which means you need to understand, analyze, and incorporate the way the curriculum was designed. And then at the same time, we can really engage them in what is the pedagogy that they’re going to use in their course? When they build their calendars we talk about the different types of activities and assignments that they can hypothetically project by doing their calendar ahead of time. And a lot of times, they just haven’t had the time to really explore what are other ways that I can assess my students’ progress? So the syllabus process allows you to really re-examine everything, kind of like when you clean out a closet, you put things back one at a time. I like that metaphor. You know, that’s a good one. But that’s what building a syllabus is, and this product kind of requires that you do that. I can tell you there are other scenarios, though, that are being used. For example, some departments have course coordinators who coordinate all the faculty teaching that curriculum. And so the course coordinator will meet with the faculty as a whole and share their syllabi so that the other teachers can actually, if you will, harvest intelligence from them. And because they have a course coordinator, they can touch base as needed on what works, what doesn’t work, and what they need. That’s a side benefit.

John: What are some of the other features that you don’t use, and why not?

Jeffrey: One of the things that it is capable of doing is allowing a department chair or even a dean to audit all the syllabi, to view all the syllabi, so there’s a management function there that allows you to, in effect, take a look at what the syllabi that are going to the students look like. We don’t do that at FIT. Some departments request or require that the syllabus be submitted each semester by a certain date and others just coordinate and follow up with the faculty without that formality. So the Concourse product allows you to audit and manage as much as is either permitted or accepted within your culture. At FIT, faculty really like to hold their syllabi close, and they are not all comfortable with that. This reminds me of the argument at MIT when they went to, you know, opening all their syllabi: there were still professors who would not permit that, and that’s the same thing here. There are some people who said, “Everybody can look at my syllabus,” and others say, “No,” and no is no. So the college does not take a very strong top-down position on that.

John: In my department, at least, the secretary collects all the syllabi for review or potential review. I don’t think anyone’s really reviewing them except when we go through Middle States accreditation and so forth, and then that whole portfolio goes to the evaluators. But occasionally we’ve had a syllabus study on campus where people have gone through and evaluated them, but that was generally based on voluntary submissions.

Jeffrey: Just to give you another perspective, at The New School you’re required to submit your syllabi for every section you teach, named in a very specific fashion, and the implied consequence of not submitting them is that it could interfere with your reappointment. So they feel very strongly that it’s very important. And I agree with you, John. They probably don’t read every one. But if there’s a problem, they have every one and they can look at it. And I bet you they do look at areas they’re concerned about.

Rebecca: I was going to say, when you were talking about some of the capabilities of the system, I was thinking that having some of these structured ways of putting policies and things in the documents, and having a repository where they’re all located, prevents the “ask mom versus ask dad” scenario, where students are trying to find the answer they’re looking for, so they’ll just keep asking people until someone gives them the answer that they want to hear. But if it’s really formalized, and that process is reinforced, and the policies are reinforced and consistent, then they’re always going to get the same answer no matter who they ask.

Jeffrey: And I’ve had faculty thank me as if I created the product because this takes some of the guesswork out of what they need to do. And I think, realistically speaking, especially as we all look to bring in learners from different stages of their lives and different points in their careers, whether they be right out of high school or they’re coming back for additional learning, that tools like this permit a more consistent product for whoever is coming in from anywhere and it kind of helps support what I’m going to call the shared governance of a faculty that generates curriculum, which is the lifeblood of the college. And that protection is really, really valuable. So many people work so hard to make sure that these courses maintain their relevance and at FIT we’re opening up new degree programs and closing old. And so as we continue to build toward the near future, this product becomes more and more valuable. But I will say on the other side, it’s an organic process and it takes time.

John: What went wrong along the way? What things might you do differently if you were to implement it now?

Jeffrey: First of all, typically a lot of people who are instructional designers would be involved in training people to use a product like this and showing its value within a course design. But the upfront implementation process in a Banner school really requires that you have somebody who understands how feeds work, how to generate feeds, and how to test feeds. The folks at Intellidemia—they refer to themselves as syllabus geeks—will provide an implementation manual, but that manual is best read first by IT. To be perfectly candid, you need that upfront integration to be rock solid in order for everything else to work. In my case, we came in early on the product and their implementation strategy was not 100% clear. And I want to emphasize that my experiences were more challenging than those of many people who are newer customers, but you do need to have IT support and engagement with it. And I recommend doing a pilot with just a small number of courses so you see how it plays out. Initially we did previews. I did a pilot with 14 faculty. They all loved it to bits, but they all had trouble convincing other people to try it. And the organic process of growth has begun to speed up now, but the initial sell was difficult and you can’t push. You have to kind of show value, and “If you build it, they will come” is not always true. We still have resistance in some pockets of the college, and our Academic Affairs Office has been very reticent to do a top-down push on this. However, I will tell you that one of our Business and Technology departments has a very high adoption rate, about 80% of the syllabi in that department—and that department has 1,200 students; it’s bigger than some community colleges, just that department. When they had an accreditation review by an organization that works directly with merchandising and marketing colleges and courses, the reviewers cited this product as being an integral part of the success of the department: the way they are coordinating courses and making sure that everybody has that same syllabus tool at their disposal and implemented. And so they were congratulated for what they did, to the point where I know they’ve encouraged others to do the same.
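
Since Jeffrey stresses that the up-front work in a Banner school is largely about generating and testing feeds, a pre-pilot feed check might look something like the sketch below. The required fields and the idea of flagging courses with missing learning outcomes are assumptions drawn from the conversation, not a documented Intellidemia procedure.

```python
"""Hypothetical pre-pilot feed test: confirm each row has the fields the
syllabus platform expects and flag older courses with no learning outcomes,
so faculty can paste them in later (as described above)."""

REQUIRED_FIELDS = ("course_id", "crn", "title", "description", "outcomes")

def test_feed(rows):
    """Return (errors, warnings) for a list of dict rows from a course feed."""
    errors, warnings = [], []
    for i, row in enumerate(rows, start=1):
        missing = [f for f in REQUIRED_FIELDS if f not in row]
        if missing:
            errors.append(f"row {i}: missing field(s) {missing}")
            continue
        if not row["outcomes"].strip():
            # Not fatal: the platform leaves the door open to add outcomes later.
            warnings.append(f"row {i} ({row['course_id']}): no learning outcomes")
    return errors, warnings

sample = [
    {"course_id": "FD101", "crn": "10001", "title": "Shoemaking I",
     "description": "Intro studio course.", "outcomes": ""},
    {"course_id": "FD102", "crn": "10002", "title": "Shoemaking II"},
]
errs, warns = test_feed(sample)
print(errs)   # second row is missing fields
print(warns)  # first row has no outcomes yet
```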

Rebecca: I can imagine that some faculty pushback would come from the assumption that if most of the structure is there and you can only edit some of the language… Some of the ideas that Christine Harrington brings out in the motivational syllabus, writing from a particular point of view, being warm and welcoming, and not like: “Don’t do this, don’t do that,” kind of language could get lost if you’re not the writer of all of the content or have the ability to do some of those things. Can you address that a little bit?

Jeffrey: Yes, and actually, I’m going to tell you that your interpretation right now is far more constrained than the reality of what we do. When you open up your course for the first time and you see the syllabus template, here’s what you will see that’s generated by the college, let’s say, or by the system. The header, which has your course number, your CRN, the meeting date, the meeting time, and the meeting location, is all managed through Banner. Then your name will appear as it appears in the system. So that’s always manageable, if somebody’s unhappy about something or they’ve hyphenated or, you know, all these things that happen. And then the next thing they see is the course description as approved and the learning outcomes. The only other part of the syllabus that is constrained is the institution’s policies and resources. The faculty have complete freedom, without any approval process, to then add to the syllabus everything about their absence policies and their philosophy on teaching, and their calendar is completely written in their own voice. So it’s really only those three things: the institution’s policies and resources, the learning outcomes, and the course description. Everything else is up for grabs and is used very, very differently by different people.

Rebecca: I can imagine. I kind of had the idea that that was probably the case, but I think a lot of times when we hear a system that’s going to manage these things, red flags come up, and that’s what the assumption is and that can prevent adoption. So thanks for making that more clear.

John: Pretty much all colleges have fixed statements that have to be included in syllabi. We certainly have them. But I suspect if we looked at all the courses out there, we’d see some were five years old, some were 20 years old, and some might, perhaps, have never included statements on disability access at all, and so on.

Jeffrey: And, you know, as I’m sure Christine Harrington has stressed also, many teachers are not trained to be educators. They are practitioners, they are people who have been out in the world, they’re bringing their world experience in, and then they’re being asked to follow the structures of an educational institution. And so we’re actually, by doing this, providing them with, let’s call it, the core skeletal needs that every syllabus should have. And let’s be candid, you know, many people take more than one course or one semester to improve their practice, so the better we equip them upfront, the better start they get.

Rebecca: Seems only fair that we scaffold for faculty like we try to scaffold for our students.

Jeffrey: Yeah, we don’t talk about that enough, do we? I remember the first time I taught as a part-timer. You know, I felt like I had a great experience, but I went in and was talking to a bunch of 20-year-olds. The only 20-year-old I’d been talking to lately was my own daughter, and that’s not such a good conversation all the time. [LAUGHTER] So with your students, you learn over time that when you’re working with employees versus students, there are similarities, but there are far greater differences, especially when it comes to motivation, risk taking, quality of work, and so on. So we need the freedom to learn and to grow and to help each other. It’s a kumbaya moment we just had.

Rebecca: Yeah, that feels very supportive and loving. How have students responded in general? I don’t think we’ve really addressed that.

Jeffrey: The feedback I’ve heard is that the student reaction has been positive. Ironically, some faculty were thinking it was not advantageous for students to see the same format in different classes, but students actually recognize it makes it a lot easier for them to navigate. I’m going to make an analogy that I think works. I’m making a generalized statement here: most faculty who use rubrics have far fewer problems with their students in terms of their perception of assessment, because the students know that the entire class is being graded with the same tool. And I think that when they know that all of their colleagues—or their classmates, if you will—are getting a product that is also regulated in the fundamental basics, that tells them that their teacher and their friend’s teacher are dealing with the same basic toolset before they go in and exercise their freedom as educators. Does that make sense? Is that a good analogy?

John: It does.

Rebecca: Yeah, I think that’s a good way to look at it. It also gives the perception that the faculty… that they’re all at the same level. Or they all have expertise and they’re all to be respected, which can be really helpful.

Jeffrey: Students give great feedback, especially when they’re asked. I don’t want to digress into it too much, but even in, like, an open pedagogy situation where students are really generating content, it can be amazing how insightful they can be as to the benefits. Maybe we don’t always give them enough credit, but in my interactions with students about the product, they’ve felt it shouldn’t be a big deal, that this should be a no-brainer. To them it’s kind of like, “Why is anybody not using it?” You know, and I don’t want the Intellidemia people to be too happy.

John: Because there’s always room for improvement.

Jeffrey: I want them to worry. I want them to worry, and I want other people to make a product that competes with them too, because we shouldn’t just have a singular product that functions at this level. However, the amount of work that Intellidemia has done to make the plumbing work is, I think, truly impressive. And I’ve mentioned VoiceThread before, and you know the three of us talked about VoiceThread some time ago. VoiceThread continues to improve their product in ways that are very impressive, including most recently automatic captioning. And with the syllabus product, a lot of what they’re doing in terms of improvement is related to the simplicity of setting it up, the appearance of the product, the compliance of the product, and also—and this was a weak point for the product for some time—reporting. In the early days, I could not get a report that would tell me exactly how many syllabi had been opened. It took a long time before they were able to do that, not because they disagreed with my request or anybody else’s, but because they were really working on their back-end systems to be as flexible as possible so they can continue to add on. So although we do not use what I’m going to call the management tools, they continue to improve those as well, so that if each of you were chairs of a department, you would be able to get an instant picture of how many syllabi are out there and you would be able to view them. I know there are different points of view about that. But I think there’s a difference between looking at something and playing an editorial role in its creation. I think that people who overstep on the administration side and start telling faculty what the verbiage should be, or what the emphasis should be, are treading in very dangerous territory. And that’s true whether you’re using Microsoft Office, or Acrobat, or Intellidemia, right? It’s really a principle, it’s not unique to any one product.

John: I do get reminders for sending in one of my syllabi every Spring because I have the students develop some of the syllabus on the first day of class. So, I just have to remind the secretary for my department that the syllabus will be coming as soon as we have a chance to finish putting it together. But again, that’s not unique to this, because even without the system, secretaries can be monitoring to make sure everyone has submitted their syllabus.

Jeffrey: But you know, there’s a good scenario there, John, where even with this product—and, by the way, I do this in my class—I have a conversation with them about how late is late, and how many absences have an impact, and what does an A mean and a B and so on. So even though they may not be able to alter exactly what we’re doing in terms of what’s required, they do get to change the verbiage on some of this stuff to fit what’s real. And so if you are using Intellidemia, you make those edits either in the classroom or that night. And that’s the way everybody will see it the next time they go in.

John: Right.

Jeffrey: You’re working with a live link. One other thing: some faculty actually use the syllabi as a living document of what the assignments are. In other words, they update the calendar daily to represent what’s due each and every week.

Rebecca: I do that. [LAUGHTER]

Jeffrey: Yeah so there you go, you would love this for that reason then. So students know that they must go into Blackboard to view their syllabi or use the link that they’ve gotten to view it and not to depend on a static document.

John: We always end the podcast with the question, what are you doing next?

Jeffrey: As far as this topic is concerned, we’re beginning to work with the Office of Academic Affairs, which we report up to, on having more workshops on the creation, editing, and strengthening of syllabi, and we’re using that as a unifying message about actual course design as well. So if somebody is projecting how their syllabi will impact their students and then they link that, or align it, with the course design, it makes for a much more powerful subject, as opposed to feeling that they’re related but not connected. And we’re trying to connect them, so that’s really what it is—it’s much more holistic than it is anything else. I will say this: anybody is welcome to contact me if they want more information, and I would then be willing to share an example of the syllabi with them. So, there you go.

Rebecca: Well, thanks so much for joining us, Jeffrey. It’s always a pleasure.

Jeffrey: It is a pleasure. It’s nice to see you both, and let’s keep that tea going. [LAUGHTER]

John: Yeah, mine is empty.

Jeffrey: I just have to say that I think that you guys do a great job. The series is so relevant, and I’m doing a commercial now for you but it’s from the heart. You guys are really performing a great service and I just encourage anybody who’s listening for the first time to go back and look at the incredible archive of content there that is all relevant and frankly, none of it is older than what, about 18 months John?

John: I think so. I think our first significant podcast was November of 2017, the first week of November.

Jeffrey: That’s right. I have found that it really enhances not only my teaching process, but it also helps me in terms of my work I do as a faculty developer. So thank you both for that. It’s really great.

John: It’s been a lot of fun.

Rebecca: Yeah, thank you for such kind words. And if you want that full list, we do have a page for that now. It’s teaforteaching.com/episodes.

John: Or just go to teaforteaching.com and click on… I think it’s episodes at the top so you don’t have to scroll through six or seven pages of descriptions now. Well, thank you again Jeffrey.

Jeffrey: My pleasure. I look forward to seeing you guys again soon. Take care.

[Music]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

John: Editing assistance provided by Kim Fisher, Chris Wallace, Kelly Knight, Joseph Bandru, Jacob Alverson, Brittany Jones, and Gabriella Perez.

65. Retrieval Practice

Retrieval practice has consistently been shown to be important in developing long-term recall. Many students, however, resist the use of this practice. In this episode, Dr. Michelle Miller joins us to discuss methods of overcoming this resistance and examine how retrieval practice may be productively used to increase student learning.

Michelle is the director of the First-Year Learning Initiative, Professor of Psychological Sciences, and President’s Distinguished Teaching Fellow at Northern Arizona University. Her academic background is in cognitive psychology and her research interests include memory, attention, and student success in the early college career. She co-created the First-Year Learning Initiative at Northern Arizona University and is active in course redesign, serving as a redesign scholar for the National Center for Academic Transformation. She’s the author of Minds Online: Teaching Effectively with Technology and has written about evidence-based pedagogy in scholarly as well as general-interest publications.

Show Notes

  • Miller, M. (2014). Minds Online: Teaching Effectively With Technology. Cambridge, MA: Harvard University Press.
  • Roediger III, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249-255.
  • Roediger III, H. L., & Karpicke, J. D. (2006). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1(3), 181-210.
  • Karpicke, J. D., & Roediger, H. L. (2008). The critical importance of retrieval for learning. Science, 319(5865), 966-968.
  • Karpicke, J. D., & Blunt, J. R. (2011). Retrieval practice produces more learning than elaborative studying with concept mapping. Science, 1199327.
  • Kahoot
  • Bray, N. (2018). 43 to 0: How One University Instructor Eliminated Failure Using Gamified Learning. [Blog post]
  • Retrievalpractice.org
  • Lang, J. M. (2016). Small teaching: Everyday lessons from the science of learning. John Wiley & Sons.
  • Pennebaker, J. W., Gosling, S. D., & Ferrell, J. D. (2013). Daily online testing in large classes: Boosting college performance while reducing achievement gaps. PloS one, 8(11), e79774.
  • Pauk, W. (1984). The new SQ4R.
  • Thomas, E. L., & Robinson, H. A. (1972). Improving reading in every class. (a discussion of PQ4R)

Transcript

John: Retrieval practice has consistently been shown to be important in developing long-term recall. Many students, however, resist the use of this practice. In this episode, we discuss methods of overcoming this resistance and examine how retrieval practice may be productively used to increase student learning.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

John: Today we’re welcoming back Dr. Michelle Miller. Michelle is the director of the First-Year Learning Initiative, Professor of Psychological Sciences, and President’s Distinguished Teaching Fellow at Northern Arizona University. Dr. Miller’s academic background is in cognitive psychology. Her research interests include memory, attention, and student success in the early college career. She co-created the First-Year Learning Initiative at Northern Arizona University and is active in course redesign, serving as a redesign scholar for the National Center for Academic Transformation. She’s the author of Minds Online: Teaching Effectively with Technology and has written about evidence-based pedagogy in scholarly as well as general-interest publications. Welcome back. Michelle.

Michelle: Thank you so much. It’s so great to be here today.

Rebecca: We’re so happy to have you again. Today’s teas are:

Michelle: Well, I’m drinking Coco Loco, which is a blend from a local tea shop here in Flagstaff, Arizona. Steep Leaf Tea. And Coco Loco is a lot like what it sounds like. It’s chocolate and banana. So tea snobs may scoff at my choice, but it’s wonderful.

John: It sounds good.

Rebecca: And I think I saw a nice silver teapot that was poured into a green and blue tea mug.

Michelle: Yup.

Rebecca: A nice tall one.

Michelle: That’s what I need.

John: And I’m drinking ginger, peach green tea.

Rebecca: I went with a Christmas tea today. So Michelle, we invited you here today to talk a little bit about retrieval practice. Can you first start with defining that for us and letting us know what it is?

Michelle: Right. So retrieval practice is essentially the act of pulling something out of memory. So that is, in memory research, what we would term retrieval. So something is stored in memory and we want to pull it out so we can actively use that information, have it in our conscious minds and so forth. And so we go through this usually very fast process called retrieval. So retrieval practice is specifically the act of doing it, and we contextualize that with learning. So, when I’m trying to learn something or I’m in the process of learning something, to say, “Oh, what was that fact I remembered, or what can I say about this?” When we do that, it produces something that we can call the testing effect. So this is kind of the clearest example… not the only… but the clearest example of retrieval practice in action during learning is when we sit down to take a quiz, take a test, or something like that. So all the excitement that’s happened around retrieval practice in higher education, and really in the rest of education today, is around this finding, which has been replicated many, many times: that tests are, as one person put it, not neutral events in learning. When we take a test on something, that has a very powerful effect on our ability to remember it in the future. So, really simplified down to its core, tests help us remember in the future; when we take a test it strengthens our memory. So that’s what retrieval practice is and it can, as we’ll maybe talk about today, take many, many forms in learning settings. And I did want to clarify too, this is something that I definitely don’t want to take any kind of credit for discovering this. This has been around and has been known about for a long, long time. Some of the big names who are associated with this: Jeffrey Karpicke, Robert Bjork, Roddy Roediger… there are quite a few really heavy-hitting cognitive scientists and cognitive psychologists who have established this. But there are many, many of us who are out there trying to disseminate this to other teachers around the world so that we can all tap into the power of this. And I have done a little bit of work in this area with my colleague here at Northern Arizona University, Laurie Dixon, who’s another psychologist… and we teamed up some time ago to look at a very practical implementation of retrieval practice in an Introduction to Psychology course that we conducted some years ago and this is a course, you can imagine, where just trying to get students to perform even a little bit better is a big project. So we examined how even something kind of basic… it was very high tech at the time… but just basic web quizzes that came packaged with the textbook. We said, “Well, if we actually assigned students to do these as part of the course, and if they went through and treated these as opportunities to learn, not just assessments in and of themselves, would that have any systematic impact on course performance?” And we found that in fact, there was a significant improvement associated with that. So that’s kind of the landscape of what retrieval practice is, and why we’ve been so interested in discussing this in the psychology of teaching and learning.

John: In fact, I saw you present on that about 11 or 12 years ago in Orlando at one of the NCAT conferences and it convinced me to completely revise how I was giving my classes and it’s made a big difference and resulted in some significant improvements in student learning. Given that we know so much about retrieval practice. Why are faculty resistant to doing this?

Michelle: Wow, that’s a great question, and that is one that I have been really facing a lot these days in my own practice, talking to other faculty members, and disseminating this through some different activities I do in this area. It is easy for those of us who work in this area, cognitive psychologists in particular, but a lot of us who, like you, heard about this a long time ago. We’ve seen the power of it. We forget that to other faculty, this can be a very off-putting concept. And so it’s really great for us to think about why that is. And I always think that’s really good for me to kind of go back to that and say, “Yeah, not everybody is sold on this idea. And there are good reasons for that.” So like with a lot of things that we talk about in teaching and learning, I think that these really break down into two neat categories: there’s the philosophical issues that people have with it, and then there’s more practical and logistical issues with it. So kind of tackling those one at a time. Philosophically, when people say, “Yeah, I understand about the research, but this just goes against something that I believe as a teacher or how I want my classes to be,” here are some ways that can play out. First off is this idea, that I’ve heard in one form or another quite a few times, and that is this notion of superficial learning. So, “Okay, sure, there’s a study that showed that maybe people retain something a little bit better. But surely that’s not this deep learning, whatever that is.” That’s a concept that we all want. So do tests and exams… just testing… create just a superficial form of learning? And while, of course, I understand that, and I absolutely applaud faculty for really thinking deeply about that issue and caring about it, here’s the thing… we’ve got to define that. We social scientists, that’s what we do… we have to kind of break things down and say, “Okay, what does deep learning mean?” and I don’t know that anybody has kind of definitively done that. But when I look at that, I say, “Well, this is not just one research study that showed a little improvement in a lab test. There’s quite a few studies that do use realistic types of materials. It’s not all just contrived laboratory studies. Furthermore, there are also studies that show that when students engage in more quizzing and testing on material, they actually are able to transfer that learning better.” And that is a very, very big deal in teaching and learning; as a lot of us know, it’s not just getting students to be able to solve a problem in one context or work a concept in one context, but can they do it in the next circumstance? And that very difficult process is aided by quizzing… and to me, what could be deeper learning than learning that transfers? So that’s part of it. Some of it is perceptions around multiple choice quizzes and tests. There’s an assumption too that if we’re talking about quizzing we must be talking about multiple choice questions… and first off, sometimes in larger classes, those are the final assessments… in that Introduction to Psychology course that we studied years back, that’s what the assessments were… so, having students practice in that format… I don’t think that we should necessarily dismiss that. And as we can talk about in a little bit, there’s lots of ways to induce retrieval practice that actually don’t involve multiple choice questions. So, there’s a bit of that as well. And something that I’ve talked about with some faculty recently, too, is this baggage around K through 12.
And maybe that’s something that’s resonant with you all.

John: Yeah, that’s given the testing effect a somewhat bad name, because high-stakes testing is being used in a lot of what’s going on with K to 12. But that I don’t think is what retrieval practice as you’re suggesting is all about.

Michelle: Right, and I have to be cautious here. I really like how you laid that issue out in K through 12, that there is a reputation problem… and that has happened because of the high-stakes standardized testing policy in the United States. And I’ve got to be careful because I don’t want to represent myself as an expert in K through 12 or in K through 12 education policy. But I don’t think you have to be an expert in that to know that there’s been a lot of pretty well justified public pushback against over-testing in K through 12. And yeah, I think that we absolutely do have to be aware of that. Students come to us in higher education… that’s a system that many of them have been through, and our faculty are very aware and very cognizant of that too. So, nobody’s a blank slate here, not our students, not our fellow faculty. We have assumptions and ideas and experiences about testing that happen. I think those can be addressed. But, yeah, that is definitely another very big barrier. We’ve got to differentiate between high-stakes standardized testing for the reasons it’s done in K through 12, and low-stakes testing and quizzing for learning as proponents of retrieval practice would have it.

Rebecca: Some of the pushback I’ve heard from faculty falls into two categories related to this as well. One is that they assume that retrieval practice is best implemented in 100- and 200-level introductory classes instead of upper-level 300-, 400-, or graduate-level classes. And then the other area is that paper-and-pencil tests don’t make sense in all disciplines. And so they assume that a test has to be in a paper-and-pencil format, which could be online testing, or it could be multiple choice, or it could be essay questions. But I think that, from being someone in the arts, there are other ways to test beyond that, but we don’t think of those as tests.

Michelle: Right. That’s another great lens through which to look at this issue; that we do need to broaden the definition to draw more attention to this and to make it a more appealing concept. But yes, how can we make it broadly appeal across lots of disparate disciplines? Not only does it not have to be a multiple choice type of exam, maybe it’s not a pencil and paper exam at all. And we as faculty have to think about what makes sense there. You make a good point about the levels concept. I think, these days most of us have heard of, “Well, there’s one particular Bloom’s taxonomy…” which is a wonderful framework for getting us thinking about being systematic about what we’re asking students to do with the information that we’re bringing to a course and trying to do things like align the teaching we do with the assessments that we have. That’s wonderful. However, I think it does ingrain in us that idea that “Well, just knowing things is sort of at the bottom… You sort of get that out of the way, and then we go on to the good stuff.” And from a cognitive perspective, these relationships are much more fluid and much more interdependent, so that yes, absolutely, the higher thinking that is what we want. That is what we should want. Or if we’re in highly applied disciplines (if we’re in the arts, for example), we need students to be able to do things with that information. But they have to have that. So I think it challenges us to think of new ways with that concept as well.

John: One of the barriers I think some people have is they don’t like to grade tests and so forth. But one of the things you mentioned in your book is that the testing effect has been known for a long time, but it was really difficult to implement in terms of low-stakes testing, particularly when you’re teaching at a larger scale. But, as you’ve done yourself, and as you suggest in your book, computer technology makes it easy to automate some of this… certainly more easily for multiple choice and free response and similar things. But it makes it a whole lot easier both for students, who can have multiple attempts at learning something using some type of mastery quizzing, and for faculty, who don’t have to spend all their time grading.

Michelle: Right, and that’s absolutely where the practical stuff comes in. So, we’ve worked through some of the philosophical objections, that: “No, this does not turn your classroom into some terrible assembly-line concept of learning. It’s not going to create a bad relationship with your students. It’s not going to simply carry on a legacy of bad policy as people perceive it. It’s not going to do those things.” And then we do get to: “Alright, but what is this going to do to my life as a faculty member?” …and this is important stuff. Those who know me know I’m a big fan of James Lang’s work in his books. One of his more recent books is called Small Teaching, and there are a lot of different takes on it in that book, but it really hits home with respect to the fact that we do have to think about how not everybody’s in a position to, nor should we even try sometimes, just take everything down to the foundation and rebuild. We do really need to think about, “Do I want to do this, if it’s going to create 800 more questions for me to grade?” Is this sort of a situation where, “Well, I don’t want multiple choice, so I’m going to have to give these open-ended questions. I’m gonna have to give feedback and I have 200 students, what will happen? If I am going to go with multiple choice questions… well, how am I going to do this? Do I have to write all of these?” And yes, it’s one of the most powerful outcomes of the educational technology revolution that it makes this workable, and scalable in a sense, even with large classes, to do these types of things and to bring them in. So, that is definitely a message that I hope faculty think about if they’re on the borderline of wanting to bring more retrieval practice into their classes.
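
As a rough illustration of the auto-graded, multiple-attempt mastery quizzing John mentioned above, an LMS-style grading rule might be sketched as follows. The threshold, attempt limit, and function names are assumptions chosen for illustration, not any particular LMS’s settings.

```python
"""Sketch of a low-stakes mastery-quiz rule: students may retake a quiz,
the highest score counts, and "mastery" is reached once a threshold score
is hit. Numbers and names are illustrative assumptions only."""

MASTERY_THRESHOLD = 0.85   # fraction correct needed to count as mastered
MAX_ATTEMPTS = 5           # generous retakes keep the stakes low

def record_attempt(history, score):
    """Append a new attempt score (0.0-1.0) if attempts remain."""
    if len(history) >= MAX_ATTEMPTS:
        raise ValueError("no attempts remaining")
    history.append(score)
    return history

def quiz_status(history):
    """Report the best score so far and whether mastery is reached."""
    best = max(history, default=0.0)
    return {"best_score": best, "mastered": best >= MASTERY_THRESHOLD,
            "attempts_used": len(history)}

attempts = []
for s in (0.6, 0.8, 0.9):          # a student improving over three tries
    record_attempt(attempts, s)
print(quiz_status(attempts))        # best 0.9 -> mastered after 3 attempts
```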

Rebecca: I’m in a discipline where multiple-choice questions or using things digitally don’t always work for testing and practicing some of these basic things, and there’s not a good way to automatically grade it. But one of the strategies that I’ve used is actually some self-grading, which has worked pretty well. I just check, and I have them write notes about anything that they got wrong. So, it demonstrates that they’ve tried to understand when we go over it, and I give credit based on how thorough those notes are, rather than whether or not they got the question right or wrong. And that’s made a difference in my classes. And had I not come up with that solution, I think I would have abandoned it because it would have been too much work. But it actually is working pretty well.

John: When we had a reading group on Small Teaching last year, one of the things that was widely adopted by faculty was a very simple form of retrieval practice where they had students at the start of each class reflect back on what they had done in the previous class. And most of them have continued to do that in subsequent classes as well.

Rebecca: One of the other barriers that faculty might raise is the idea that students don’t take low-stakes things seriously, or that they don’t put the same kind of time into it that they might for something that’s high stakes. Can you talk a little bit about how we might help students find value in retrieval practice and subsequently also with the faculty then?

Michelle: Right, so that question: “Well, we can put it out there, but will they do it?” I’ve kind of crossed this philosophical and practical barrier for myself of giving some credit for pretty much anything that I am hoping that students will do. If I put the work in to set it up, and I do believe as a teacher that there’s reason to believe it will help their performance, then I need to work it into the syllabus somewhere. I don’t think it detracts from learning, necessarily, to say, “Yeah, there’s some points associated with this.” And especially with our students who, for many of us, are going in a million directions at once. They’re juggling jobs, multiple classes, sometimes their own families. So having an incentive in the form of points—having some kind of a payoff—I think, helps them make that decision that this is at least worth the time to do. I think the other thing that we probably can all do more of, and that I’ve done more over the years, is framing and honestly marketing this to students… communicating with them about why. And when students disengage from an activity like this, when they say, “Ah, why do I have to sit down and do this thing? This is just another test. Oh, no.” …really conveying the excitement and the goodwill that we have in setting those things up can go a very, very long way. Of course, a student, if they just look at it and they have no context for why this was put into place, they’re going to say, “Well, maybe I won’t do that.” But when we can market to students, we can say, “There is a lot of research that shows that this is a very, very good use of your time. And hey, you’ve probably taken a lot of tests in your life that were really about measuring or sorting you and figuring out what you know and what you could do. This is a very different kind of test.” So that can go a long way and gets students nodding: “Alright, I get it. I get why she put this assessment right here.” I think a lot of us have hit on the practical strategy, too, of the little Easter eggs or goodies that we plant in the form of questions that get re-used on the higher-stakes assessment. So most of us will have tests for measurement at some point in our courses. And yes, students really do pick up on it when you use one, two, or more of those items that were in, say, the gamified quiz that you ran in class or the reading quiz that they did beforehand. They can see those and say, “Oh, wow, I got feedback on that. I got an opportunity to practice…” and if it draws in a few more students who see it as almost a legitimate form of cheating, honestly… like a fun and sanctioned form of getting an advance sneak peek at the exam, then great! Then that maybe is an opportunity for them to come in and see that. So there is that. Actually taking it seriously ourselves, not just in the form of saying, “Well, here’s the points I’m going to give you for this,” but spending class time on it. That’s a big bridge for a lot of us to cross, right? Because we as teachers tend to be very focused on “Oh my gosh, class is for covering material.” We use all these sorts of distance metaphors to talk about what we want to do with our class time. But if I say, “You know what, guys, I believe in this, and I believe in it enough to where we’re going to spend the entire class period before the final exam…” I did that twice last week myself, when running exams. Or we’re going to spend five or 10 minutes at the beginning of every class period doing this, as one recently published project did.
If you show yourself doing that, and offer them that, I think that also goes a long way towards it. And I guess to just say, well, taking it seriously… here again, what does that look like? What does that mean to different people? And we can, a little tongue-in-cheek, say, “Well, why do we have to take it so seriously?” Sometimes games and learning can happen when it is presented in a more fun context. So not everything has to be deadly serious or something we spend hours and hours of stressful time on. There are occasions when a light-hearted approach can be perfectly good and can still get us involved in that really critical activity of retrieval.

Rebecca: I can share an example of doing that in my classes. We’ve done design challenges and sometimes we challenge other classes that are happening at the same time. That reinforces some of the basic principles that we think students should be using and reminds them of it… and they might work in a team… and then we have a competition. And it’s fun and what have you. And students like those, it breaks up the day, it makes it more fun. And then I’ve also done things where I give class time to do little design challenges in class that might be individual and then they can level up to working with a partner to finish solving a problem or something… and students value that. They recognize that the next time they’re trying to do a project on their own, it’s easier because they’ve had that practice or that opportunity for retrieval practice. And my students have actually ended up asking for more of those opportunities.

Michelle: Great.

John: Could you go back just a little bit and tell us what you did in those couple of classes last week before your final? We’re recording this, by the way, in early December during finals period on both of our campuses, but we’ll be releasing it a few weeks later… to put that in context.

Michelle: Oh, okay. Well, I’d love to. …and I’ve been talking about retrieval practice, as you pointed out, for years, and I still discover new ways to infuse this into courses. And the context for this is my cognitive psychology undergraduate course. It’s a 200 level, so it’s a lower-division course. And it’s about 60 or 70 students. And as you can imagine, it is a bit of a tough sell. For many of the students it’s their first encounter with this side of psychology. It’s not as intuitive as some other areas of psychology. So there’s a lot to learn and a lot of motivating to be done. One of the ways we bring this in in this course is using a technology called Kahoot, that’s spelled K-A-H-O-O-T, and it’s really very intuitive and functions very smoothly… relatively free of bugs. That’s good stuff. It’s a program for doing gamified quizzes of various kinds. What I did, at different points in the semester, and then really amped up in the last week of the semester, is running these gamified quizzes. And this is something, by the way, that I hit on and got the idea to try based on a colleague named Niki Bray, who’s from Tennessee, and has actually done some really systematic work in reformulating some of her courses around in-class quizzing in a just really ingenious way. So I saw some of her presentation and I said I’ve got to try this for myself. I went with multiple-choice questions. Kahoot does have some parameters… questions do have to be short… very, very short. And to some people that may be off-putting, but you can put together quite a few of these. And so we would put this up on the projector and students have the option of dialing in with either their laptop or their smartphone and weighing in on each question. The neat thing about it is it has an algorithm for giving points based on your speed as well as your accuracy, and it’s got a little leaderboard so you can actually have a little in-class competition. Now some people who use this do require all students to do it and they actually issue points for performance. Now, I presented it very much as a practice activity… and made it very, very clear, because of my philosophy, that I’m not going to assume that all students have devices or have smartphones or laptops or that they want to do that. But I said, “Look, remember we’re talking about retrieval practice, guys. So the real meat and potatoes of this is not buzzing in on your phone. That’s fun. But the real benefit of this activity is what you’re doing sitting there in your seat. And you could be doing this with a piece of paper if you want.” And that is what a few students opt to do. They try to answer the questions, they know what they need to go back and review, and so on. And it’s nice because it spits out, at the end, a whole report that tells me right away… Okay, which questions do we need to revisit? Which ones did students have the hardest time with? and so on. That’s one of the things that I just brought in. And yeah, it was a big deal. I sacrificed a chapter of “coverage” so that we would have more time at the end of the semester. But to me, I would rather have students going into the final knowing that they’ve had this retrieval practice and have a better chance of performing really well and earning a good grade on this material I care about than, honestly, cramming in a little bit more mileage in terms of quantity.

Rebecca: Sounds like fun.

Michelle: You know what, it does really bring a fun factor. There’s been a lot of different variations on in-class polling, and I will admit to this: I actually purchased and am the proud owner of a physical buzz-in quiz device, complete with a whole spaghetti nest of wires and an incredibly abrasive buzzer sound, and everything. So previous to this, my educational technology did include… I think I could have up to eight intrepid volunteers who would play a quiz game, and then I would have to appoint a points keeper and all this… and props are always a lot of fun. So when I say I believe in bringing in retrieval practice, I really do walk that walk. But I will say doing it by smartphone does allow for more participation, and I don’t have to worry as much about the minutiae of scorekeeping and stuff like that.

John: I played with Kahoot a little bit in my classes and students have loved it.

Rebecca: We’ve talked a little bit about framing things so that students take the practice seriously. What do we do about the students who just push back with “This is too much work. This is a lot of extra time…” or that sort of argument?

Michelle: I think that that’s another piece of that barrier to more faculty adopting this: not just the work involved, but, realistically, student opinions. Student evaluations matter a lot to faculty life. And of course, we all want to have that wonderfully rewarding semester, not the semester where we feel like we’re at odds with our students. So I do think a little piece of this is that we anticipate sometimes worse and more pushback than what actually happens in the end. So I think we have a fair amount of a dread factor when we go into something new like this. But that said, when students have an issue with more quizzing or more testing, here’s how those come out. I think, first of all, we do have to sometimes separate out the technology aspects of it from the quizzing or testing, per se. So, as John mentioned, this is one of the amazing things that educational technology does. But the flip side of that is, if you’ve ever used technology for education, in any shape or form, you know that it breaks down. And when it breaks down in a big class, that’s a headache for everybody, it’s a misery. So if you’re trying it, and things are not working out, you’ve just got to figure out, “Okay, how much of this is the assignment, the activity per se, and how much of it is that wonky quizzing thing that I got from the publisher that fell apart, or students getting hung up when they’re trying to dial in to the poll?” …and fine-tune those. I mean, when I first used Kahoot, I decided it would be a lovely idea to put four chapters’ worth of material into a 40-question quiz. And as you know, that took a long time… I mean, 12 questions can keep you going for a very long time, especially if you’re discussing… and some students got kicked out partway through and then they weren’t on the leaderboard, and I could have set the whole thing aside. But really, that was more that I needed to get the technology working. So that’s a big piece of it. I think that sometimes it is a perception issue with the timing. Now, those of us who are just all over this as a teaching technique, we like to do reading quizzes before we talk about that material in class. So chapter three is up on the syllabus and your chapter three reading quiz is due on Sunday night before we do that… and that can provoke a fair amount of confusion and, honestly, griping from students. They say, “Why did I get tested on this when we didn’t do it… we didn’t cover it in class yet?” And what do you know, that’s another framing and communications issue. Once they know that that is why we’re doing it, you get so much less on that side of things. But here too we do also ourselves have to follow along with what we say. If we say you’re doing this reading quiz so that we’re establishing a foundation, we don’t have to teach everything in the class itself, we could spend the time applying… well, guess what, then you do have to do that. So if I assign you to do the reading quiz on chapter three, and then on that Monday morning I go, “Okay, we’re going to go over chapter three starting at the top. And I’m going to show you all the PowerPoints for all these things that you already read,” well, yes, then student morale and student support will fall apart at that point.

Rebecca: Those are very good reminders.

John: Yeah, I’ve pretty much adopted this approach all through my class beginning when I first saw you present it. So I have students do the reading in advance, I have them take reading quizzes with repeated attempts allowed on those. And then in class, I have them working on clicker questions, and the TAs and I go around and help them when they get stuck on problems. But there is an adjustment, and students, especially when I first started doing it, would generally say, “But you’re not teaching me” …and you do have to sell students on this a bit. One source of resistance is that when students take quizzes, they often get negative feedback. When they read something and they read it over again, it looks more and more familiar. They’ve got that whole fluency illusion thing going and they become comfortable with it, and they feel that they’re learning it… or similarly, if they hear someone give a really clear presentation on a topic, it feels comfortable. They feel like they’re learning it until they get some type of summative assessment where they get negative feedback, and then they feel the test was somehow tricky. But it’s a bit harder to convince them that actually working through retrieval practice, watching videos at their own time and pace, reading material as needed, and then spending class time working through those problems is as effective. I’m getting better at it. But it’s been a long time trying to convince students, and I did take a bit of a hit in my evaluations, especially the first few times.

Rebecca: Do you mean learning’s not easy?

John: Students would like learning to be easy.

Michelle: You know, it’s funny, I think almost in a way… see what you think about this idea. But it’s almost a mirror image of our illusions as teachers, right? That I gave a wonderful clear lecture and I assigned wonderful readings and I saw students highlighting them. Therefore, students must have assimilated this knowledge and it must be in there. So I think it challenges our students but it also challenges us as well. It can be quite an eye-opening experience to run something like a Kahoot. And that brilliant point that I gave this great example for… what do you know… 7% of the students actually nailed it. And so we can all use a reality check… teachers and students. And you mentioned re-reading, that’s another one where I’ve had some pretty intensive conversations with other faculty. I’ll kind of say, “Oh, well, there’s this great research that shows that students tend to re-read when they’re studying and we know that from a memory standpoint, that is really, really ineffective.” And faculty will say, “Whoa, whoa, whoa. I want my students to be re-reading.” Of course, when they say re-reading, they may be picturing deeply interrogating a text… annotating it… looking at it from a different perspective. And absolutely, that’s a wonderful part of scholarship. But that isn’t what we’re talking about. We’re talking about students re-reading as a study strategy and mistaking highlighting for deep interrogation. So, just like with knowledge bases versus higher-order skills, this is, in other words, a “both and,” it cannot be “either or.” Yes, we want students re-reading in the right ways, but students and teachers cannot mistake that for learning.

Rebecca: So, let’s say you’ve just convinced all of our listeners that we need to be doing this, how do we bake it into our course designs?

Michelle: Well, I think it’s about really getting creative with this, and as I talk to faculty when I visit other schools or talk to faculty at my own institution, I just see all of these new ideas all the time about how to do this. Once you do get that critical epiphany that, alright, a test doesn’t have to look like a test on the surface… it does not have to be the ritual of “Okay, you’ve got a number 2 pencil that’s breaking and I’m standing over you while you have a panic attack for an hour.” That’s not what it is—it’s about the retrieval. Once you get that, all kinds of creativity opens up. So I’ve also started sometimes on the very first day with the syllabus quiz. So, especially if you have a smaller class, we all struggle with that “I spent all week writing the syllabus and it’s incredibly important, and then I’m getting questions about stuff that’s on it all the way through the semester.” So, really, on the first day, what I do is I take a very light-hearted approach and I divide the students up into teams, just physical teams. Everybody’s got a copy of the syllabus. Sometimes I fake them out and I say, “Okay, we’re going to go over the syllabus point by point…” and then I say, “No, you can read the syllabus. We’re actually going to do this other thing which research shows will actually help you remember it.” So the task here is that each team formulates a set of questions… just a few, like three… so a little bit of teamwork: formulate three good questions off the syllabus. And you’d be amazed, they come up with questions that even I can’t answer sometimes without having to cheat. So they talk about it. And then of course, you go around the room in some arrangement and each team gets to ask the other teams their questions, and if you really want you could keep score, and everybody loves bragging rights for being the team that stumped the other teams and won the most points. So, it can literally start right then, with that idea that I don’t read stuff to you in this class, this is about you. And I’m also not piling a huge amount of work on myself either, apart from being the moderator and the MC and having written the thing in the first place. I don’t have to write questions. They’re writing the questions and they’re answering them too. So those are some of the ways that we can do that. I think especially in our larger classes we do want to think about things like peer grading or peer review of open-ended question responses. We do want to take advantage of things like publishers’ test banks to set those up as reading quizzes. And you’d mentioned earlier, “Well, what about this not being suitable for upper-division classes?” I had an upper-division class that just wrapped up this semester where we had reading quizzes as well. It may have been an upper-division course but it also served that purpose of “Hey, you take a basic quiz over the chapter on Sunday and that really sets the stage for us to have a more substantive discussion.” All of those things are ways that we can do this. I think open-ended reflection as well… so tests that look nothing like a test but are still retrieval practice. You’d mentioned reflecting on what you learned last class period. So this sometimes goes by the name “brain dump.” And I did a bit of this last semester as well. By the way, I do want to credit the great website retrievalpractice.org. So retrieval practice is actually that high profile that it has its own website. It’s an absolute treasure trove of ideas.
So, with the brain dump, the way that I did it, or similar to what it sounds like you did, every now and again we start off class with students writing down everything they remember from last time on a piece of paper. I had students then turn to their neighbor and compare notes and see what they came up with. And I didn’t grade those. This was not a heap of grading for me. But in the end, what they did turn in to me for accountability and a few points was a very short reflection, just on an index card. So, “What surprised you?” …and I review those and they’re very eye-opening, but again I’m not there to police or micromanage what they put on their cards. So, that’s retrieval practice too. There’s lots of different flavors that we can bring in.

John: I do something similar with asking students to reflect on the reading in each module we work through. But because I teach a fairly large class, I didn’t want to have to deal with all the index cards. So I just have them fill out a simple Google Form. And then I can skim through it and assign grades much more easily than shuffling paper. But it’s the same basic idea and it’s worked quite well.

Michelle: Absolutely.

Rebecca: Do you have any other examples of the kinds of little tests that faculty can run that don’t look like tests?

Michelle: Right, so tests that don’t look like tests. Well, here’s another that was probably a little bit more practical for a small class, and this is how I ran this. It, like the brain-dump exercise the way I ran it, was also very cyclical and very student generated. So we started out class at the beginning of each week with students generating a set of quiz questions based on the assigned reading. We didn’t actually, in this particular example, have pre-quizzes or something like that. But students came in knowing that they could bring their book if they wanted. But they would need to sit down and write for me three questions, whether short-answer or multiple choice. Then I could flip through those and really quickly put them together into a quiz that went out the subsequent day. So, we’re alternating between generating questions and then returning to those questions. And then I would pass it out. I told them to treat this like a realistic test. Actually try to retrieve everything, but, you know, when time’s up, you’re going to get to go back to it. And then they would grade it themselves. So I didn’t actually do the grading either. And that is really great for spurring discussion. “Oh, my gosh, I thought I knew the difference between reliability and validity, but now that I tried to answer it, I realized that I didn’t.” And then you can throw it back out to the student who now has bragging rights for having had their question selected and say, “Hey, what did you mean? What was the right answer? And why did you put that down there?” And at the end of the day, they kept that quiz too. So it was really very much in their own hands to do. So there’s that. Other creative ideas that I’ve run across over the last couple of semesters… There’s a great project out there, run by Bruce Kirchhoff at the University of North Carolina at Greensboro. I got the wonderful pleasure of talking to that group last year. And he and some colleagues working in the area of botany actually put together a freestanding, custom-built mobile app that students could take with them that presented different kinds of quizzes over the sorts of things that botany students really need to know like the back of their hands, like how to identify different plants and how to discriminate among different examples, and they found some empirical evidence that this actually raised performance quite a bit. I’ve heard, too, of another professor who put together some surveys in Qualtrics, the survey software, and took advantage of its ability to actually text message people and send them the questions. So this was an opt-in sort of activity, and they didn’t just have it run 24-7 because that would be a little intrusive, but what it did is it sent students questions that they could answer at different intervals throughout the day, which also takes advantage of another principle from applied memory research, which is spacing. So students are getting these unpredictable questions, they have the option to answer them, and they could be happening even when they’re not in class. So those are some other ways. Some of them look like tests, some of them don’t. But those are creative ways to get students engaged in that practice.

Rebecca: As faculty members, we often advise and mentor students who might be struggling in other classes, ones that we don’t have control over. Are there ways that we can help those students use this methodology to do well in those classes where it might not be embedded?

Michelle: Right. And that’s so great that you bring that in as well, because ultimately that is what we need as teachers, and that maybe circles back to yet one other objection that I’ve seen… actually this time in a published article from a few years back that asked, well, are we doing too much for students? Are we scaffolding them too much so they’re going to grow really dependent on these kinds of aids like reading quizzes and reading questions? But if we also have in our mind, very intentionally, that what we want students to have at the end of the day is also something they can walk away with, I think that we do have to be very mindful of, “Okay, let’s not create the impression that just because retrieval practice is so important that you have to sit there and wait for me to put together this specific kind of reading quiz for you.” So I think here, the really powerful message is once again that once students take this to heart… once they’ve not just been told this, that “Oh well, you should quiz yourself as a study strategy,” but they’ve seen that I believe in it to the point where I’m going to put time, energy, and work into it as part of my class. And maybe they’ve even seen the results… they’ve now had their own little before-and-after experience of what happens when I do this… then they can be more likely to take this forward. So having more faculty across the curriculum endorse this is a powerful idea, that “Hey, this is how your mind works. This is how your brain works. Your brain doesn’t just soak up stuff that’s in front of it. You soak up stuff that you have to answer questions about.” That, I think, is going to be a powerful message. And there’s actually another article out there that I just absolutely love; it was done by a psychologist named Pennebaker and some colleagues at the University of Texas at Austin some time ago. They replaced high-stakes assessments in their Introduction to Psychology course with these mobile in-class quizzes that were done every day. And one of the things that they report in this article is not just that students did better in that class, but that certain subgroups of students actually showed improvement, specifically a closing of achievement gaps, in classes that they took after the psychology course. And these were classes that were not even in psychology. So you think about that for a minute. How does that happen? Now, nobody knows exactly for sure, because my impression is this was kind of an unexpected finding in the study, but a real possibility is that when students have sat in this class and every single day they have seen the power of taking a quiz… of spacing out their learning… and attacking it with a very active learning approach… once they’ve seen that happen, they’re more likely to go home and say, “You know what? I can do this in my biology course too. I don’t have to sit there with the teacher’s quiz. But, if I attack it in the same way I might get the same results.” So I think with those things, we can have students walking away with that enduring practice that we want them to have. And it is funny too, because it brings up, I’ll just say it, a really heartwarming teacher moment that I had this last semester when I did bring in these Kahoot quizzes. In the run-up to the first exam, I had done my fancy little in-class quiz and was kind of patting myself on the back about what a great leader I was.
So I came in to, I think it was, the class right before this exam, and I come plowing into the classroom and it’s dead quiet. And there is a student at the front of the classroom. He has commandeered the podium and the computer, and what has he done? He’s accessed the Kahoots, which I gave them the links to. You know, they’re out there for all the students to see. And he is running them, and the rest of the students in the class are taking them just as seriously as when I was administering them. So they’re running through the questions again, giving it another shot. I didn’t tell them to do this at all. It was one of these moments where I just backed out and closed the door after me. And I came back in when they were finished. And so I think those are the kinds of moments that we can set ourselves up for when we really do bring this into our classes and we get behind it in a really authentic way.

Rebecca: I think one thing, Michelle, in the way that I asked the question: I had also asked it from the perspective of a student who hadn’t had the experience of doing retrieval practice in a class. So, if you’re working with a student, maybe outside of class or as an advisor, are there things that we could do to help those students adopt those practices, even if they haven’t seen it modeled for them?

Michelle: So how do you help students when they haven’t had some of these experiences on their own? I think this is part of a bigger package of goals that a lot of us should have in supporting our students: to really put students in the driver’s seat. To say, “Yeah, you don’t have to wait for a certain style of teaching or certain subject material in order to succeed with it.” I see that as really part of a larger package of growth mindset, honestly. So what can students do to make themselves the masters of this? Now, I think that there are some old standard, very traditional approaches that are worthwhile. And when you look at those approaches, sometimes retrieval practice is at the core. So it may not be a matter of trying to get a very unfamiliar set of terminology or anything before students, but really getting them to look at some of these approaches in a new way. So things like… have you ever heard the term SQ4R or PQ4R? There’s a couple of those that have acronyms, and the things they have in common are that they tell students, when you sit down to study or read a text or prepare for a test, here’s what you do. You don’t just start reading from the top with no goal in mind other than “Oh, I want to get a good grade later.” What you do is, first of all, set yourself up with questions. Which is, after all, a lot like retrieval practice. Start with a question, say, “What do I want to answer? Can I answer that now?” And if not, why not? Then read your text or go through your material very intentionally around those questions, and then there’s always this piece of recite or review. That’s what one of those Rs stands for in some of those traditional systems. And that means closing the book. That means closing the book and saying, “Okay, I’ve sat here with this material for this amount of time. What can I actually say about it at this point?” So, sometimes directing students back into some of those and saying, you need to adopt these strategies, which are completely teacher-independent, and they’re fairly discipline-independent as well. That can be good. If you’re doing them for the right reasons and with the right approach in mind, that’s very good. Encouraging students to take advantage of the publishers’ companion sites… Now this is a little bit more of an uphill climb. That is where we run into “Well, if there’s no points for it, why should I do it?” But encouraging students to say, “Look, you already paid for this textbook. To be crass about it, you paid all this money; did you know there’s a website over here, and if you interact with those materials in this particular way that’s really, really likely to pay off for you?” So, besides just ensuring that our students have what I consider to be these basic foundational pieces of knowledge about the mind and brain: that we remember through testing, we remember through retrieval and active engagement. All students should have that, but those are some specific things that we can counsel students to use across all their studies that really should pay off.

John: Do you have any other suggestions for those faculty that are thinking about expanding their use of retrieval practice?

Michelle: You know, just to really encourage and support faculty who are starting this journey, as it sounds like you all have, to really re-examine how we can bring in this incredibly powerful principle, and to really reassure each other that, “Yeah, this is not really about just piling so much more work on yourself.” We can sometimes even just re-examine the assessments and assignments that are already in the course, and so that’s kind of one last piece of practice that I think a lot of us can really stand to bring in. And that was a big thing for me as well, to say, “Well, if I’m going to administer these tests… we administer tests, we administer midterms anyway… what else can we do to increase their value as learning experiences… as learning events?” So here’s another where I’ve brought in various forms of test discussion activities. Instead of standing in front of the class for that deadly day-after-the-test class period where I say, “Let’s go over it,” I’ve come to realize, you know what? That probably will not work that well. But one of the amazing things that we have learned from the research on retrieval practice is that it’s not just in the moment of taking the test where this advantage to memory happens. It actually creates a sort of receptive window for learning and for review when we’ve just taken a test on something. So if you’ve got a midterm in your class already, well, hey, why not carve out the time after that test is up to, say, give it back to the students… like I photocopy the exams before I’ve graded them. They have no feedback on them whatsoever. And I hand them back to students as a group discussion exercise. I say, “Alright, here’s a blank copy of the exam. Your group can fill out this blank copy together, just knowing what you know, revisiting all those questions, having those good discussions about what you understood and what you didn’t.” And I offer a little bit of extra credit for really good performance on that. And there’s other ways to work that out so it actually takes off as a small group exercise in class. But regardless of the specifics of how you make something like that work, the spirit of it is the same: that when you sat down to do this, to try to drag all this information out of memory, that is not an end unto itself. It should be part of a bigger picture of learning in the class. And it’s sort of an untapped vein of potential that we have as teachers and that our students have as well, and we can access it regardless of what the discipline is or how the class is set up.

John: …and students see it’s very relevant. I started doing something very similar in my econometrics class last year following a suggestion from Doug McKee, who had been doing that in his class. And it worked remarkably well… and turned around what was normally a pretty unproductive class period where we’d go over the test… and the people who did well would just be really happy and pretty much ignore any discussion, and the people who did badly were just sitting there unhappy and not really being very receptive. But when they sit there and they’re explaining it to each other, it seemed like a really ripe time for them to learn the material much more deeply.

Rebecca: I remember the first time that you implemented that. You sent me a text message with a photograph of your students taking a test and it looked very active. [LAUGHTER]

John: …and they were having fun.

Rebecca: Yeah.

John: One of the nice things that came out as I was wandering around listening to their conversations was hearing people say, “Oh, yeah, now I see where that came from.” Or someone would say, “Well, when did we learn this?” And then someone else would say, “Well, remember when we worked on these problems?” …and it just helped them make connections… and the power of peer instruction is so remarkable. I’m going to do it in as many of my classes as I can.

Michelle: That’s perfect. That’s what we want as teachers, and that’s what our students benefit from.

Rebecca: So as you know, we always wrap up our podcast by asking what’s next?

Michelle: Oh, wow. So I am looking forward to kind of rebooting a course that I have not taught in some time: my senior capstone course on technology, mind, and brain. And that is a fun one. But it’s one that is going to take a lot of revision, since it is technology and things change so rapidly, as we all know. So that is going to be a big part of next semester. I also have a crop of research projects on various angles of teaching, learning, and educational technology that I’m really excited to be moving forward in the next calendar year. One of those… really foremost among them… is a project on virtual reality for learning. We have an incredibly creative and dynamic team looking at virtual reality here at Northern Arizona University. They put together an amazing series of interactive exercises that are part of the organic chemistry course here and teach some of the challenging concepts in that extraordinarily challenging gateway course. And so we now have a whole set of data from students who went through and did this at varying points during the semester. We got their feedback, and we’ve got all different kinds of psychometric measures that we gathered from them at the time as well. So, I cannot wait to be tackling that and looking at all kinds of angles on how this piece of technology is impacting student learning in this course.

Rebecca: Sounds like some exciting adventures.

John: That sounds like a wonderful research project. …looking forward to seeing what you find.

Rebecca: It was a real pleasure to talk to you again, Michelle. Thanks for spending some time with us.

John: It’s wonderful talking to you. Thank you.

Michelle: Oh, likewise, always a pleasure to talk to your listeners.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

61. A Motivational Syllabus

Do you wish your students knew what was on the syllabus? In this episode, Dr. Christine Harrington joins us to explore how we can design a syllabus that helps us improve our course design, motivates students, and provides a cognitive map of the course that students will find useful. Christine is a Professor of History and Social Science at Middlesex County College, and is the author of Designing a Motivational Syllabus (and several other books related to teaching, learning, and student success). Christine has been the Executive Director of the Student Success Center at the New Jersey Council of Community Colleges.

Show Notes

  • Harrington, C., & Thomas, M. (2018). Designing a motivational syllabus: Creating a learning path for student engagement. Sterling, VA: Stylus Publishing.
  • Bain, K. (n.d.). The promising syllabus. The Center for Teaching Excellence at New York University. Retrieved from: http://www.bestteachersinstitute.org/promisingsyllabus.pdf
  • Listeners to this podcast can purchase Designing a Motivational Syllabus at a 20% discount by visiting the Stylus Publishing order page and using the offer code: DAMS20. This offer applies to the paperback, hardcover, and ebook versions and is valid through 6/30/2019.
  • www.scholarlyteaching.org  – Christine’s website.

Transcript

John: Do you wish your students knew what was on the syllabus? In this episode we’ll explore how we can design a syllabus that helps us improve our course design, motivates students, and provides a cognitive map of the course that students will find useful.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

John: Our guest today is Dr. Christine Harrington, a Professor of History and Social Science at Middlesex County College, and the author of Designing a Motivational Syllabus—and several other books related to teaching, learning, and student success. Christine has been the Executive Director of the Student Success Center at the New Jersey Council of Community Colleges.

John: Welcome, Christine.

Christine: Thank you, it’s a pleasure to be here.

Rebecca: Today our teas are…

Christine: I am not drinking tea; I’m not a tea drinker, I just do water and I will do iced tea once in a while, but not at the moment.

[LAUGHTER]

John: I’m drinking Tea Forte black currant tea.

Rebecca: And I have Prince of Wales today—mixing it up, you know?

John: We invited you here to talk a bit about your book, Designing a Motivational Syllabus, released just this past May. The syllabus of many faculty tends to read sort of like a legal document, it often tends to be a bit off-putting, and some people just provide a list of topics. You have a much different approach and it seems really productive. Could you tell us a little bit about that approach?

Christine: Sure, I’d love to. Thanks so much, John. I really believe that the syllabus is an underutilized resource. As we’re beginning our semester as faculty we always are required to put together a syllabus that explains to students what the expectations of the course are. But as you mentioned, many faculty treat it as a list of do’s and don’ts, making sure that we’re communicating what the classroom rules and expectations are and maybe the course topics, but it often kind of starts and stops there. So, I really think that it’s an opportunity for us to invite students to our course and Ken Bain is actually someone who’s written a lot about this as well, you know, inviting them to the feast. I think that’s what we want to do: we want to give them the excitement and passion that we feel as faculty and get them really excited about the course too. So one of the areas I saw as a gap or missing in the literature was the motivational angle of the syllabus. In addition to providing some really good resources and providing a course map to students, I think we can motivate our students through communicating our passion, telling them a little bit more about what to expect in the course in a more conversational style… by the words that we use… by the images that we include on the syllabus… and then also providing them really helpful information so that they view this as a course that they’re excited about and they will feel supported in.

John: We actually had Ken Bain here about 12 years ago, I believe it was, and he gave an all-day workshop on building a syllabus and I attended that and it was wonderful; much of your book reminds me of that, but you also go quite a bit further and provide a lot more suggestions in detail, so I like your approach. Could you tell us a little bit more about how the syllabus serves as an entry point to course design or redesign?

Christine: Sure. I think that for many of us the idea of redesigning a course, or even designing a course for the first time, is quite daunting and overwhelming: to really think about how to engage our students and achieve all of our course learning outcomes. So, I’ve used the syllabus as a vehicle for that, as an entry point. I find that faculty find it a little easier if they’re working with a more concrete, practical document to help them understand course design. Now, I have been accused by some of my colleagues of doing the old bait and switch, you know, I did a syllabus workshop that really was a course design workshop and they’re like, “Wait a minute, I think that you tricked me here,” and I said, “No, I didn’t, I thought you were talking about the syllabus, and if you’re going to talk about and think about what kinds of assignments and assessments you’re going to use—this is course design.” So you need to have a larger conversation. So, I said it’s an easier way for faculty to begin the dialogue and really take a good deep look at, well, what is it that I’m asking students to do and how does this fit into the overall course as well as the overall program that the course fits within… so, seeing the larger picture in terms of the course and program learning outcomes and revisiting the assignments and assessments and perhaps moving away from some of the “always have done this” kinds of assignments… you know, the traditional exam… paper… presentation… that almost all of us have in our course in one way or another. It doesn’t mean you have to abandon ship, but it is a great opportunity to step back and say, “Wait a minute, are these the assessment tools that are really going to help support student learning and help ensure that they’re going to achieve the learning outcomes that we set forth in the syllabus?” and then the syllabus really becomes the map for students. So it is the document that communicates the design of your course. As you are crafting your syllabus, you’re really thinking about: “What is it that I’m asking students to do? Why am I asking them to do that? And what kinds of supports am I going to put in place so that they can accomplish those tasks successfully?” So, I really believe that it is a course design tool and that if you do it well, a well-designed syllabus really will show students exactly how they get from point A to B and the kinds of supports that are available to, you know, really enhance their learning journey along the way.

Rebecca: [If] faculty were to use the syllabus as a way to redesign, where in this syllabus should they start?

Christine: Well, that is not an easy question. You know, I think it depends. I tell faculty they have two choices when they’re thinking about redesigning their syllabus: you can take the big approach, which is the course design approach, which is going to be looking at your course learning outcomes and then looking at what kinds of assessments or assignments are aligned to those learning outcomes… and then what I usually do for this big approach is ask them to think about what are the key summative assessments that you’re looking for and then work backwards from there using the backwards design process and determine what kind of formative assessments they need to use… and then as you start to craft the way your course is going to be developed, I say take your course outline, your schedule of what you’re going to do—week 1, week 2, week 3—and start to plug in those summative assessments and then start to plug in the formative assessments that you’ve identified… and then that will help you determine what needs to happen in week 1, 2, or 3 to help them be successful on those tasks. So, it’s really kind of using that backwards design. I like to say it starts with the learning outcomes, it shifts over to the assessments that you’re going to use, and then it starts to move into the course outline, you know, or sequence of topics that would be really important. But as I said, there are kind of two ways that faculty can begin to redesign their syllabus. This is the big way and the way that I would really love faculty to do it, but if someone’s saying to me “The semester’s starting next week, that’s just too monumental of a task…” You cannot engage in this process in a day or two. It takes a significant chunk of time for you to really re-evaluate what it is that you’re trying to accomplish. So there are also a lot of takeaways in this book about how you could do things literally in five minutes or less that would enhance motivation and engage students in learning. For example, I did a workshop on my campus where we had a syllabus redesign summer camp, and at the beginning of the semester we had faculty submit their current syllabi and at the end we had them submit their final syllabi… and the transformation just from a visual perspective alone was really incredible. So, even if you didn’t look deeply at the design piece… for instance, having a nice photo to draw students in (that’s related to your course content)… there was a biology faculty member who put this amazing, great, engaging skeleton on the front page… and that really was much more effective than her first syllabus, which was all rules and regulations… you know: “do this, don’t do that…” A welcome statement and a picture of yourself… really, just some of those kinds of elements can make a difference. There are some research studies out there showing that adding a few additional words like “please come and talk to me” makes it much more likely that a student will come and talk to you, and it communicates that you want them to come and talk to you. There are some very easy fixes: changing it from formal language such as “the professor will” and “the student will” to “I am going to” and “you will do this…” Just using that more personal language can really help. So those fixes are literally… you could do them in five minutes or less if you want to make a couple of minor changes to increase motivation, but the overall course design is obviously a much bigger process, I’m not going to pretend it’s not.

Rebecca: I think it’s always a good reminder that you can do small things before you jump into a big thing, and that the big thing is, you know, valuable. We were laughing at the beginning about what you were saying a minute ago, because I did the same exact thing here where I did a syllabus workshop that was a complete course redesign workshop.

John: …and I suggested we rename it in the future as a course design or redesign workshop, but maybe leaving it as a syllabus workshop might work.

Christine: Yeah, I think you’ll get more people to participate. It’s less scary. [LAUGHTER]

Rebecca: It’s sneaky.

John: It is. It’s a sneaky way of getting in.

Christine: …and it also really allows faculty to walk away with something very practical… tangible… that they actually have done as evidence of participating in that workshop, and that’s great for administrators to see as well.

Rebecca: That’s a good point. So you talked a little bit about the syllabus as a tool for faculty to help think about organizing their class and redesigning it, making sure that students are going to learn what we’re hoping that they’re gonna learn. Can you talk a little bit about how the syllabus is a tool for students?

Christine: Sure. I think it’s really critical for students to not just take this document and put it aside, but to recognize the value that it has… and I will tell you that students who see a more in-depth, comprehensive syllabus have a much more positive perception of the faculty member and also of the student experience of being in that class. From the student perspective, it’s motivational for them to know that they have a faculty member who cares enough to put together this really comprehensive package. Having a long syllabus that does not have any visual tools in it and is overwhelming… or is full of legalese… that is something that students are not going to use much. But when you create a syllabus that’s motivational and engaging and visually effective, students will use that document and they really will appreciate it. Now, they do need reminders about how to use it. I think that it is a document that all of us quite frankly emphasize in the first day or first week of the semester and then often don’t revisit except to say “it’s in the syllabus…” which may or may not help a student if they’re having trouble navigating it. So, I’m a very big fan of making sure that we make it a document that is student friendly. If it is a longer document, including a table of contents, so that they know they don’t need to read all of it. Maybe the last half of the syllabus is the rubric section with specifics on the assignments… they just need to know it’s there when the time comes for them to look at it. So I will often encourage students to view the syllabus very much like their textbook. You don’t need to read the entire textbook during the first week of class, but you certainly need to know what’s in the textbook so you’re not just focused on chapter one. You need to acclimate yourself to all of the information that’s in the text and what kinds of topics and resources are included. Well, the same goes for the syllabus… so really helping them use it as a resource and not feeling like they need to read it and memorize it, but instead use it as a tool to help them be successful.

John: One of the things you suggest is doing a screencast with the syllabus perhaps to make sure that students do look at it and to make it a bit more welcoming. Could you talk to us a little bit about that?

Christine: The screencast, I think, is a very valuable strategy, especially in an online class, but it can also be helpful in an in-person class. We all know that sometimes students are adding and dropping in the beginning of the semester and might miss an important conversation, and this really allows you to communicate about the syllabus to students. In addition, I will tell you that I have had students… I tend to have a fairly lengthy syllabus, as you can imagine, based on my textbook; I like to include a lot of resources… and I have had students say to me, “When I got the syllabus via email from you, I really was overwhelmed and I was ready to run away from this class; I thought it was going to be a lot of work because it was a long syllabus, and once you explained it and we started to see the resources in it, I discovered that’s not the case at all.” So I think that having that personal touch and the nonverbals that you can really communicate through a screencast with a web video as well as the audio really does help students understand the value of the syllabus. And we have so many great online tools now, like Screencast-O-Matic, that are free… things like that… so you can really easily do an introduction in a short period of time, and students can refer back to that as they need to throughout the semester.

Rebecca: Can you talk a little bit about the kinds of things you would recommend faculty highlight or introduce about the syllabus on the first day? You mentioned identifying some of the resources and things in it. But, as you know, there’s a lot of faculty that call the first couple class days syllabus days and some of them actually read the syllabus to the students. What would you recommend?

Christine: Well, I certainly would not recommend reading the syllabus to the students. [LAUGHTER] That is not engaging. I think that the part of the syllabus that doesn’t get as much attention as it should is the “Why are we together?” The syllabus communicates the purpose of the class, the goals of the class, and “What are the takeaways that they’re going to get as a result of being in this class?” What students are going to immediately want to know is about the grade, right? They go directly to the page that has information on the grade and the assignments that they need to do. But if they go there first, they’re missing the big picture. So, I think that we as faculty have a wonderful opportunity, whether it’s through a screencast or live in a regular classroom setting, to emphasize the learning outcomes of the course in a user-friendly way… not necessarily reading the learning outcomes, but passionately explaining why this course matters so much and the value of the course and the skills, and not just the knowledge that they’re going to get, but really the experience and the confidence that they’re going to get as a result of being in this course. So I really find that to be the most important piece to emphasize, and then helping them see the direct correlation and connection between what it is that they’re going to achieve and those learning experiences. So whether they’re assignments or assessments… the why behind all of those… so they don’t just view it as a big long checklist of “this is what I have to do because it’s a college course,” but they understand that it’s the roadmap that’s going to help them accomplish all those tasks. So, for instance, if I can give you one example, quizzing is something that not every faculty member does; sometimes it’s more of a high-stakes midterm/final kind of situation… but faculty who really want to provide that opportunity for students to have formative assessments along the way would also include quizzing… and when you do that, what happens is that you’re helping students learn those skills along the way and helping them self-regulate whether or not they’re on track to achieve the learning outcomes. But students may view them as busy work, or think that you don’t believe they’re going to read without being held accountable. So explain the why and the rationale, and bring in some of the research on the testing effect, and explain to them that the reason for doing this is because the research shows that if you test yourself you are much more likely to learn that content and it will stick with you for much longer periods of time. So providing the why, I think, is probably the most important part that I would bring their attention to, and I think that we don’t do that enough as faculty.

John: Just as a plug for a future podcast, Michelle Miller will be a guest in a few weeks where she’ll be talking about the testing effect and retrieval practice.

Christine: Terrific.

John: But that is an issue. Students see testing as something negative; it’s not something they find quite as enjoyable… so providing that rationale is really useful and students don’t always buy it but the more you can convince them and the more evidence you can provide, I think the more likely it is that they’ll see the benefits.

Rebecca: Yeah, John and I have talked about this before that when I started doing that in my design classes, which is a place where testing is not as common, I had students actually asking for more, which I found to be very bizarre initially. You don’t generally have students asking for more tests or quizzes, but when they started realizing how it was helping keep them on track they actually found them really valuable.

John: In helping them assess their learning and to help improve their own metacognition of what they know and what they don’t know, it can be really useful.

Rebecca: One of the things that you have in your syllabus is a teaching statement. Can you talk a little bit about why you include that and why you recommend including that? Because that’s not something you commonly see in a syllabus.

Christine: Absolutely. In fact, there are a couple of elements that I think are essential if you want to use the syllabus as a motivational tool, and I see the teaching statement as being one of the key elements. It’s an opportunity for you to start to build a relationship with your students, and it gives you a chance to share some background about who you are and why you’re passionate about the subject and what they can expect to happen during class. As we all know, professor-student rapport is probably one of the most important predictors of success. Students who have professors who they believe care about them and are interested and engaged are much more likely to be successful than students who have faculty who are not as engaged and maybe not as connected to them. So, I believe that we can use the syllabus to begin developing that relationship, because we often send this out prior to even meeting a student for the very first time, and it also might be something that is shared on some kind of management system within the university or college setting for students to decide which classes to take. So it can show them why they should be taking your class—it’s really a wonderful way for you to share a little bit about yourself and your professional background, expertise, and passion.

John: You also suggest in your book that the syllabus can serve as a communication tool, and that it makes it easier to be transparent in terms of how you grade and to let students know this, which can increase equity, or at least a perception of equity. Could you talk a little bit about that?

Christine: Sure, I think it’s really critical that we are being as explicit and as transparent as we can be. There are going to be some students who can more easily connect those dots than others, and when we connect the dots for them we’re leveling the playing field to ensure that all of our students know what it is that they need to do in order to get to the finish line and how the different tasks relate to one another. So the more you can communicate and ensure that some of the students who may not naturally see those connections can see those connections, I think that really does improve learning and the academic experience for all students.

Rebecca: You mentioned earlier about referencing the syllabus and having students use the syllabus as a tool throughout the semester; you also mentioned early on that faculty have a tendency to say “it’s on the syllabus” without really providing much more guidance than that. Can you talk a little bit about ways that you recommend using the syllabus at later points in the semester to help support students and continue to motivate students beyond just the beginning of the class?

Christine: Sure. I think that is critical. You know, many of us do activities on the very first day of class. I’m hoping that many of us are not reading the syllabus anymore and we’re starting to use more engaging strategies at the start of the semester. I know folks do a syllabus quiz and things of that nature. I actually think that having a group quiz format, something that’s more interactive, is great. I do jigsaw classroom exercises at the beginning of the semester on the syllabus. They’re diving into that resource, understanding it, reporting back, and teaching their classmates about the different section that they were assigned to. I think setting the stage at the beginning of the semester is really important, but we can’t stop there; we need to then follow through and revisit the syllabus throughout the semester. So what I typically do, at the beginning of class (I always ask students to have their syllabus with them), is give them a few minutes for an activity I call dusting off the cobwebs… where they have to recall what we talked about last class and maybe from the readings, and then we can look forward. So after we clean up our house in terms of where we were, we turn to what’s coming next: what are we talking about today? How does this link up with the concept that we talked about last class? And then what’s coming up in terms of what’s due? So instead of me putting on the board or on a PowerPoint slide, “Next week, don’t forget you have to submit the first part of your project” or whatever it might be, I’m having students give those daily reminders. So you can literally spend five minutes or less in a class, and maybe once a week; it doesn’t have to be necessarily every class… but maybe Mondays will be your dusting day and looking forward opportunity. So I think that’s really helpful. The other time where I spend a little bit more time on it is when there is a big project that’s coming up. So at this point of the semester I will often have students work in either a partner group or a small group, and in that situation I’m asking them to look at pages 12 to 14 that outline the details related to assignment 1 and the rubric of how you’ll get assessed on assignment 1. I want you to review that. I want you to put that in your own words… tell your classmate about what it is… and then you have an opportunity to ask me any questions about it. So, I’m basically training them to engage in that process. Again, this doesn’t need to eat up a tremendous amount of class time; it can be a few minutes. But by doing that you often end up getting better products to grade, which makes your life much happier when it’s time for all the papers to be handed in, because even though you put it in the syllabus, it doesn’t mean that they’ve looked in the syllabus… or they knew where to look… or maybe something didn’t make sense to them and they were not comfortable asking without the opportunity given to them in that very explicit way. So, I find that that really is a very helpful process. I also like to do an activity kind of mid-semester looking at the learning outcomes… so, going back to saying “Okay, so here’s what we said we were going to learn and be able to do at the end of the semester. We’re about halfway done. I want you to look at the learning outcomes and do a self-assessment. Where are you at on a scale of one to five? What do you need to do in order to get to the level five at the end of the semester?
…and some of that’s going to happen obviously in classes or through the assignments. But, what else can you do to ensure that you’ll achieve all of those learning outcomes?” So, I like to use it in a self-regulatory way as well.

John: One of the things related to that is you suggest the use of an assignment grade tracking form. I’ve always kept my gradebook in Blackboard so students can see where they are, but students don’t always seem to pay much attention to that. Having them create their own assignment grade tracking form might be useful. Could you talk a little bit about what the form is and why you recommend it?

Christine: Sure. I do think that with our current technologies (Blackboard, Canvas, things of that nature), the LMSs really do have a pretty robust gradebook feature where students can easily track their progress. In order for them to self-regulate, they need some external data to see whether or not they’re on track. To me, as long as they’re engaged in that checking and self-regulatory behavior, I don’t think it matters whether it’s in the syllabus or in Canvas or Blackboard. But unfortunately, not every faculty member is using the gradebook to its full capacity, so sometimes students are left wondering about their grade, and I want them to feel in charge of knowing how to do it. I also think that they sometimes have a hard time seeing the weighting of assignments, so they might view a smaller assignment as being equal to that of a larger one and not recognize the significance that can have on their grade. In the absence of some of the technology tools… and there are great apps for this too, so if a student has a faculty member who is not using an LMS gradebook, they can go ahead and download an app… and I think that’s a great way to track it as well. But just including something like that on the syllabus helps them see the breakdown of the weighting of the different grades so they can see how that final grade is determined. Because I think you’re right. In the LMSs I see that students are often looking only at the current calculated grade rather than looking at all of the pieces of how that grade came to be. So anything we can do to help them better understand the grading process and how those elements go into the final grade, I think, is useful.

Rebecca: In your book and also in the example syllabi that you’ve provided (both on your website and also in your book) you talk a little bit about your assignment sheets with rubrics and things completely spelled out… so not something that’s more generalized but something incredibly specific. Can you talk a little bit about the choice to do that and the advantages of doing something like that?

Christine: Sure. So I think that we all provide students with details about our assignments; it’s a question of where that happens. For some of us, we think that should happen outside of the syllabus, in the LMS in a different place… under assignments or some other tab, rather than in the syllabus itself. I think it’s really helpful for students to have a complete package in the syllabus. Now, just because I think that doesn’t mean that it’s true, right? Actually, I did a really neat study with a colleague of mine, Crystal Quillen at Middlesex County College, where we examined student perceptions of syllabus length and we shared different syllabi. There was a 6-page, a 9-page, and a 15-page syllabus, and students were randomly assigned to different groups. What we discovered was that students who were reviewing a medium or long (9- or 15-page) syllabus actually responded to that syllabus more positively. They had a more positive perception of the faculty member in terms of being motivated and things of that nature. In addition, we asked the students the specific question, “Would you rather have all of the details about your assignment in one place in the syllabus, or is that not what you want? Would you rather just know ‘I have to write a paper’ and then have those details about the paper be provided at the point that you need them and in a different place within your learning management system?” …and 66% of the students said they want it all in one place. I think one of the challenges that our students have is that every faculty member sets up their LMS page a little differently… and I know colleges really work hard to try to have some consistency across the different course shells that exist… but students really do struggle with trying to find that information. If we can guarantee to students that all of the essential information they need about their assignments and their learning path is in the syllabus, then I think that makes a lot of sense. I really think it’s important for faculty and students to understand that it’s almost like an addendum to the syllabus, but it is in that document. So they don’t need to get overwhelmed by it on day one, but they know that it’s a resource… very much, as I said before, like the textbook is.

Rebecca: I think that’s an interesting point, right? A lot of our students are in five classes, and if there are five different ways of doing things, and it’s organized five different ways with five different evaluation systems, it can get a little overwhelming after a while. It’s a lot to keep track of. We often complain that students don’t keep track of things, but we certainly don’t help.

John: We’d like to reduce the cognitive load.

Christine: Yeah, that’s for sure. Unfortunately, that’s the case, and sometimes we need to step back into the student lens to see what life looks like from their perspective. …and if I could just say one other thing about the “It’s in the syllabus” comment… I mean, believe me, I pour my heart and soul into creating my syllabus. My husband often laughs at me because he’s like “Haven’t you taught this course before? You look like you haven’t taught this before…” because I’m spending hours and hours and hours, and I just had it last semester, but I know it could be so much better. I’m trying to find ways to communicate it better. So, I know that the information is in the syllabus because I put it there. I spent many hours doing that. But if a student is active enough and engaged enough to ask me a question about an assignment or the course and I just say “It’s in the syllabus”… my syllabus is a long document. I do need to navigate them to which part of the syllabus it’s in, because my syllabus is probably a little different than other syllabi that they have looked at. So, it’s easy for us to get frustrated when the students may not have looked carefully before asking us. That’s a skill we need to help them learn. But maybe they did look and they didn’t find it as quickly as they would have liked. Let’s be honest. You and I are also not going to spend an enormous amount of time looking at a website if we can’t find what we’re looking for right away. We’re going to ask someone. So, we want to make it as easy to navigate as possible, and having consistency across different courses does help… not that you need to have a rigid standardized syllabus that looks exactly the same in every course. I think you need to have a little bit of room for the flavor and the personality of the course to show.

Rebecca: Those are things that we just forget about. We forget that it can be really overwhelming to look at documents. Yet we all complain about the same thing when someone else makes a document that we have to look at and we can’t find something. So, it’s good to double-check ourselves. So, I appreciate the reminder.

Christine: Absolutely.

Rebecca: One of the things that needs to be in a syllabus to some extent is the policies… there are college policies that might have particular language that you have to include, but then there are also your own policies as an instructor. How do you suggest including those in a motivational, inspiring, and supportive way? Because sometimes they don’t feel very inspiring or supportive.

Christine: That’s a great question. In fact, I have noticed that probably one of the biggest demotivators of a syllabus is the policy section… and many times faculty add more and more policies based on negative experiences that they’ve had. So something happens in a classroom setting and they’re like “I need a policy on that, so I’m going to add another policy about that…” and it starts to become this really big long laundry list. And clearly we have to have policies… I’m not saying we shouldn’t, but I think the way in which we communicate our policies really does matter. When we have a list of “don’t do this: don’t cheat, don’t plagiarize…” all these kinds of rules and regulations… we’re kind of communicating to students that we think you’re going to do this, so I’m going to set you straight right now… rather than using more positive language. Instead, we could communicate the same policy differently… I like to use the academic integrity one. So instead of saying “don’t plagiarize,” you could have a policy about academic integrity and the importance of why that matters so much and how everyone is expected to uphold academic integrity and engage in honest actions… That, I think, sets a very different beginning to that policy. The other piece is that we sometimes create policies that, I think, promote more achievement gaps… and that actually gets back to the question you asked me earlier about equity, because many of our policies do not promote equity. I’ll take a late work policy, for instance… and I recognize the fact that we all need to be timely with our tasks. I mean, in the world of work people are going to expect you to complete tasks on time, and I recognize that that’s a very important skill. However, I also know that we are all human beings with a life on the side, you know, so life happens sometimes in ways that may prevent us from being on time with a task. I know I personally have not always been on time. I’m a very timely person, but there have been times when I have missed a deadline and haven’t been exactly where I needed to be at the time I should have been there. It’s not a pattern, but it does happen. So I think we need to have policies that build in some of the flexibility that communicates to students that we respect them… we recognize that they have a life outside of school, or at least outside of our class… sometimes our policies don’t even seem to recognize that they have other classes… like our class is the only one. So, students complain about that quite a bit… thinking that you’re looking at this only from your angle and not recognizing that this is one of many classes they’re taking. When you think about policies such as that, it’s important to communicate them in a way that isn’t taxing on your time, so that you’re not taking late work every minute of every day… but is respectful. So, a very simple way to do that is: “Here’s the policy. I expect you to be on time with tasks, especially if you’re doing a group task and your classmates are dependent on you.” (I tend to be a little bit more rigid with my policies when it’s a group-related task versus an individual task.) “But I also know that life happens, and if you are in a situation where you’re not able to meet a deadline, please come and talk to me.” Because, if you put in a policy that says no late work is accepted… everything must be handed in on time.
Well, certainly you won’t have to deal with any late work… that helps you with your time management, but it really is inequitable, because the student who comes from a culture where it’s fine to challenge authority might come to you and say “My grandmother passed away last week. I had this really horrific thing happen in my life…” and many of us… I know I myself… had at some point a no-makeup policy. It wasn’t a real policy… if you came to me and it was a good reason, then I gave you an extension. But I only did that if you came to me. I did have a no-makeup policy on the syllabus. So, the problem with that is that there are certain groups of individuals who are not going to challenge authority and will take your word at face value. So now you’ve put them at a distinct disadvantage in the class. So I think it’s important for our policies to do a couple of things: first of all, they should be accurate… so I did not really have a no-makeup policy… I had a “makeup if you have a good reason” policy. So, it wasn’t accurate. Now my policy is much more reflective of my current practices. I expect you to be on time, but if something happens, communicate with me and we will see what we can do. It’s not promising them the world, but it’s certainly promising them at least the conversation… and second, in addition to being clearly communicated, it really needs to be equitable, so that everyone gets an opportunity and it’s not a case-by-case situation where students who are more willing to challenge authority are more likely to get a positive outcome.

Rebecca: One of the things that you mentioned earlier is shifting the language in the syllabus to something that’s more personal from something that feels more like legalese or something. In those circumstances where an institution might impose a particular policy that’s written in a particular way that doesn’t match the voice of the rest of the document, how do you suggest dealing with that in a syllabus, when it might be required of you?

Christine: I think this is one of the big challenges that faculty face: when there are required elements that are not very motivational in nature. So first of all I would say try to start a conversation on your campus about revising that language, not necessarily the policy… that’s what the policy is going to be… but can we introduce it in a different way? So, I would say if you can do that, that would be ideal, because then that would benefit all of the students in all of the classes across your entire campus, if those elements are required. So I think that’s probably the point of intervention that I would encourage you to take, and you could go back and refer to some of the ideas in the text or talk with colleagues about other ways to better word some of that policy language. If you’re not able to switch the language, or you want a quicker fix while that conversation is happening, I think it’s appropriate (and again you need to find out on your campus if it is) to maybe have an introductory statement: “The next section of the syllabus is going to be the institutional policies that every faculty member needs to include.” So, you’re not saying they’re badly worded, but you’re saying that they’re different… like, you can definitely see that I need to include these, and I certainly wouldn’t have them (unless you’re required to) on the first page or two. Let the more positive motivational pieces be front and center and then have the policies come later in the syllabus. So having almost a cover page or some kind of introduction before getting into the more typical policy language, if you need to include it, I think can be helpful.

Rebecca: I’ve done things where, for example around intellectual integrity, there’s a campus statement (and I label it as such) and then my policy, which kind of interprets that and puts it into context and is in my language… so it’s the same idea that you have about an introductory statement… you’re kind of separating the two and making it clear whose is whose.

Christine: mm-hmm

Rebecca: …and I think that sometimes has helped students too… but I’ve always found that to be jarring.

Christine: Yes, I would agree; I think that definitely is… and I like your approach of summarizing it, because sometimes those policies that we get handed down are very lengthy and students probably aren’t going to read them. So, even if you gave a one- or two-sentence summary of what it means… the translation is… this is what you need to do: be honest, engage in honest action. It really matters. We all want a good reputation and we all want to learn. So in order to do that, these are the kinds of strategies that you need to engage in… and going back to the integrity topic, I think so many times students are unintentionally plagiarizing… not necessarily doing it on purpose. So maybe help them understand how they can learn to cite sources appropriately or how to paraphrase more effectively… So, point them to resources that are going to help them with all of those tasks.

John: In your book, in addition to providing a lot of great resources about the syllabus and a lot of great recommendations and the evidence behind them, you also provide some very nice sample syllabi at the end of the book, and it’s a great resource. Your publisher has very graciously provided a discount code for our listeners, which is DAMS20, and you can use it by going to the Stylus Publishing website. We’ll include a link to that in the show notes.

Christine: John, if you don’t mind I’d like to also share my website which is just www.scholarlyteaching.org. If you go to that website you will see several other teaching and learning resources, including several sample syllabi… and the syllabi that we used in the research study that I mentioned earlier on the length of the syllabus are provided there as well.

Rebecca: There are also really good videos… a syllabus checklist… There are some really great resources on the website. So I definitely recommend checking that out…

John: …and also information about your other books.

Christine: Yes, thanks for that John… appreciate it.

John: We always end our podcast with the question: what are you doing next?

Christine: Well, it’s interesting that you asked that. I am looking at options right now, but I am very much interested in staying connected to the teaching and learning space and how we can improve what we do in the classroom. I’m spending some time thinking about moving it up to a higher level and engaging administration in some of the conversations that we’re having about teaching and learning, and putting the teaching and learning centers front and center in conversations about student learning and engagement on campuses. So, for instance, I work in the community college system, as you know. There’s a national movement called Guided Pathways, and this national movement is all about improving the success outcomes of students, and it happens in a variety of ways. They talk about making sure our programs are clear, so that they’re defined and students know how to get from point A to point B. They talk about helping students choose a pathway and stay on a pathway, and they also talk about ensuring learning… but, having been a part of this national conversation, the “ensuring learning” piece really is very much an afterthought, I think, unfortunately. I feel like we’re dancing around the classroom. So I’d like to take some of this work that I’ve been doing, work that’s been very directly helpful to faculty, and try to shift it to being helpful to community college leadership as well as leadership at four-year universities… to emphasize the importance of good and effective teaching practices. So we’ll see where that takes me. I’m not really sure how that’s going to transpire, but I did just present on that topic at the POD conference: putting teaching and learning centers front and center in the Guided Pathways movement, getting them at the table of these conversations. So, I’m very interested in further pursuing that at this point.

Rebecca: That sounds like it could be really valuable to a lot of faculty, because translating that information for the administration is always really useful in finding support and all working together toward stronger outcomes for students.

Christine: Absolutely.

John: Well, thank you. This has been a fascinating conversation and this is a book I’m going to recommend to all of our faculty.

Rebecca: Yeah, so glad you were able to join us.

Christine: Well, thank you. I really appreciate the invitation and I hope that everyone listening is able to design motivational syllabi and if that happens our students are the ones who will benefit at the end of the day. So thank all of you for listening and for supporting students in their learning journey.

[Music]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

John: Editing assistance provided by Kim Fischer, Brittany Jones, Gabriella Perez, Joseph Santarelli-Hansen and Dante Perez.

54. SOTL

As faculty, we face a tradeoff between spending time on teaching and on research activities. In this episode, Dr. Regan Gurung joins us to explore how engaging in research on teaching and learning can help us become more productive as scholars and as educators while also improving student learning outcomes. Regan is the Ben J. and Joyce Rosenberg Professor of Human Development in Psychology at the University of Wisconsin at Green Bay; President-Elect of the Psi Chi International Honor Society in Psychology; co-editor of Scholarship of Teaching and Learning in Psychology; co-chair of the American Psychological Association Introductory Psychology Initiative and the Director of the Hub for Intro Psych and Pedagogical Research.

Show Notes

Transcript

John: As faculty, we face a tradeoff between spending time on teaching and on research activities. In this episode, we explore how engaging in research on teaching and learning can help us become more productive as scholars and as educators while also improving student learning outcomes.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

[MUSIC]

Rebecca: Our guest today is Dr. Regan Gurung, the Ben J. and Joyce Rosenberg Professor of Human Development in Psychology at the University of Wisconsin at Green Bay; President-Elect of the Psi Chi International Honor Society in Psychology; co-editor of Scholarship of Teaching and Learning in Psychology; co-chair of the American Psychological Association Introductory Psychology Initiative and the Director of the Hub for Intro Psych and Pedagogical Research. Welcome.

John: Welcome.

Regan: Thanks a lot, Rebecca and John.

John: Our teas today are…

Rebecca: I’m drinking Prince of Wales today.

Regan: Alright.

John: I’m drinking ginger tea.

Regan: Ooh, now you’re making me want to. [LAUGHTER]

John: We’ve invited you here today to talk about research in the scholarship of teaching and learning, or SOTL. You’ve conducted a lot of research on teaching and learning as well as research within your discipline. In most disciplines there has been an increase in the journals devoted to teaching and learning and an increase in research in teaching and learning, but it hasn’t reached everywhere yet. SOTL research is often not discussed in graduate programs and is sometimes devalued by campus colleagues. Why does that occur?

Regan: So, I think there are multiple reasons why, and I’m going to start with the devaluing. I think there’s a lot of uncertainty about what exactly it is, so on one hand, when people say the scholarship of teaching and learning… very often, if it’s somebody who hasn’t really read up on it recently, the sense is, oh, you know, that’s research on teaching; that’s not as good as your regular research. Now, I think that’s a misperception, and once upon a time, and here I mean maybe even 15 years ago, there was some scholarship of teaching and learning that wasn’t done very well, and I think people have heard about that in the past and that’s why there’s that knee-jerk reaction. Far too often it’s seen as something where it’s not as rigorous, perhaps, or it’s not done in the same way, and most of that is wrong. What I like to tell folks who see it that way is, if you think the scholarship of teaching and learning is not rigorous, well, you haven’t tried to submit something to a journal recently. I co-edit a journal on the scholarship of teaching and learning in psychology and I do see some people submit poor work, and I send it right back; I do the classic desk rejection and I say, look, this is just not good enough. So my favorite tip for “How do you write for a scholarship of teaching journal?” is very simple: just like you write anything else. There’s a lot of baggage, but I think that, as you alluded to, John, it has changed more recently, and I think part of what you notice now, or what I’ve been seeing, is that this kind of work, this kind of examination, is being called different things. For example, a term that I’m hearing more and more often is DBER: discipline-based education research. And I’m hearing this come out of medical schools and engineering schools and social work schools and many professional programs where they’re doing DBER, which is essentially what the scholarship of teaching and learning is. So, I think because of that baggage with the term, people are calling it different things, but in general the work is getting much more rigorous.

John: Excellent, and if changing the name is sufficient to do that, it’s a valuable step.

Regan: I think that’s why, when I talk about it I like to talk about it as: “Do you want to know if your students are learning? Do you want to know if your teaching is effective?” Well, then you should do some research on it. You can call it what you want. I started really calling it pedagogical research because that’s what it was, but it’s truly a rose by any name.

John: And that’s something that Carl Wieman has emphasized.

Regan: Absolutely, yup.

John: In the sciences, you test hypotheses and there’s no reason we couldn’t do the same thing in our teaching.

Regan: Exactly.

John: And that’s starting to happen, or it’s happening more and more.

Rebecca: In some disciplines, the scholarship of teaching and learning is not accepted as being part of their tenure and promotion file, for example. What would you recommend faculty do in a department like that if they really want to get started in SOTL?

Regan: Well, so, Rebecca, let me take you a half step back.

Rebecca: Yeah.

Regan: When you say “in some disciplines it isn’t as accepted”… what has surprised me is that most disciplines have actually been doing the scholarship of teaching and learning and publishing it for the longest time. I mean, if you take a look at chemistry, it goes back, gosh, seventy years or so. Almost every discipline out there has a journal that publishes the scholarship of teaching and learning, but, and here’s the big but: most of us in our normal training never run into it. So, I’ll take my own case. In psychology, the journal Teaching of Psychology has been around for 46 years, yet all through grad school, all through my post-doc, I never even knew the journal existed. Why? Because the programs that I went through weren’t focused on teaching; the individuals (wonderful as they may be) who I worked with didn’t do that kind of work, so they didn’t know about it. So I think that’s a really important fine-tune there: there is a journal in almost every discipline (almost every discipline) for the scholarship of teaching and learning. So, it’s just a question of discovering it… it’s a question of finding it. Now, that said, where can they start? I think I can answer your question from a conceptual level and from a practical level, so I’ll start with the practical. The easiest place to start: there are lots of compilations of how to do it. For example, I think both of you have my website. On my website I have a simple tab called SOTL. On that tab is a list of places to get going, and I’ve organized it so that there’s a brief introduction to SOTL, there are journals, there are resources, there are little handouts. So, if a faculty member has even ten minutes, go to my website, hit SOTL, scroll through. That’s the more practical, that’s the easiest way to get started. From a conceptual standpoint, it really starts with the question: what aspect of your teaching or your student learning are you curious about? John, I know you do some work in large-class instruction in economics. Why is this assignment not working? Can I get my students to remember certain concepts better if I change how I present information? It starts with a question. And you don’t have to read anything, you don’t have to look at any manual. If you look at your class and you go, “Hmmm, why isn’t this working, or why isn’t that working?” That’s where it begins, and from there you follow the same route that we always do: go look at what’s been published on it, fine-tune your question, design, think about what you want to change, and so on and so forth. I think it’ll help if I give you my working definition of the scholarship of teaching and learning, and when I think about it I think of SOTL as encompassing the theoretical underpinnings of how we learn. And more specifically, I see it as the intentional and systematic modification of pedagogy and, here’s the important part, the assessment of the resulting changes in learning. So that’s the key: you intentionally, systematically modify what you’re doing and then you measure whether it worked or not. That’s it. I can say that nonchalantly. There’s a technique, there’s a robustness to it, but at the heart, where do you start? You start by asking the question.

Rebecca: I think one of the things that I hear you saying is that it’s not much different from someone who has a really reflective teaching practice… they’re doing it, but not in that systematic way?

Regan: Yeah, absolutely right. There’s a term called scholarly teaching, and in this kind of literature there’s a distinction made between scholarly teaching and the scholarship of teaching and learning, and the only distinction is that the scholarly teacher is reflecting on their work and then, you’re right, you’re absolutely right, making those intentional, systematic changes. That’s scholarly teaching. When it becomes the scholarship of teaching and learning is when you present it or you publish it, preferably through peer-reviewed venues, but you’re absolutely right; at the heart of it, it’s scholarly teaching. It’s reflective, intentional, systematic changes.

John: One of the barriers for people who are considering doing research in the scholarship of teaching and learning is going through IRB approval, and in many disciplines that’s something they haven’t experienced before. It’s common in psychology. It’s less common in economics and perhaps in art.

Rebecca: It doesn’t exist in design. [LAUGHTER]

John: Could you tell us a little bit about that process?

Regan: Sure. Every university has an institutional review board, and essentially that board is in place to make sure that any research that’s being done isn’t harmful. Now, normally when we think about harmful we think about a drug or a food substance being tested, but here it just means any research that’s being done, and so when you do the scholarship of teaching and learning, or when you’re examining your classes, yes, you could just look at your exams and see if exam scores are changing, but if you do want to publish that, if you do want to share that, you really should go through institutional review board review. Now, the key thing here: it does sound like this whole new world, and it is, but at the heart of it is a very simple process. There are three levels of review, and I think knowing about the levels helps. The first level is called an exempt review. The next level is called an expedited review, and the third level is called a full board review. I don’t think I’ve run into scholarship of teaching and learning that has gone through a full board review, because we’re not doing things that involve more than a minimal level of stress. Now you might say, hey, hang on, I didn’t know there was stress involved. Well, anytime you ask anybody to fill out a survey, there’s a minimal level of stress. And when you’re asking your students to reflect on their learning, well, that’s a minimal level of stress. Every university has its own procedure. SUNY Oswego probably has a form online. It’s a short form; you’re basically telling this board what you plan on doing, what you plan on doing with the information, and most importantly, in these kinds of cases, you are letting the board know whether or not students will be put under duress. What the IRB is going to look for is whether you, the instructor, are in some way forcing your students to do things that wouldn’t normally be done in the normal course of the educational process. But at the heart of it, all you’re doing is sharing with this board what you plan to do so they can decide whether or not you can do it, and most scholarship of teaching and learning is at that exempt level. That exempt level essentially translates to exempt from further review. It doesn’t mean exempt from being reviewed; it just means this is mundane and low stress enough that it’s exempt from further review. Now, that second level, expedited: if you do want to measure or keep track of names, if you want to look at how certain names relate to scores down the line (and that’s actually some really key research), well, that’s expedited review. Even there it’s reviewed by one person. Both the expedited and the exempt review are reviewed by one person, often the chair. It often takes no longer than a week, and by doing that you just know that all your t’s are crossed and your i’s are dotted, and it’s the ethical thing to do. So, whenever people say, “Oh, this is really mundane and I’m not really doing much more than just measuring student learning,” I still say if there’s any chance you want to present it or publish it, make sure you go through the IRB.

John: And many journals will require evidence of completion of the IRB process.

Regan: Oh, absolutely. The moment you want to publish it you have to sign off saying that you got IRB review.

John: We do use an expedited review process on our campus. I was going to say, though, that we’re recording this a bit early because we’ve recorded a few things in advance, so we’re recording this in late October, but just yesterday I read that Rice University has introduced a streamlined expedited review process for IRB approval, and apparently that’s something that’s been happening at more and more campuses. Are you familiar with that?

Regan: You know, not as much, because right now there’s so much up in the air with the IRB because national guidelines are changing. They were supposed to have changed in January, then it was moved to July. The latest I heard is it’s moved to next January. So, for the most part, actual regulations are changing. Even on our own campus we switched from one form of human subjects training to another, but this so-called short-form expedited process will definitely help. That said, even the regular expedited review is a very easy process, and I think the neat thing about this, and I tell students this when I’m teaching research methods too, is that as the instructor or the researcher, just going through that IRB form really reminds you of some key things that you may have otherwise forgotten about, so, yes.

Rebecca: Could you talk a little bit about your own research to give people an overview of what a project might look like from beginning to end?

Regan: Sure. What really got me interested in this is that I teach large introductory psychology classes; the class is 250 individuals, and I was struck by how, when publisher reps came into my office and tried to convince me to adopt one book over another, they would talk about the pedagogical aids in the textbook: “oh, look, our book has this and our book has that.” And that really got me started studying textbooks and how students use textbooks. So the umbrella under which I do research is student studying: what’s the optimal way for students to study? …and I use both a social psychology and a cognitive psychology lens or approach to it, and it really started with looking at how they use textbook pedagogical aids. So, for example, in one of my really first studies I measured which of the different aids in a textbook the students use and then I used their usage to predict their exam scores. Now, what I found, and this is what really surprised me and got me doing this even more, is that students were using and focusing on key terms a lot. Now, mind you, I’ll take a half step back: you may not be surprised to know that students use bold terms, they use italics; that’s what they focus a lot on. But students in my study also said that they use key terms a lot. Now, if you’re studying key terms, that should be good. If you’re making flashcards and studying those key terms, that should be good, but what I found is that the more students used key terms, the worse their exam scores were. There was this negative correlation, and that’s completely counterintuitive. Why would it go in the opposite direction? So, I dug into it some more and I realized that students spend so much time on key terms or so much time on flashcards that they’re not studying in any other way. So even though they’re using flashcards, they’re so intent on memorizing and surface-level processing that they’re not doing deeper level processing. So, that was some years ago and I’ve been building on that, trying to unpack how students study. In my most recent study… that’s actually under review right now… a colleague, Kate Burns, and I took two of the most recommended cognitive psychology study techniques, which are repeated practice, or testing yourself frequently, and spaced practice, or spacing out your studying, and we took both of these and, across nine different campuses, divided up classes such that the students in those classes were either using high or low levels of each of these. So, in one study across multiple campuses we tested: is there a main effect of one of these types of studying, or is there an interaction? And what we found is that there is an interaction, and the critical component seems to be spacing out your studying. Not so much even repeating your studying, but really spacing out your studying, and I think what’s interesting here is the reason this is happening: the students who said that they were testing themselves repeatedly, that sounds great, and if you’re a cognitive psychologist you say, hey, the lab says repeated testing is great; the problem is that in the classroom a lot of students who were repeatedly testing themselves were doing so during a really short period of time.

John: Right, I’ve seen that myself.

Regan: And I think that’s the issue, but because we had both of these factors in the study, we could actually tease that out. So that’s the kind of work that I do… take a look at what the cognitive lab says is important and see how it works in the actual classroom.

John: Now was this a controlled experiment? Or was this based on the students’ behavior?

Regan: So, yes and no, okay. [LAUGHTER] I love this study for a number of reasons. Number one, we tested two different techniques in the same study. Number two, we did it at multiple institutions, so it’s not just my classroom. A lot of SOTL is one class. So, here we went beyond that to try and generalize. But, to get to your question, we actually used a true experimental design. So we recruited these different campuses and we assigned each classroom a condition. So, for example, I’d say, “Hey John, thanks for taking part. Can you have your students do high repetition and high spacing?” “Hey Rebecca, thanks for taking part. Could you have your students do high repetition and low spacing?” And that’s how we spread it out. We had about two campuses in each of these cells. That’s the true experiment on paper. To get to the other part of what you said… in reality, that’s not exactly what students always did. And you know students; we can tell them to do something but a whole bunch of things gets in the way. Fortunately, of course, we measured self-reports of what students said they actually did, and it was relatively close to the study cells, but even though it varied a little bit we could still control for it. So, yes, it was as close to a controlled study as you can get when you’re controlling something in the real world across nine campuses.

John: That brings us to the general question of how you construct controls. Suppose that you make a change in your class; how do you get the counterfactual?

Regan: Right.

John: What would be some examples for people designing an experiment?

Regan: The word control, especially in research, carries the connotation of a control group, and controlling for factors is different from having a control group. Optimally we’d love a control group. The problem with a control group is that it means no treatment. So, very often a true control group means this group of students is not getting something. From a philosophical and an ethical standpoint, I don’t like the notion of one group not getting something. So, the word I like to use is comparison group. So, your question still holds, but what’s the comparison group? I think here’s where, if you’re fortunate enough to teach multiple sections, well, one of the sections can be the comparison group. If you’re not fortunate enough to have multiple sections, you compare the students this semester with the students last semester, when you weren’t doing that new, funky innovation. So, there are a bunch of different ways to gather the comparison group, but you’re absolutely right: having a comparison group is important. Most commonly in the scholarship of teaching and learning, the comparison is the students before the intervention, so it’s a classic pre- and post- measure. I’ll give you this quiz before I’ve introduced the material, I give you an equivalent quiz after, and let’s see if there are changes in learning. And that’s the most common comparison; you’re comparing them with themselves before, but optimally, again, you want a different section, you want a group of students from a different semester, and so on.

John: And it’s best if you have some other controls…

Regan: Absolutely.

John: for student ability and characteristics.

Regan: You nailed one of the key ones… my two favorites are effort and ability. As much as possible, measure their GPA. If they’re first-year students, measure their high school ACT scores or their high school GPA, and then you have to measure effort, and I think those two are probably the usual suspects for control. And again, a lot of SOTL doesn’t do that, and it should.

Rebecca: I think one thing that comes up a lot for me (and maybe for others who are in disciplines more similar to my own) is that the kind of research that we do is generally not this kind of research, but we’re really interested in what’s happening in our classrooms. So, for faculty who might be in the arts or some other area where we’re doing really different kinds of research, how would you recommend being able to partner or do this kind of work without that background?

Regan: And I think implicit in your question is the “Do I need to have a certain methodological tool bag?” I remember I was at a conference once and somebody accosted me and said “Hey, is it true that you have to be a social scientist to do this work?” And the answer is no, and I wrote a pretty funky essay called “Get Foxy,” which is about how social scientists can benefit from the methodologies of the humanists and vice versa. But, you’re right; you can collaborate if you need to do that kind of work, but there are a lot of questions even within your discipline… and when I think about SOTL I think about answering questions about teaching and learning with the tools of your discipline. Now, I’ll give you an example: a good friend of mine is in art, and her project, or something that she wanted to dig into, was to improve student critiques in an art class. Here we have students learning how to do art (and I think it was drawing or jewelry making), and across the course of the semester everybody had to present their work and then critique each other’s work… and those critiques just didn’t have the teeth that she wanted them to, so she was giving them skills in how to do it. So here’s a case of: how did she know whether or not the critiques were improving? Well, she came up with a simple rubric to score them against and looked at whether the scores changed. Now, you may say, well, very often in the arts and theater you don’t get the skills to do that, which is true, but that’s where I think collaboration comes in, and that’s why what’s really neat about the scholarship of teaching and learning is that very often there are collaborations. I have a historian on my campus who wanted to improve the quality of his students’ essays, and he and David Voelker changed how he was teaching, wanted to see how it rolled out, and had students use themes in their essays in a different way. Well, he compared, and John, this goes back to your point, he compared essays from before the change with essays from after the change, counted up the number of themes students had, and then, Rebecca, to your point, went over to my colleague in psychology and said, hey, can you tell me if this is statistically different? So, he didn’t even bother with doing the stats; he just said, “Hey look, I don’t need to do the stats.” But you can do it with a click… literally within minutes my colleague in psychology had done the stats for him. I think that’s the kind of stuff that can happen to truly get at those answers if you go, “You know, I don’t know how to do that.” But, you’d be surprised… the basic skills for SOTL can give you enough to test questions pretty well.

Rebecca: I think John and I have also found in the teaching center that it’s really exciting when faculty from different disciplines start talking about their research on learning, because there are things that we can learn from each other, and the more we talk across disciplines, the more valuable it can be.

Regan: Right, and I think this is where reading the rich literature on the scholarship of teaching and learning that exists in your discipline, or even across disciplines, really gives you a leg up, because I find now, when I do workshops and somebody says, “You know, I’ve got this question; I don’t know how to start,” more often than not it’ll remind me of a study and I can say, hey, here’s what you can do. And it’s just because I read a lot and I’ve got all that in my head, so I just match it to that question and it’s pretty easy. I mean, very rarely do we have to invent something from scratch. We go, “Hey, yeah, you know what? Here’s a study that’s pretty close to the question you have; let’s use that methodology.”

Rebecca: So, how do we build a culture of the scholarship of teaching and learning in departments that might have faculty who are resistant to the idea of their colleagues spending their time doing that? How do we start changing minds and really building a culture that embraces the idea of the scholarship of teaching and learning?

Regan: Well, I think you’ve got to attack it from two different levels. You definitely want a champion in the administration who is educated enough about the scholarship of teaching and learning and how it can be done robustly. If you can convince somebody of its worth, and then if you go “How do you do that?” …well, that’s where you need to make sure you have at your fingertips, as a teaching and learning center, exemplars of really robust work… and I think if you have that really robust work at your fingertips, that’s definitely a key place to start. One of my favorite examples along those lines of trying to convince people (especially administrators) about the worth of the scholarship of teaching and learning: I recommend a 2011 publication by Hutchings, Huber, and Ciccone called The Scholarship of Teaching and Learning Reconsidered, and this 2011 publication is a great collection. It does your homework for you. That one book pulls together evidence for why the scholarship of teaching and learning helps students, helps faculty, helps institutions. So that’s the top-down approach: get your administrators to check that book out and go, “Oh yeah, look, there is actually some good research.” Coming at it from the other angle, and I know this for a fact, there are people on your campus doing some of that work, but often they may be isolated, they may be a small group. You want to strengthen them so that they can spread that to their circles, and that’s really how it starts. On my campus, when Scott was the Dean at Green Bay, we did a lot to develop the scholarship of teaching and learning through the teaching center. There was one year where we had 14 faculty who got together every month and talked about their projects. Now you may say, well, that’s 14 and you had 160 faculty. You know what, if you have 10 or 14 working on this every year and colleagues see the value of the work those 10 or 14 are doing, pretty soon you’re gonna have a culture where people recognize it more and appreciate it more. So I think that’s how it goes… you put your efforts into the people who are already doing it to make them stronger, and that’s gonna spill over and pretty soon you’re gonna win over folks.

John: We’ve generally had support from the upper administration and there have often been a lot of new faculty who are interested in doing it; it’s usually the promotions and tenure committees that have served as a barrier in some departments, but we’ll work on that and we need to keep working on that.

Regan: Well, just along those lines on our campus we felt so strongly about the scholarship of teaching and learning that the Faculty Senate actually passed a resolution recognizing the importance of scholarship of teaching and learning. Now again, it still gave department chairs some leeway, but at least the faculty voted on it as something that the university values and that goes a really long way to having especially junior faculty say, you know, I can do this.

Rebecca: Certainly makes faculty, especially junior faculty, feel supported when the Senate is saying, “Yes, we believe in this” and it’s not just one person saying we don’t.

Regan: Absolutely. And there will be naysayers. We started off this conversation with “There are people out there who think it’s not good enough,” and there are people out there, but I’ve had conversations with such people on my campus where sharing some information, sharing things about how it’s done, goes a long way towards changing minds.

John: In my department, it’s helped that I’ve been the chair of our search committee for a few decades now. We’ve generally hired people who are interested in this. That’s not the case in all of our departments yet, but we’re hoping that’ll change. For those who have small classes or may not be interested in doing research in their own classes, one other option is meta-analysis. Could you talk a little bit about that?

Regan: So, a meta-analysis is where one study takes a look at a lot of different studies, and there is a mother of all meta-analyses that we should talk about, because I think the interested person can run to it. John Hattie, now at the University of Melbourne, did a meta-analysis… actually a meta-meta-analysis; he took 900 meta-analyses and then synthesized the data from those 900 studies that had already synthesized data, and the reason I like talking about that is the sample size, when you take all those 900 meta-analyses, is a quarter of a billion with a “b”; that’s a lot of data points, it’s a lot of students. And what’s neat about meta-analyses is that instead of just being one study at one place, it’s now multiple studies over multiple contexts, and if you can find an effect over multiple contexts, that’s really saying something, because a lot of single studies are so geared into the local context of where that place is. So if anybody listening pulls up an educational journal or an SOTL journal and sees meta-analysis in the title, I would spend more time reading that one, because it’s gonna be more likely to generalize. So, I think statistical and methodological advances now mean that there are more meta-analyses around, and more meta-meta-analyses around as well.

Rebecca: As an advocate for the scholarship of teaching and learning, where do you hope the scholarship of teaching and learning goes in the next five years?

Regan: Honestly, I think it should be a part of every teacher’s repertoire. When I think about a model teacher, and it’s not just when I think about it—I’ve published on evidence-based college and university teaching and when my co-authors and I looked at all the evidence out there and what makes a successful university teacher… one of those components, and we found six… I mean, it wasn’t just student evaluations, no, it was your syllabi, it was your course design, but one big element was doing the scholarship of teaching and learning… and to answer your question, I think if in five years from now we can see it be part of teacher training to look at your class with that intentional systematic lens, I think that’s where the field needs to get to.

John: At the very least it would get people to start considering evidence-based teaching practices instead of just replicating whatever was done to them in graduate school.

Regan: Absolutely. People would be surprised at how much good SOTL there is out there, and I always like sending folks to the Kennesaw State Center for Teaching and Learning, where they have a list of journals in SOTL in essentially every field. You will scroll through that list for ages and it is just mind-boggling to realize that, “Wow, SOTL has been going on for a very long time.” And Rebecca, you mentioned art and performance arts and theater and music… not as much, but even there, there is a fair amount, and I think it’s just a question of making those resources more available to individuals, and that’s why whenever I interact with teaching and learning centers I have a short list of key resources to look at. And again, that’s on my SOTL link. But even that small list is an eye-opener to most people who never knew this existed, and I think once they realize it’s there they will start seeing it everywhere, and once you start doing it it really energizes you. For those of us who’ve been teaching for 20-plus years, to look at our classes with that new eye of how can I change something, how can I make it better, and then seeing the positive effects of those changes, that’s invigorating.

Rebecca: I’m energized after having this conversation.

Regan: It is good stuff.

Rebecca: Yeah.

Regan: I just got back from a three-day conference and all we did was sit around and talk about cool SOTL. And you’re right …came back and sitting on the plane I was texting people with study ideas to collaborate on. It was that exciting.

Rebecca: The more you talk… collaborate… the more it happens.

Regan: There you go.

Rebecca: So, we always wrap up by asking, what’s next?

Regan: You know, I think I like getting the bang for my buck, and you mentioned this in the intro: right now I’m working on the American Psychological Association’s Introductory Psychology Initiative, and what’s next is basically two years of really focusing on the introductory psychology course. It’s taken by close to 1.5 million students a year, and I’d like to make sure we can make that course the best possible learning experience for our students, so that’s where my energy is gonna be for the next little bit.

John: That’s a big task and a very useful one.

Rebecca: And definitely worthwhile. Well, thank you so much for spending some time with us this afternoon. It’s been eye-opening and exciting… energizing. I can’t wait to look through some of the resources.

Regan: You know, if there’s anything else that you’d like, get in touch, and I welcome anybody listening to get in touch as well.

John: Thank you, and we’ll share links to the resources you mentioned in the show notes.

Regan: Sounds good.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts, and other materials on teaforteaching.com. Music by Michael Gary Brewer.

John: Editing assistance provided by Kim Fischer, Brittany Jones, Gabriella Perez, Joseph Santarelli-Hansen, and Dante Perez.

[MUSIC]

49. Closing the Performance Gap

Sometimes, as faculty, we are quick to assume that performance gaps in our courses are due to the level of preparedness of students rather than what we do or do not do in our departments. In this episode, Dr. Angela Bauer, the chair of the Biology Department at High Point University, joins us to discuss how community building activities and growth mindset messaging combined with active learning strategies can help close the gap.

Show Notes

  • “Success for all Students: TOSS workshops” – Inside UW-Green Bay News (This includes a short video clip in which Dr. Bauer describes TOSS workshops)
  • Dweck, C. S. (2008). Mindset: The new psychology of success. Random House Digital, Inc.
  • Barkley, E. F., Cross, K. P., & Major, C. H. (2014). Collaborative learning techniques: A handbook for college faculty. John Wiley & Sons.
  • Life Sciences Education
  • Steele, C. M., & Aronson, J. (1995). Stereotype threat and the intellectual test performance of African Americans. Journal of Personality and Social Psychology, 69(5), 797.
  • Steele, C. M. (1997). A threat in the air: How stereotypes shape intellectual identity and performance. American Psychologist, 52(6), 613.
  • The Teaching Lab Podcast – Angela Bauer’s new podcast series. (Coming soon to iTunes and other podcast services)

Transcript

Coming Soon!

37. Evidence is Trending

Faculty are increasingly looking to research on teaching and learning to make informed decisions about their practice as a teacher and the policies their institutions put into place. In today’s episode, Michelle Miller joins us to discuss recent research that will likely shape the future of higher education.

Michelle is Director of the First-Year Learning Initiative, Professor of Psychological Sciences, and President’s Distinguished Teaching Fellow at Northern Arizona University. Dr. Miller’s academic background is in cognitive psychology. Her research interests include memory, attention, and student success in the early college career. She co-created the First-Year Learning Initiative at Northern Arizona University and is active in course redesign, serving as a redesign scholar for the National Center for Academic Transformation. She is the author of Minds Online: Teaching Effectively with Technology and has written about evidence-based pedagogy in scholarly as well as general interest publications.

Show Notes

Transcript

Rebecca: Faculty are increasingly looking to research on teaching and learning to make informed decisions about their practice as a teacher and the policies their institutions put into place. In today’s episode we talk to a cognitive psychologist about recent research that will likely shape the future of higher education.
[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

[MUSIC]

John: Our guest today is Michelle Miller. Michelle is Director of the First-Year Learning Initiative, Professor of Psychological Sciences, and President’s Distinguished Teaching Fellow at Northern Arizona University. Dr. Miller’s academic background is in cognitive psychology. Her research interests include memory, attention, and student success in the early college career. She co-created the First-Year Learning Initiative at Northern Arizona University and is active in course redesign, serving as a redesign scholar for the National Center for Academic Transformation. She is the author of Minds Online: Teaching Effectively with Technology and has written about evidence-based pedagogy in scholarly as well as general interest publications.
Welcome, Michelle!

Michelle: Hi, I’m so glad to be here.

Rebecca: Thanks for joining us.
Today’s teas are:

Michelle: I’m drinking a fresh peppermint infused tea, and it’s my favorite afternoon pick-me-up.

Rebecca: …and it looks like it’s in a really wonderfully designed teapot.

Michelle: Well, thank you… and this is a thrift store find… one of my favorite things to do. Yeah, so I’m enjoying it.

John: I have Twinings Blackcurrant Breeze.

Rebecca: …and I’m drinking chai today.

Michelle: Pretty rough.

John: We invited you here to talk a little bit about what you’ve been observing in terms of what’s catching on in higher education: new and interesting innovations in teaching.

Michelle: Right, that’s one of the things that I really had the luxury of being able to step back and look at over this last semester and over this last spring when I was on sabbatical… One of the really neat things about my book Minds Online, especially now that it’s been out for a few years, is that it does open up all these opportunities to speak with really engaged faculty and others, such as instructional designers, librarians, academic leadership, educational technology coordinators… all these individuals around the country who are really, really involved in these issues. It’s a great opportunity to see how these trends, how these ideas, how these innovations are rolling out, and these can be some things that have been around for quite some time and just continue to rock along and even pick up steam, and some newer things that are on the horizon.

John: You’ve been doing quite a bit of traveling. You just got back from China recently, I believe.

Michelle: I sure did. It was a short visit and I do hope to go back, both to keep getting involved in educational innovations there and, hopefully, as a tourist as well. So, I was not there for very long, but I had the opportunity to speak at Tsinghua University in Beijing, which is a really dynamic institution that’s been around for about a hundred years. For a while in its history it specialized in polytechnic and engineering education, but now it’s really a selective comprehensive university with very vibrant graduate and undergraduate programs that are really very relatable for those of us in the United States working in similar contexts. My invitation was to be one of the featured speakers at the Future Education, Future Learning Conference, which was a very interdisciplinary gathering of doctoral students, faculty, even others from the community, who were all interested in the intersection of things like technology, online learning, MOOCs even, and educational research (including research into the brain and cognitive psychology), and bringing all of those together… and it was a multilingual conference. I do not speak Chinese, but much of the conference was in both English and Chinese, and so I was also able to really absorb a lot of these new ideas. So yes, that was a real highlight of my sabbatical semester and one that I’m going to be thinking about for quite some time.

I should say that part of what tied in there as well is that Minds Online, I’ve just learned, is going to be translated into Chinese and that’s going to come out in May 2019. So, I also got to meet with some of the people who were involved in the translation… start to put together some promotional materials such as videos and things like that.

Rebecca: Cool.

John: Excellent.

Rebecca: So, you’ve had a good opportunity, as you’ve been traveling, to almost do a scavenger hunt of what faculty are doing with evidence-based practices related to your book. Can you share some of what you’ve found or heard?

Michelle: This theme of evidence-based practice, and really tying into the findings that have been coming out of cognitive psychology for quite some time, that really is one of the exciting trends and something that I was really excited to see and hear from so many different quarters as I visited different institutions… and so I would say definitely, this is a trend that is continuing and is increasing. There really does continue to be a lot of wonderful interest and wonderful activity around these really cognitively informed approaches to teaching, and what I think we could call scientifically based and evidence-based strategies. One form this has taken is Josh Eyler’s new book, called How Humans Learn: The Science and Stories behind Effective College Teaching. This is a brand new book by a faculty development professional, and a person coming out of the humanities, actually, who’s weaving together, even from his humanities background, everything from evolutionary biology to classical research in early childhood education to the latest brain-based research. He’s weaving this together into this new book for faculty. So, that’s one of the things that I’ve noticed, and then there’s the issue which I think is another great illustration of best-known practice, which is the testing effect and retrieval practice.

John: One of the nice things is how so many branches of research are converging… testing in the classroom, brain-based research, and so forth, are all finding those same basic effects. It’s nice to see such robust results, which we don’t always see in all research in all disciplines.

Rebecca: …and just breaking down the silos in general. The things are all related and finding out what those relationships are… exploring those relationships… is really important and it’s nice to see that it’s starting to open up.

John: We should also note that when you visited here, we had a reading group and we had faculty working on trying to apply some of these concepts, and they’re still doing that… and they still keep making references back to your visit. So, it’s had quite a big impact on our campus.

Michelle: This wasn’t true, I don’t think, when I first entered the teaching profession… and even when I first started getting interested in applied work in course redesign and in faculty professional development, you would get kind of this pushback or just strange looks when you said “Oh, how about we bring in something from cognitive psychology,” and now that is just highly normalized and something that people are really speaking about across the curriculum… and taking it and running with it in a lasting, ongoing way, not just as a “Oh, well that was an interesting idea. I’m going to keep doing what I’m doing,” but really people making some deep changes, as you mentioned. This theme of breaking down silos… I mean, I think if there’s kind of one umbrella trend that all of these things fit under, it’s that breakdown of boundaries. So, that’s one that I keep coming back to, I know, in my work.

So, the idea of retrieval practice, drilling down on that one key finding, which goes back a very long way in cognitive psychology. I think of that as such a good example of what we’re talking about here… how this is a very detailed effect in cognition and yet it has these applications across disciplinary silos. Now when I go to conferences and I say “Okay, raise your hand. How many people have ever heard of retrieval practice? How many people have ever heard of the testing effect? How many people have heard of the book Make it Stick (which really places this phenomenon at its center)?” I’m seeing more hands raising.

With retrieval practice, by the way, we’re talking about that principle that taking a test on something, that retrieving something from memory actively, has this huge impact on the future memorability of that information. As its proponents like to say, tests are not neutral from a memory or from a learning standpoint… and while some of the research has focused on very stripped-down, laboratory-style tasks like memorizing word pairs, there are also other research projects showing that it does flow out to more realistic learning situations.

So, more people simply know about this, and that’s really the first hurdle, oftentimes, with getting this involved, sometimes jargon-riddled disciplinary research out there to practitioners and getting it into their hands. So, people have heard of it and they’re starting to build this into their teaching. As I’ve traveled around, I love to hear some of the specific examples and to see it crop up in the scholarship of teaching and learning as well.

Just recently, for example, I ran across and really got into the work of Bruce Kirchhoff, who is at the University of North Carolina at Greensboro, and his area is botany and plant identification. He has actually put together some different, really technology-based apps and tools that students and teachers can use in something like a botany course to rehearse and review plant identification. He says in one of his articles, for example, that there just isn’t time in class to really adequately master plant identification. It’s just too complex of a perceptual and cognitive and memory task to do that. So, he really built in from the get-go very specific principles drawn from cognitive psychology… so, the testing effect is in there… there are different varieties of quizzing, and it is all about just getting students to retrieve and identify example after example. It also brings in principles such as interleaving, which we could return to in a little bit, but it has to do with the sequencing of different examples… their spacing… So, it’s even planned out exactly how and when students encounter the different things that they’re studying. It’s really wonderful. So, for example, he and his colleagues put out a scholarship of teaching and learning article talking about how this approach was used effectively with veterinary medicine students who have to learn to identify poisonous plants that they’ll see around their practice. This is something that can be time-consuming and very tough, but they have some good data showing that this technology-enhanced, cognitively based approach really does work. That’s one example. Coincidentally, I’ve seen some other work in the literature, also on plant identification, where the instructors tagged plants in an arboretum… they went around and tagged them with QR codes… so that students can walk up to a plant in the real environment with an iPad… hold the iPad over it… and it would immediately start producing quiz questions that were specific to exactly the plant they were looking at.
So, those are some of the exciting things that people are taking and running with now that this principle is out there.

Rebecca: What I really love about the two stories that you just shared was that the faculty are really designing their curriculum and designing the learning experiences with the students in mind… and what students need and when they need it. So, not only is it employing these cognitive science principles, but it’s actually applying design principles as well. It’s really designing for a user experience and thinking about the idea that if I need to identify a plant, being able to practice identifying it in the situation in which I would need to identify it makes it much more dynamic, I think, for a student… but also really meets them where they’re at and where they need it.

John: …and there are so many apps out there now that will do the plant identification just from imagery without the QR code, that I can see them taking it one step further where they can do it in the wild without having that… so they can build it in for plants that are in the region without needing to encode that specifically for the application.

Michelle: I think you’re absolutely right once we put the technology in the hands of faculty who, as I said, are the ones who know: “Where are my students at? Where are the weak points? Where are the gaps that they really need to bridge?” and that’s where their creativity is giving rise to all these new applications… and sometimes these can be low-tech as well… or also things that we can put in a face-to-face environment… and I’d like to share just some experiences that I’ve had with this over the last few semesters.

In addition to trying to teach online with a lot of technology, I also have in my teaching rotation a small required course in research methods in psychology, which can be a real stumbling block… the big challenge course… it’s kind of a gateway course to continued progress in our major. So, in this research methods course, some of the things that I’ve done around assessment and testing are to really try, again, to stretch that retrieval practice idea… to make assessments a more dynamic part of the course and a more central part of the course… to move away from that idea that tests are just this kind of every-now-and-again, panic-mode opportunity for me to measure and sort students and judge them… to make good on that idea that tests are part of learning. So, here are some of the things that I try to do. For one thing, I took time out of the class almost every single class meeting, as part of the routine, to have students first of all generate quiz questions out of their textbook. We do have a certain amount of foundational material in that course, as well as a project and a whole lot of other stuff going on. So they need to get that foundational stuff.

Every Tuesday they would come in and they knew their routine: you get index cards, you crack your textbook, and you generate for me three quiz questions. Everybody does it. I’m not policing whether you read the chapter or not. It’s active… they’re generating it… and also that makes it something like frequent quizzing. That’s a great practical advantage for me since I’m not writing everything. They would turn those in, I would select some of my favorites, turn those into a traditional-looking paper quiz, and hand that out on Thursday. I said “Hey, take this like a realistic quiz.” I had explained to them that quizzes can really boost their learning, so that was the justification for spending time on it, and then I said: “You know what? I’m not going to grade it either. You take it home because this is a learning experience for you. It’s a learning activity.” So we did that every single week and those students got into that routine.

The second thing that I did to really re-envision how assessment, testing, and quizzing worked in this particular course was something inspired by different kinds of group testing and exam wrapper activities I’ve seen, particularly coming out of the STEM fields, where there’s been a lot of innovation in this area. What I would do is… we had these high-stakes exams at a few points during the semester. But the class day after the exam, we didn’t do the traditional “Let’s go over the exam.” [LAUGHTER] That’s kind of deadly dull, and it just tends to generate a lot of pushback from students… and as we know from the research, simply reviewing… passing your eyes over the information… is not going to do much to advance your learning. So, what I would do is photocopy all those exams, so I had a secure copy. They were not graded. I would not look at them before we did this… and I would pass everybody’s exams back to them along with a blank copy of that same exam. I assigned them to small groups and I said “Okay, here’s your job. Go back over this exam, fill it out as perfectly as you can as a group,” and to make it interesting I said I will grade that exam as well, the one you do with your group, and anything you get over 90% gets added to everybody’s grade. This time it was open book, it was open Google, it was everything except you can’t ask me questions. So, you have each other, and that’s where these great conversations started to happen. The things that we always want students to say. So, I would eavesdrop and hear students say “Oh, well you know what, I think on this question she was really talking about validity because reliability is this other thing…” and they’d have a deep conversation about it. I’m still kind of going back through the numbers to see what the impacts on learning are. Are there any trends that I can identify? But I will say this: in the semesters that I did this, I didn’t have a single question ever come back to me along the lines of “Well, this question was unclear. I didn’t understand it. I think I was graded unfairly.” It really did shut all that down and, again, extended the learning that I feel students got out of that. Now, it meant a big sacrifice of class time, but I feel strongly enough about these principles that I’m always going to do this in one form or another anytime I can in face-to-face classes.

Rebecca: This sounds really familiar, John.

John: I’ve just done the same, or something remarkably similar, this semester, in my econometrics class, which is very similar to the psych research methods class. I actually picked it up following a discussion with Doug McKee. He actually was doing it this semester too. He had a podcast episode on it. It sounded so exciting, I did something… a little bit different. I actually graded it, but I didn’t give it back to them because I wanted to see what they had the most trouble with, and then I was going to have them only answer the ones in a group that they struggled with… and it turned out that that was pretty much all of them anyway. So, it’s very similar to what you did except I gave them a weighted average of their original grade and the group grade, and all except one person improved; the one person’s score went down by two points because the group grade was just slightly lower… but he did extremely well and he wasn’t that confident. The benefit to them of that peer explanation and explaining was just tremendous, and it was so much more fun for them and for me, and, as you said, it just completely wiped out all those things like “Well, that was tricky,” because when they heard their peers explaining it to them, the students were much more likely to respond by saying “Oh yeah, I remember that now,” and it was a wonderful experience and I’m gonna do that everywhere I can.

In fact, I was talking about it with my TA just this morning here at Duke, and we’re planning to do something like that in our classes here at TIP this summer, which I think is somewhat familiar to you from earlier in your academic career.

Michelle: That is right, we do have this connection. I was among, not the very first year, but I believe the second cohort of Talent Identification Program students, who came in at what I guess you would now call middle school (back then, it was called junior high), and what a life-transforming experience. We’ve had even more opportunities to talk about the development of all these educational ideas through that experience.

John: That two-stage exam is wonderful and it’s so much more positive… because it didn’t really take, in my class, much more time, because I would have spent most of that class period going over the exam and problems they had. But the students who did well would have been bored and not paying much attention to it; the students who did poorly would just be depressed and upset that they did so poorly… and here, they were actively processing the information and it was so positive.

Michelle: That’s a big shift. We really have to step back and acknowledge that, I think. That is a huge shift in how we look at assessment and how we think about the use of class time… and it’s not just “Oh my gosh, I have to use every minute to put content in front of the students.” Just the fact that more of us are making that leap, I think, really is evidence that this progress is happening… and we see also a lot of raised consciousness around issues such as learning styles. That’s another one where, when I go out and speak to faculty audiences, 10 years ago you would get these shocked looks or even very indignant commentary when you say “Okay, this idea of learning styles, in the sense that, say, there are visual learners, auditory learners, what I call sensory learning styles (VAK is another name it sometimes goes by)… the idea that that just holds no water from a cognitive point of view…” People were not good with that, and now when I mention that at a conference, I get the knowing nods and even a few groans… people are like “Oh, yeah, we get that.” Now, K-12, which I want to acknowledge is not my area, but I’m constantly reminded by people across the spectrum that it’s a very different story in K-12. So, setting that aside… this is what I’m seeing… that faculty are realizing… they’re saying “Oh, this is what the evidence says…” and maybe they even take the time to look at some of the really great thinkers and writers who put together the facts on this. They say “You know what? I’m not going to take my limited time and resources and spend that on this matching to styles when the styles can’t even be accurately diagnosed and are of no use in a learning situation.” So, that’s another area of real progress.

Rebecca: What I am hearing is not just progress here in terms of cognitive science, but a real shift towards really thinking about how students learn and designing for that, rather than something that would sound more like a grade penalty, like “Oh, did you achieve? Yes or no…” but rather, here’s an opportunity, if you didn’t achieve, to now actually learn it… and recognize that you haven’t learned it, even though it might seem really familiar.

John: Going back to that point about learning styles: awareness of that research is spreading in colleges. I wish it were true in all the departments at our institution, but it’s getting there gradually… and whenever people bring it up, we generally remind them that there’s a whole body of research on this and I’ll send them references. But what’s really troubling is that in my classes the last couple of years now, I’ve been using this metacognitive cafe discussion forum to focus on student learning… and one of the weekly discussions is on learning styles, and generally about 95 percent of the students, who are freshmen or sophomores (typically), come in with a strong belief in learning styles… where they’ve been tested multiple times in elementary or middle school… they’ve been told what their learning styles are… they’ve been told they can only learn that way… It discourages them from trying to learn in other ways and it does a lot of damage… and I hope we eventually reach out further so that it just goes away throughout the educational system.

Rebecca: You’ve worked in your classes, Michelle, haven’t you, to help students understand the science of learning and use that to help them understand the methods and things that you’re doing?

Michelle: Yes, I have. I’ve done this in a couple of different ways. Now, partly, I get a little bit of a free pass in some of my teaching because I’m teaching introduction to psychology or I’m teaching research methods, where I just happen to sneak in, as the research example, some work on, say, attention or distraction or the testing effect. So, I get to do it in those ways covertly. I’ve also had the chance, although it’s not on my current teaching rotation… I’ve had the chance to also take it on in freestanding courses. As many institutions are doing these days… it’s another trend… Northern Arizona University, where I work, has different kinds of freshman or first-year course offerings that students can take, not in a specific disciplinary area, but that really cross some different areas of student success or even wellbeing. So, I taught a class for a while called Maximizing Brain Power that was about a lot of these different topics. Not just the kind of very generic study skills tips… “get a good night’s sleep…” that kind of thing… but really some, again, more evidence-based things that we can tell students, and you can really kind of market it… and I think that we do sometimes have to play marketers to say “Hey, I’m going to give you some inside information here. This is sort of gonna be your secret weapon. So, let me tell you what the research has found.”

So, those are some of the things that I share with students… as well as when the right moment arises, say after an exam or before their first round of small stakes assessments, where they’re taking a lot of quizzes… to really explain the difference between this and high stakes or standardized tests they may have taken in the past. So, I do it on a continuing basis. I try to weave it into the disciplinary aspect and I do it in these free-standing ways as well… and I think here’s another area where I’m seeing this take hold in some different places… which is to have these free-standing resources that also just live outside of a traditional class that people can even incorporate into their courses… if say cognitive psychology or learning science isn’t their area… that they can bring in, because faculty really do care about these things. We just don’t always have the means to bring them in in as many ways as we would like.

John: …and your Attention Matters project was an example of that wasn’t it? Could you tell us a little bit about that?

Michelle: Oh, I’d love to… and you know, this connects to what seems to be kind of an evergreen topic in the teaching and learning community these days, which is the role of distracted students… and I know this past year there have just been these op-eds, one after another. There have been some really good blog posts by some people I really like to follow in the teaching and learning community, such as Kevin Gannon, talking about “Okay, do you have laptops in the classroom? And what happens when you do?” and so I don’t think that this is just a fad that’s going away. This is something that people do continue to care about, and this is where the Attention Matters project comes in.

This was something that we conceptualized and put together a couple of years ago at Northern Arizona University; primarily I collaborated with a wonderful instructional designer who also teaches a great deal… John Doherty. So, how this came about is I was seeing all the information on distraction… I’m really getting into this as a cognitive psychologist and going “Wow, students need to know that if they’re texting five friends and watching a video in their class, it’s not going to happen for them.” I was really concerned about “What can I actually do to change students’ minds?” So, my way of doing this was to go around giving guest presentations in any class where people would let me burn an hour of their class time… not a very scalable model… and John Doherty respectfully sat through one of my presentations on this and then he approached me and said “Look, you know, we could make a module and put this online… and it could be an open-access-within-the-institution module, so that anybody at my school can just click in and they’re signed up. We could put this together. We could use some really great instructional design principles and we could just see what happens… and I bet more people would take that if it were done in that format.” We did this with no resources. We just were passionate about the project and that’s what we did. We had no grant backing or anything. We got behind it. So, what this is is about a one- to two-hour module that’s a lot like a MOOC, in that there’s not a whole lot of interaction or feedback, but there are discussion forums and it’s very self-paced in that way… so a one- to two-hour mini MOOC that really puts demonstrations and activities at the forefront… so we don’t try to convince students about problems with distraction and multitasking… we don’t try to address that just by laying a bunch of research articles on them… I think that’s great if this were a psychology course, but it’s not. So, we come at it by linking them out to videos, for example, that we were able to choose, that we feel really demonstrate in some memorable ways what gets by us when we aren’t paying attention… and we also give students some research-based tips on how to set a behavioral plan and stick to it… because, just like with so many areas of life, just knowing that something is bad for you is not enough to really change your behavior and get you not to do that thing. So we have students talking about their own plans and what they do when, say, they’re having a boring moment in class, or they’re really, really tempted to go online while they’re doing homework at home. What kinds of resolutions can they set, or what kinds of conditions can they put in place that will help them accomplish that? Things like the software blockers… you set a timer on your computer and it can lock you out of problematic sites… or we learned about a great app called Pocket Points where you actually earn spendable coupon points for keeping your phone off during certain hours. This is students talking to students about things that really concern them and really concern us all, because I think a lot of us struggle with that.

So, we try to do that… and the bigger frame for this as well is that this is, I feel, a life skill for the 21st century… thinking about how technology is going to be an asset to you and not detract from what you accomplish in your life. What a great time to be reflecting on that, when you’re in your early college career. So that’s what we try to do with the project… and we’ve had over a thousand students come through. They oftentimes earn extra credit. Our faculty are great about offering small amounts of extra credit for completing this, and we’re just starting to roll out some research showing some of the impacts… and showing, in a bigger way, just how you can go about setting up something like this.

Rebecca: I like that the focus seems to be on helping students with a life skill rather than using technology as just something to blame or an excuse. We’re in control of our own behaviors, and taking ownership over our behaviors is important, rather than just kind of blaming the object.

Michelle: So, looking at future trends, I would like to see more faculty looking at it in the way that you just described, Rebecca: this is a life skill and it’s something that we collaborate on with our students… not lay down the law… because, after all, students are in online environments where we’re not there policing that, and they do need to go out into work environments and further study and things like that. So, that’s what I feel is the best value. For faculty who are looking at this, if they don’t want to do… or don’t have the means to do something really formal like our Attention Matters approach, just thinking about it ahead of time… I think nobody can afford to ignore this issue anymore, and whether you go the route of “No tech in my classroom” or “We’re going to use the technology in my classroom” or something in between… just reading over, in a very mindful way, not just the opinion pieces, but hopefully also a bit of the research, I think, can help faculty as they go in to deal with this… and really, to look at it in another way, just to be honest, we also have to consider how much of this is driven by our egos as teachers and how much of it is driven by a real concern for student learning and those student life skills. I think that’s where we can really take this on effectively and make some progress: when we are de-emphasizing that ego aspect and making sure that it really is about the students.

John: We should note there’s a really nice chapter in this book called Minds Online: Teaching Effectively with Technology that deals with these types of issues. It was one of the chapters that got our faculty particularly interested in these issues… on to what extent technology should be used in the classroom… and to what extent it serves as a distraction.

Michelle: I think that really speaks to another thing which I think is an enduring trend… which is the emphasis on really supporting the whole student in success and what we’ve come to call academic persistence… kind of a big umbrella term that has to do with, not just succeeding in a given class, but also being retained… coming back after the first year. As many leaders in higher education point out, this is a financial issue. As someone pointed out, it does cost a lot less to hang on to the students you have than to recruit more students to replace the ones who are lost. This is, of course, yet another really big shift in mindset of our own, because after all we did used to measure our success by “Hey, I flunked this many students out of this course” or “Look at how many people have to switch into different majors… our major is so challenging…”

So, we really have turned that thinking around and this does include faculty now. I think that we did used to see those silos. We had that very narrow view of “I’m here to convey content. I’m here to be an expert in this discipline, and that’s what I’m gonna do…” and sure, we want to think about things like do students have learning skills? Do they have metacognition? Are they happy and socially connected at the school? Are they likely to be retained so that we can have this robust university environment?

We had people for that, right? It used to be somebody else’s job… student services or upper administration. They were the ones who heard about that, and now I think on both sides we really are changing our vision. More and more forward-thinking faculty are saying “You know what? Besides being a disciplinary expert, I want to become at least conversant with learning science. I want to become at least conversant with the science of academic persistence…” There is a robust early literature on this, and that’s something that we’ve been working on at NAU over this past year as well… kind of an exciting newer project that I like very much. We’ve started to engage faculty in a new faculty development program called Persistence Scholars, and this is there to really speak to people’s academic and evidence-based side, as well as to get them to engage in some perspective-taking around things like the challenges that students face and what it is like to be a student at our institution. We do some selected readings in the area; we look at things like mindset… belongingness… these are really hot areas in that science of persistence… in that emerging field. But we have to look at it in a really integrated way.

It’s easy for people to say just go to a workshop on mindset, and that’s a nice concept, but we wanted to think about it in this bigger picture… really know what some of the strengths of that are and why. Where do these concepts come from? What’s the evidence? That’s something that I think is another real trend, and I think as well we will see more academic leaders and people in staff and support roles all over universities needing to know more about learning science. There are still some misconceptions that persist, as we’ve talked about. We’re making progress in getting rid of some of these myths around learning, but I will say… I’m not gonna name any names… but every now and again I will hear from somebody who says “Oh well, we need to match student learning styles” or “Digital natives think differently, don’t you know?” and I have to wonder whether that’s a great thing. I mean, these are oftentimes individuals that have the power to set the agenda for learning all over a campus. Faculty need to be in the retention arena, and I think that leaders need to be in the learning science arena. The boundaries are breaking down and it’s about time.

Rebecca: One of the things that I thought was really exciting with the reading groups that we’ve been having on our campus… that we started with your book, but then we’ve read Make it Stick and Small Teaching since… is that a lot of administrators in a lot of different kinds of roles engaged with us in those reading groups; it wasn’t just faculty. There was a mix of faculty, staff, and some administrators, and I think that that was really exciting. For people who don’t have the luxury of being in your Persistence Scholars program, what would you recommend they read to get started to learn more about the science of persistence?

Michelle: Even after working with this for quite some time, I really love the core text that we have in that program, which is Completing College by Vincent Tinto. It’s just got a great combination of a passionate and very direct writing style. So, there’s no ambiguity, there’s not a whole lot of “on the one hand this and on the other hand that.” It’s got an absolutely stellar research base, which faculty of course appreciate… and it has a great deal of concrete examples. So, in that book they talk about “Okay, what does it mean to give really good support to first-semester college students? What does that look like?” and they’ll go out and they’ll cite very specific examples: “Here’s a school and here’s what they’re doing… here’s what their program looks like… here’s another example that looks very different but gets at the same thing.” So, that’s one of the things that really speaks to our faculty… that they really appreciated and enjoyed.

I think as well we’ve gotten good feedback about work that’s come out of David Yeager and his research group on belongingness and lay theories, and lay theories is maybe a counterintuitive term for kind of a body of ideas about what students believe about academic success and why some people are successful and others are not, and how those beliefs can be changed, sometimes through relatively simple interventions, and when that happens we see great effects such as the narrowing of achievement gaps between students who come from more privileged and less privileged backgrounds… and that’s something that, philosophically, many faculty really, really care about, but they’ve never had the chance to really learn “Okay, how can I actually address something like that with what I’m doing in my classroom, and how can I really know that the things that I’m choosing do have that great evidence base…”

John: …and I think that whole issue is more important now and is very much a social justice issue because, with the rate of increase we’ve seen in college cost inflation, people who start college and don’t finish it are saddled with an awfully high burden of debt. The rate of return to a college degree is the highest that we’ve ever seen, and college graduates end up not only getting paid a lot more but they also end up with more comfortable jobs and so forth… and if we really want to move people out of poverty and try to reduce income inequality, getting more people into higher education and successfully completing higher education is a really important issue. I’m glad to see that your institution is doing this so heavily, and I know a lot of SUNY schools have been hiring Student Success specialists. At our institution they’ve been very actively involved in the reading group, so that message is spreading, and I think some of them started with your book and then moved to each of the others. So, they are working with students, using evidence-based practices to try to help the students who are struggling the most… and I think that’s becoming more and more common and it’s a wonderful thing.

Rebecca: So, I really liked, Michelle, that you were talking about faculty getting involved in retention and this idea of helping students develop persistence skills, and also administrators learning more about evidence-based practices. There are these grassroots movements happening in both of these areas. Can you talk about some of the other grassroots movements, or efforts that faculty are making, to engage students and capture their attention and their excitement for education?

Michelle: Right, and here I think a neat thing to think about too is that it’s the big ambitious projects… the big textbook replacement projects or the artificial-intelligence-informed adaptive learning systems… those are the things that get a lot of the press and end up in The Chronicle of Higher Education that we read about… But outside of that, there is this very vibrant, community- and grassroots-led scene of developing different technologies and approaches. So, it really goes back a while. I mean, the MERLOT database that I do talk about in Minds Online has been a trove for years of well-hidden gems that take on one thing in a discipline and come at it in a way that’s not just great from a subject-matter perspective but brings in new, creative approaches. In the MERLOT database, for example, there’s a great tutorial on statistical significance and the interrelationship between statistical significance and issues like sample sizes. You know, that’s a tough one for students, but it has a little animation involving a horse and a rider that really turns it into something that’s very visual… that’s very tangible… and it really ties into analogy, which is a well-known cognitive process that can support the learning of something new. There is something on fluid pressures in the body that was created for nursing students by nurses, and it’s got an analogy of a soaker hose that is really fun and is actually interactive. So, those are the kinds of things. The PhET project, P-h-E-T, which comes out of the University of Colorado, has been around for a while… again, faculty-led and a way to have these very useful interactive simulations for concepts in physics and chemistry. So, that’s one. CogLab, that’s an auxiliary product that I’ve used for some time in 100-level psychology courses, that simulates very famous experimental paradigms which are notoriously difficult to describe on stage for cognitive psychology students. That started out many years ago as a project that very much has this flavor of “We have this need in our classroom. We need something interactive. There’s nothing out there. Let’s see what we can build.” It has since then picked up and turned into a commercial product, but that’s the type of thing that I’m seeing out there.

Another thing that you’ll definitely hear about, if you’re circulating and hearing about the latest projects, is virtual reality for education. So, with this it seems like, unlike just a few years ago, almost everywhere you visit you’re going to hear “Oh, we’ve just set up a facility. We’re trying out some new things.” This is something that I also heard about when I was talking to people when I was over in China. So, this is an international phenomenon. It’s going to pick up steam and definitely go some places.

What also strikes me about that is just how many different projects there are. Just when you’re worried that you’re going to be scooped because somebody else is going to get there first with their virtual reality project, you realize you’re doing very, very different things. So, I’ve seen it used, for example, in a medical application to increase empathy among medical students… and I went through a six- or seven-minute demonstration that was really heart-rending, simulating the patient experience with a particular set of sensory disorders… and at Northern Arizona University we have a lab that is just going full steam in coming up with educational applications, such as an interactive organic chemistry tutorial that is just fascinating. We actually completed a pilot project and are planning to gear up a much larger study next semester looking at the impacts of this. So, this is really taking off for sure.

But, I think there are some caveats here. We still really need some basic research on this… not just what should we be setting up and what the impacts are but how does this even work? In particular, what I would like to research in the future, or at least see some research on, is what kinds of students… what sort of student profile… really gets the most out of virtual reality for education. Because amidst all the very breathless press that’s going on about this now and all the excitement, we do have to remember this is a very, very labor intensive type of resource to set up. You’re not just going to go home and throw something together for the next week. It takes a team to build these things and to complete them as well. If you have, say, a 300 student chemistry course (which is not atypical at all… these large courses), you’re not going to just have all of them spend hours and hours and hours doing this even with a fairly large facility. It’s a very hands-on thing to guide them through this process, to provide the tech support, and everything else.

So, I think we really need to know how we can best target our efforts in this area, so that we can build the absolute best experiences with the resources we have, and maybe even target the students who are most likely to benefit. I think those are some of the things that we just need to know about this. So, it’s exciting for somebody like me who’s in the research area. I see this as a wonderful open opportunity… but those are some of the real crossroads we’re at with virtual reality right now.

Rebecca: I can imagine there’s a big weighing that would have to happen in terms of the expense and time and resources needed to start up versus what that might be saving in the long run. I can imagine that if it’s a safety thing that you want to do as a virtual reality experience, like saving people’s lives and making sure that they’re not going to be in danger as they practice particular skills, it could be a really good investment… spending the resources to make that investment… or if it’s a lot of travel that would just be way too expensive, to bring a bunch of students to a particular location… but you could do it virtually… it seems like it would be worth the start-up costs. Those are just two ideas off the top of my head where it would make sense to spend all of that resource and time.

John: …and equipment will get cheaper. Right now, it’s really expensive for computers that have sufficient speed and graphics processing capability and the headsets are expensive, but they will come down in price, but as you said, it’s still one person typically and one device… so it doesn’t scale quite as well as a lot of other tools or at least not at this stage.

Rebecca: From what I remember, Michelle, you wrote a blog post about [a] virtual reality experience that you had. Can you share that experience, and maybe what stuck with you from that experience?

Michelle: Right, so I had the opportunity, just as I was getting to collaborate with our incredible team at the immersive virtual reality lab at NAU… one of the things I was treated to was about an hour and a half in the virtual reality setup that they have to explore some of the things that they had… Giovanni Castillo, by the way, is creative director of the lab and he’s the one who was so patient with me through all this. We tried a couple of different things and of course there’s such a huge variety of different things that you can do.
There’s a few things out there like driving simulators that are kind of educational… they’re kind of entertainment… but he was just trying to give me, first of all, just a view of those… and I had to reject a few of them initially, I will say, because I am one of the individuals who tends to be prone to motion sickness. So, that limits what I can personally do in VR, and that is yet another thing that we’re gonna have to figure out. At least informally, what we hear is that women in particular tend to experience more of this. So, I needed, first of all, to go to a very low-motion VR. I wasn’t gonna be whizzing through these environments. That was not going to happen for me. So, we did something that probably sounds incredibly simplistic, but it just touched me to my core… which is getting to play with Google Earth. You can spin the globe and either just pick a place at random, or what Giovanni told me is… “You know, I’ve observed that when people do this, when we have an opportunity to interact with Google Earth, they all either go to where they grew up or they’ll go to someplace that they have visited recently or they plan to visit.” So, I went to a place that is very special to me, and maybe it doesn’t fit into either one of those categories neatly, but it’s my daughter’s university… her school… and I should say that this is also a different thing for me because my daughter goes to school in Frankfurt, Germany… an institute that is connected to a museum. So, I had only been to part of the physical facility… the museum itself… and it was a long time ago… and part of it was closed for the holiday. So, this was my opportunity to go there and explore what it looks like all over… and so, that was an emotional experience for me. It was a sensory experience… it was a social one… because we were talking the whole time… and he’s asking me questions: what kinds of exhibits do they have here… what’s this part of it? So, that was wonderful. It really did give me a feel for, alright, what is it actually like to be in this sort of environment?

I’m not a gamer. I don’t have that same background that many of our students have. So, it got me up to speed on that… and it did show me how just exploring something that is relatively simple can really acquire a whole new dimension in this kind of immersive environment. Now the postscript that I talked about in that blog post was what happened when I actually visited there earlier in the year. So, I had this very strange experience that human beings have never had before… which is from this… I don’t know whether to call it deja vu or what… of going to the settings and walking around the same environment and seeing the same lighting and all that sort of stuff that was there in that virtual reality environment… but this time, of course, with real human beings in it and the changes… the little subtle changes that take place over time, and so forth.

So, how does it translate into learning? What’s it going to do for our students? I just think that time is going to tell. It won’t take too long, but I think that these are things we need to know. But, sometimes just getting in and being able to explore something like this can really put you back in touch with the things you love about educational technology.

Rebecca: I think one of the things that I’m hearing in your voice is the excitement of experimenting and trying something… and that’s, I think, encouragement for faculty in general… to just put yourself out there and try something out even if you don’t have something specific in mind for what you might do with it. Experiencing it might give you some insight later on. It might take some time to have an idea of what you might do with it, but having that experience, you understand it better… it could be really useful.

John: …and that’s something that could be experienced on a fairly low budget with just your smartphone and a Google Cardboard viewer or something similar. Basically, it’s a seven to twelve dollar addition to your phone and you can have that experience… because there’s a lot of 3D videos and 3D images out there on Google Earth as well as on YouTube. So, you can experience other parts of the world and cultures before visiting… and I could see that being useful in quite a few disciplines.

Rebecca: So, we always wrap up with asking what are you going to do next?

Michelle: I continue to be really excited about getting the word out about cognitive principles and how we can flow those into teaching face-to-face, teaching with technology… everything else in between. So, that’s what I continue to be excited about… leveraging cognitive principles with technology and with just rethinking our teaching techniques. I’m going to be speaking at the Magna Teaching with Technology Conference in October, and so I’m continuing to develop some of these themes… and I’m very excited to be able to do that. Right now we’re also in the early stages of another really exciting project that has to do with what we call neuromyths… So, that may be a term that you’ve come across in some of your reading. It’s something that we touched on a few times, I think, in our conversation today… the misconceptions that people have about teaching and learning and how those can potentially impact the choices we make in our teaching. So, I’ve had the opportunity to collaborate with this amazing international group of researchers that’s headed up by Dr. Kristen Betts of Drexel University… and I won’t say too much more about it other than we have a very robust crop of survey responses that have come in from not just instructors, but also instructional designers and administrators from around the world. So, we’re going to be breaking those survey results down and coming up with some results to roll out probably early in the academic year, and we’ll be speaking about that at the Accelerate conference, most likely in November. That’s put out by the Online Learning Consortium. So, we’re right in the midst of that project and it’s going to be so interesting to see: what has the progress been? What neuromyths are still out there and how can they be addressed by different professional development experiences? We’re continuing to work on the Persistence Scholars Program on academic persistence. So, we’ll be recruiting another cohort of willing faculty to take that on in the fall at Northern Arizona University. I am going to be continuing to collaborate and really work with and hear from John and his research group with respect to the metacognitive material that they’re flowing into foundational coursework and ways to get students up to speed with a lot of critical metacognitive knowledge. So, we’re going to work on that too… and I’d like to keep up my blog and work on, shall we say, a longer writing project, but we’ll have to stay tuned for that.

Rebecca: Sounds like you need to plan some sleep in there too.

[LAUGHTER]

John: Well, it’s wonderful talking to you, and you’ve given us a lot of great things to reflect on and to share with people.

Rebecca: Yeah. Thank you for being so generous with your time.

John: Thank you.

Michelle: Oh, thank you. Thanks so much. It’s a pleasure, an absolute pleasure. Thank you.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts, and other materials on teaforteaching.com. Theme music by Michael Gary Brewer. Editing assistance from Nicky Radford.

30. Adaptive Learning

Do your students arrive in your classes with diverse educational backgrounds? Does a one-size-fits-all instructional strategy leave some students struggling and others bored? Charles Dziuban joins us in this episode to discuss how adaptive learning systems can help provide all of our students with a personalized educational path that is based on their own individual needs.

Show Notes

Transcript

Coming soon!

26. Assessment

Dr. David Eubanks created a bit of a stir in the higher ed assessment community with a November 2017 Intersection article critiquing common higher education assessment practices. This prompted a discussion that moved beyond the assessment community to a broader audience as a result of articles in The New York Times, The Chronicle of Higher Education, and Inside Higher Ed. In today’s podcast, Dr. Eubanks joins us to discuss how assessment can help improve student learning and how to be more efficient and productive in our assessment activities.

Dr. Eubanks is the Assistant Vice President for Assessment and Institutional Effectiveness at Furman University and a Board Member of the Association for the Assessment of Learning in Higher Education.

Show Notes

  • Association for the Assessment of Learning in Higher Education (AALHE)
  • Eubanks, David (2017). “A Guide for the Perplexed.” Intersection. (Fall) pp. 14-13.
  • Eubanks, David (2009). “Authentic Assessment” in Schreiner, C. S. (Ed.). (2009). Handbook of research on assessment technologies, methods, and applications in higher education. IGI Global.
  • Eubanks, David (2008). “Assessing the General Education Elephant.” Assessment Update. (July/August)
  • Eubanks, David (2007). “An Overview of General Education and Coker College.” in Bresciani, M. J. (2007). Assessing student learning in general education: Good practice case studies (Vol. 105). Jossey-Bass.
  • Eubanks, David (2012). “Some Uncertainties Exist.” in Maki, P. (Ed.). (2012). Coming to terms with student outcomes assessment: Faculty and administrators’ journeys to integrating assessment in their work and institutional culture. Stylus Publishing, LLC.
  • Gilbert, Erik (2018). “An Insider’s Take on Assessment.” The Chronicle of Higher Education. January 12.
  • Email address for David Eubanks: david.eubanks@furman.edu

Transcript

Rebecca: When faculty hear the word “assessment,” do they: (a) cheer, (b) volunteer, (c) cry, or (d) run away?

In this episode, we’ll review the range of assessment activities from busy work to valuable research.

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

Rebecca: Today’s guest is David Eubanks, the Assistant Vice President for Assessment and Institutional Effectiveness at Furman and a Board Member of the Association for the Assessment of Learning in Higher Education. Welcome, David.

John: Welcome.

David: Thank you. It’s great to be here. Thanks for inviting me.

John: Today’s teas are… Are you drinking tea?

David: No, I’ve been drinking coffee all day.

John: Ok, that’s viable.

Rebecca: We’ll go with that. We’ve stopped fighting it at this point.

David: Was I supposed to?

John: Well, it’s the name of the podcast…

David: Oh, oh, of course! No, I’m sorry. I’ve been drinking coffee all day… did not do my homework.

Rebecca: I’m having a mix of Jasmine green tea and black tea.

John: I’m drinking blackberry green tea.

David: I do have some spearmint tea waiting for me at home if that counts.

John: Okay. That works.

Rebecca: That sounds good. It’s a good way to end the day.

John: How did you get interested in and involved with assessment?

David: I wasn’t interested, I wanted nothing to do with it. So I was in the Math department at Coker College… started in 1991… and then the accreditation cycle rolls around every 10 years. So, I got involved in sort of the department level version of it, and I remember being read the rules of assessment as they existed then… and we wrote up these plans…. and I could sort of get the idea… but I really didn’t want much to do with it. This is probably my own character flaw. I’m not advocating this, I’m just saying this is the way it was. So I wrote this really nice report, and the last line of the report was something like: “it’s not clear who’s going to do all this work.” [LAUGHS] Because it sure wasn’t gonna be me… at least that was my attitude. But as the time went on …

Rebecca: I think that’s an attitude that many people share.

David: Right, yeah. As time went on, and I began to imbibe from the atmosphere of the faculty and began to complain about things, I got more involved in the data work of the university. Because some of the things I wanted to complain about had to do with numbers, like financial aid awards and stuff like that. So I ended up getting into institutional research, which was kind of a natural match for my training in Math… and I found that work really interesting… gathering numbers and trying to prognosticate about the future. But the thing is… at a small college, institutional research is strongly associated with assessment, just because of the way things work… and so the next time the accreditation rolls around, guess who got put in charge of accreditation and assessment. [LAUGHS] So, I remember taking the manual home with all these policies that we were supposed to be adhering to… and spreading everything out and taking notes and reading through this stuff and becoming more and more horrified. If it was a cartoon, my hair would have been standing up… and writing to the President saying: “You know… we’re not doing a lot of this… or if we are, I don’t know about it.” So that was sort of my introduction to assessment. And then, it was really at that point that I had to take on some responsibility to the administration for the whole college and making sure we were trying to follow the rules. So, it evolved from being faculty and not wanting anything to do with it, to turning to the dark side and being an administrator and suddenly having to convince other faculty that they really needed to get things done. So that’s sort of the origin myth.

Rebecca: So, sort of a panic attack followed by…. [LAUGHTER]

David: Well yeah… multiple panic attacks. [LAUGHTER]

Rebecca: Yeah.

David: And then, over the years as I got more involved with the assessment community, I started going to conferences and doing presentations and writing papers, and eventually I got on the board of the AALHE, which is the national professional association for people who work in assessment… and started up a quarterly publication for them, which is still going… and so I think I have a pretty good network now within the assessment world… and have a reasonably good understanding of what goes on nationwide, but a particularly good understanding in the South because I also participate in accreditation reviews and so forth.

Rebecca: So like you, I think many other faculty cringe when they hear assessment when it is introduced to them as a faculty member. Why do you think assessment has such a bad rep?

David: Yeah, that’s the thing I’d like to talk about most. Well, part of the problem when we talk about it, and I think you’ll see this when you look at the articles in The Chronicle, in The New York Times, and in Inside Higher Ed, is that it means different things, and people can very easily start talking across each other, rather than to each other… and I think in sort of a big picture… if you imagine the Venn diagram from high school math class and there’s three circles. One circle is kind of the teaching and learning stuff that individual faculty members get interested in at the course level or maybe a short course sequence level… their cluster of stuff… and then another one of those circles is the curriculum level, where we want to make sure that the curriculum makes sense and it sort of adds up to something… that the courses, if they’re calculus one, two, three, actually act like a cohesive set of stuff… and then there’s the third circle in the diagram and that’s where the problem is, I think. In the best world, we can do research… we can do real educational research on how students develop over time and how we affect them with teaching. But if we dilute that too much… if we back off of actual research standards and water it down to the point where it’s just very, very casual data collection… it’s still okay if we treat it like that… but I think where the rub comes in… because of some expectations for many of us in accreditation… is that we collect this really informal data and then have to treat it as if it’s really meaningful, rather than using our innate intuition and experience as teachers and having experience with students. So I think the particulars… the rock in the shoe, if you will… is the sort of forced and artificial piece of assessment that is most associated with the accreditation exercises.

John: Why does it break down that way? Why do we end up getting such informal data?

David: Well, educational research is hard, for one thing. It’s a big fuzzy blob. If you think about what happens in order for a student to become a senior and write that senior thesis… just imagine that scenario for a minute… and we’re gonna try to imagine that the quality of that senior thesis tells us something about the program the student’s in. Well, the student had different sequences of courses than other students, and in many cases… this wouldn’t apply to a highly structured program… for many of us, the students could have taken any number of courses… could have maybe double majored in something else… even within the course selections could have had different professors at different times of day… in different combinations… and so forth. So it’s very unstandardized… and bringing to that, the student then has his or her own characteristics… like interests and just time limitations, for example… Maybe the student’s got a job or maybe the student’s not a native English speaker or something. There’s all sorts of traits of the individual student. Anyway, the point is that none of this is standardized. So when we just look at that final paper that the student’s written, there are so many factors involved that we can’t really say, especially with very small amounts of data, what actually caused what. And my argument is that the professors in that discipline… if they put their heads together and talk about what’s the product we’re getting out and what are the likely limitations or strengths of what we’re getting out… are in a really good position to make some informed subjective judgments that are probably much higher quality than some of the forced, limited assessments… that are usually forced to be on a numerical scale like rubric ratings or maybe test scores or something like that. So I’m giving you kind of a long-winded answer, but I think the ambition of the assessment program is fine. It’s just that the execution within many, many programs doesn’t allow that philosophy to be actually realized.

Rebecca: If our accreditation requirements require us to do certain kinds of assessment and we do the fluffy version, what’s the solution for getting more rigorous assessment? Or is it that we treat fluffy data as fluffy data and do what we can with that?

David: Right, well as always, it’s easier, I think, to point out a problem than it is to solve it. But I do have some ideas… some thoughts about what we could do that would give us better results than what we’re getting now. One of those is, if we’re going to do research, let’s do research. Let’s make sure that we have large enough samples… that we understand the variables and really make a good effort to try to make this thing work as research… and even when we do that, probably the majority of the time it’s going to fail somehow or another, because it’s difficult. But at least we’ll learn stuff that way.

Rebecca: Right.

David: Another way to think of it is if I’ve got a hundred projects with ten students in each one and we’re trying to learn something in these hundred projects, that’s not the same thing as one project with a thousand students in it, right?

Rebecca: Right.

David: It’s why we don’t all try to invent our own pharmaceuticals in our backyards. We let the pharmaceutical companies do that. It’s the same kind of principle. And so we can learn from people… maybe institutions who have the resources and the numbers… we could learn things about how students learn in the curriculum that are generalizable. So that’s one idea… if we’re going to do research, let’s actually do it. Let’s not pretend that something that isn’t research actually is. Another is a real oddity… That is, somehow way back when, somebody decided that grades don’t measure learning. And this has become a dogmatic item of belief within much of the assessment community in my experience. It’s not a hundred percent true, but at least in action it is… and for example, I think there’s some standard advice you would get if you were preparing for your accreditation report: “Oh, don’t use grades as the assessment data because you’ll just be marked down for that.” But in fact, we can learn an awful lot from just using the grades that we automatically generate. We can learn a lot about who completes courses and when they complete them. A real example that’s in that “Perplexed” paper is… looking at the data, it became obvious that waiting to study a foreign language is a bad idea. The students who don’t take the foreign language requirement the first year they arrive at Furman look, from the data, like they’re disadvantaged. They get lower scores if they wait even a year. And this is exacerbated, I believe, by students who are weaker to begin with waiting. So those two things in combination are sort of the kiss of death. And this has really nothing to do with how the course is being taught, it’s really an advising process problem… and if we misconstrue it as a teaching problem, we could actually do harm, right? If we took two weeks to do remedial Spanish or whatever when we don’t really need to be doing that, we’re sort of going backwards.

Rebecca: We are blaming the faculty members for the things that aren’t a faculty member’s fault necessarily.

David: Exactly, right. What you just said is a huge problem, because much of the assessment… these little pots of data that are then analyzed… are very often analyzed in a very superficial way… where, for example, they don’t take into account the expressed academic ability of the students who are in that class, or whatever it is you’re measuring. So if one year you just happen to have students who were C students in high school, instead of A students in high school, you’re going to notice a big dip in all the assessment ratings just because of that. It has nothing to do with teaching necessarily. And at the very least, we should be taking that into account, because it explains a huge amount of the variance that we’re going to get in the assessment ratings. Better students get better assessment ratings; it’s not a mystery.

John: So, should there be more controls for student quality and studies of student performance over time? Or should there be some value-added type approaches used for assessment, where you give students pre-tests and then measure the post-test scores later… would that help?

David: Right, so I think there’s two things going on that are really nice in combination. One is the kind of information we get from grades, which mostly tells us how hard the student worked, how well they were prepared, how intelligent they are… however you want to describe it. It’s kind of persistent. At my university, the first-year grade average of students correlates with their subsequent years’ grade average at 0.79. So it’s a pretty persistent trait. But one disadvantage is that, let’s say Tatiana comes in as an A+ student as a freshman, she’s probably going to be an A+ student as a senior. So we don’t see any growth, right? If we’re trying to understand how students develop, the grades aren’t going to tell us that.
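
The kind of persistence David describes can be checked with a simple correlation between first-year and later grade averages. Here is a minimal Python sketch, assuming a hypothetical table of per-student grade averages (the column names and numbers are illustrative, not Furman’s actual data):

import pandas as pd

# Hypothetical per-student grade averages; real data would come from the
# registrar or institutional research office.
grades = pd.DataFrame({
    "student_id":     [1, 2, 3, 4, 5, 6],
    "first_year_gpa": [3.9, 3.2, 2.5, 3.6, 2.9, 3.4],
    "later_gpa":      [3.8, 3.3, 2.7, 3.5, 3.0, 3.3],
})

# Pearson correlation between first-year GPA and subsequent GPA.
# David reports roughly 0.79 at his institution; a value that high
# suggests grade-earning ability is a fairly persistent trait.
r = grades["first_year_gpa"].corr(grades["later_gpa"])
print(f"first-year vs. later GPA correlation: {r:.2f}")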

John: Right.

David: So we need some other kind of information that tells us about development. And I’ve got some thoughts on that and some data on that if you want to talk about it, but it’s a more specialized conversation, maybe, than you want to have here.

John: Well, if you can give us an overview on that argument.

Rebecca: That sounds really interesting, and I’d like to hear.

David: Okay. Well, the basic idea is a “wisdom of the crowds” approach, in that when things are really simple… if we want to know if the nursing student can take a blood pressure reading… then (I assume, I’m not an expert on this, but I assume) that’s fairly cut and dried and we could have the student do it in front of us and watch them and check the box and say, “Yeah, Sally can do that.” But for many of the things we care about, like textual analysis or quantitative literacy or something, it’s much more complicated and very difficult to reduce to a set of checkboxes and rubrics. So, my argument is that for these more complex skills and things we care about, the subjective judgment of the faculty is a really valuable piece of information. So what I do is, I ask the faculty at the end of the semester, for something like student writing (because there’s a lot of writing across the curriculum): “How well is your student writing?” and I ask them to respond on a scale that’s developmental. At the bottom of the scale is “not really doing college-level work yet.” That’s the lowest rating… the student’s not writing at a college level yet. We hope not to see any of that. And then at the upper end of the scale is “student’s ready to graduate.” “I’m the professor. According to my internal metric of what a college student ought to be able to do, this student has achieved that.” The professors in practice are kind of stingy with that rating… but what it does is it creates another data set that does show growth over time. In fact, I had a faculty meeting yesterday… showed them the growth over time in the average ratings of that writing effectiveness scale over four years. If I break it up by the students’ entering high school grades, you get three parallel lines stacked by high grades, medium grades, and low grades. So the combination of those two pieces, grade-earning ability and professional subjective judgment after a semester of observation, seems to be a pretty powerful combination. I can send you the paper on that if you’re interested.

John: Yes.

Rebecca: Yeah, that will be good. Do you do anything to kind of norm how faculty are interpreting that scale?

John: Inter-rater reliability.

David: Right, exactly. That’s a really good question, and reliability is one of the first things I look at… and that question by itself turns out to be really interesting. When I read research papers, it seems like a lot of people think of reliability as this checkbox that they have to get through in order to talk about the stuff they really want to talk about… because if it’s not reliable, then they don’t have anything they need to talk about… and I think that’s unfortunate, because just the question of “what’s reliable and what’s not” generates lots of interesting questions by itself. So, I can send you some stuff on this too, if you like. But, for example, I got this wine rating data set where these judges blind taste flights of wine and then they have to give each one a rating on a 1 to 4 scale. And this guy published a paper on it and I asked for his data. And so I was able to replicate his findings, which were that what the wine tasters most agreed on was when wine tastes bad. If it’s yucky, we all know it’s yucky. It’s at the upper level, when it starts to become an aesthetic judgment, that we have trouble agreeing. The reason this is interesting is because usually reliability is just one number: you ask how reliable the judges’ ratings are and you get .5. That’s it. That’s all the information you get, it’s .5. So what this does is break it down into more detail. So when I do that with the writing ratings, what I find is that our faculty, at this moment in time, are agreeing more about “what’s ready to graduate”… and not really about that crucial distinction between not doing college-level writing and intro college-level writing.
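
One way to break reliability down by category, as David describes with the wine ratings, is to compute exact agreement separately at each rating level rather than reporting a single coefficient. A minimal Python sketch with invented ratings from two hypothetical raters:

import pandas as pd

# Two raters scoring the same set of papers (or wines) on a 1-4 scale.
# The data are made up for illustration; the point is to look at agreement
# per category instead of one overall reliability number.
ratings = pd.DataFrame({
    "rater_a": [1, 1, 2, 2, 3, 3, 4, 4, 2, 3, 1, 4],
    "rater_b": [1, 1, 2, 3, 2, 3, 3, 4, 2, 4, 1, 3],
})

# For each category rater A assigned, how often did rater B agree exactly?
agreement_by_level = (
    ratings.assign(agree=ratings["rater_a"] == ratings["rater_b"])
           .groupby("rater_a")["agree"]
           .mean()
)
print(agreement_by_level)
# A pattern like the one David describes would show high agreement at the
# extremes (clearly bad, clearly ready) and lower agreement in between.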

Rebecca: That’s really fascinating. You would almost think it’d be the opposite.

David: I was astounded by this, yes. And so I got some faculty members together and asked some other faculty members to contribute writing samples… some that they thought were good and some that were bad… so that I’d have a clean set to try to test this with, and I watched them do it.

Rebecca: Right.

David: So yeah, we got in the room and we talked about this, and what I discovered was not at all what I expected. I expected that students would get marked down on the writing if they had lots of grammar and spelling errors and stuff like that. But we didn’t have any papers like that… even the ones that were submitted as the bad papers didn’t have a lot of grammatical errors. So I think that the standards for what the professors expect from entry-level writers are really high. And because they’re high, we’re not necessarily agreeing on where those lines are… and that’s where the conversation needs to be, for the students’ sake, right? It’s never going to be completely uniform, but just knowing that this disagreement exists is really advantageous, because now we can have more conversations about it.

Rebecca: Yeah, it seems like a great way to involve a teaching and learning center… to have conversations with faculty about what good writing is… what students should come in with… and what those expectations are… so that they start to generate a consensus… so that the assessment tool creates the opportunity for developing consensus.

David: Yes, exactly, and I think the best use for assessment is when it can generate really substantive conversations among the faculty who are doing the work of giving the assignments and giving the grades and talking to students.

Rebecca: So, how do we get the rest of the accreditation crowd to be on board with this idea?

David: That’s a really interesting question. I’ve spent some time thinking about that. I think it’s possible. I’m optimistic that we can get some movement in that direction. I don’t think a lot of people are really happy with the current system, because there are so many citations for non-compliance that it’s a big headache for everybody. There are these standards saying every academic program is supposed to set goals… assess whether or not those are being achieved… and then make improvements based on the data you get back. That all seems very reasonable, except that when you get into it with this really reductive, positivist approach, it implies that the data is really meaningful when in many cases it’s not, so you get stuck. And that’s where the frustration is. So I think one approach is to get people to reconsider the value of grades, first of all. And if you can imagine the architecture we’ve set up, it’s ridiculous. So imagine these two parallel lines. On the top we’ve got grades, and then there’s an arrow that leads into course completion… because you have to get at least a D, usually… and then another arrow that leads into retention (because if you fail out of enough classes you can’t come back, or you get discouraged), and that leads to graduation, which leads to outcomes after graduation… like grad school or a career or something. So, that’s one whole line, and that’s been there for a long time. Then under that, what we’ve done is constructed this parallel grading system with the assessment stuff that explicitly disavows any association with any of the stuff on the first line. That seems crazy. What we should have done to begin with is said, “Oh, we want to make assessment about understanding how we can assign better grades and give better feedback to students, so they’ll be more successful, so they’ll graduate and have outcomes,” right? That all makes sense. So I think the argument there is to turn the kind of work we’re doing now into a more productive way to feed into the natural epistemology of the institution, rather than trying to create this parallel system that doesn’t really work very well in a lot of cases.

Rebecca: Sounds to me like what you’re describing is… right now a lot of assessment is decentralized into individual departments… but I think what you’re advocating for is that it becomes a little more centralized, so that you can start looking at these big-picture issues rather than these minuscule little things that you don’t have enough of a data set to study. Is that true?

David: Absolutely, yes, absolutely. Some things we just can’t know without more data, partly because the data that we do get is going to be so noisy that it takes a lot of samples to average out the noise. So yes, in fact, that’s what I try to do here… generate reports based on the data that I have that are going to be useful for the whole university, as well as reports that are individualized to particular programs.

Rebecca: Do you work with individual faculty members on the scholarship of teaching and learning? Maybe there’s something in particular that they’re interested in studying and, given your role in institutional research and assessment, do you help them develop studies and help collect the data that they would need to find those answers?

David: Yes, I do, when they request it or I discover it. It’s not something that I go around and have an easy way to inventory, because there’s a lot of it going on that I don’t know about.

Rebecca: Right.

David: I’d say more of my work is really at the department level, and this part of assessment is really easy. If you’re in an academic department, so much of the time that the faculty meet together gets sucked up with stuff like hiring people, scheduling courses, setting the budget for next year and figuring out how to spend it, selecting your award students… all that stuff can easily consume all the time of all the faculty meetings. So, really just carving out a couple of hours a semester, or even a year, to talk about what it is we’re all trying to achieve and here’s the information… however imperfect it is… that we know about it, can pay big dividends. I think a lot of times that’s not what assessment is seen as. It’s seen as, “Oh, it’s Joe’s job this year to go take those papers and regrade them with a rubric, and then stare at it long enough until he has an epiphany about how to change the syllabus.” That’s a bit of a caricature, but there is a lot of that that goes on.

Rebecca: I think it’s my job this year to… [LAUGHS]

David: Oh, really?

John: In the Art department, yeah. [LAUGHS]

Rebecca: I’m liking what you’re saying because there’s a lot of things that I’m hearing you say that would be so much more productive than some of the things that we’re doing, but I’m not sure how to implement them in a situation that doesn’t necessarily structurally buy into the same philosophy.

John: And I think faculty tend to see assessment as something imposed on them that they have to do and they don’t have a lot of incentives to improve the process of data collection or data analysis and to close the loop and so forth. But perhaps if this was more closely integrated into the coursework and more closely integrated into the program so it wasn’t seen as (as you mentioned) this parallel track, it might be much more productive.

David: Right, and one thing I think we could do is ask for reports on grades. Grade completions… there’s all sorts of interesting things that are latent in grades and also in course registration. For example, I created these reports… imagine a graph that’s got 100, 200, 300, 400 along the bottom axis… and those are the course levels. I wanted to find out when students are taking these courses. So what you’d expect is that the freshmen are taking 100-level courses and the sophomores are taking 200 on average and so forth, right? But when I created these reports for each major program, I discovered that there were some oddities… that there were cases where 400-level courses were being taken by students who were nowhere near seniors. So I followed up and I asked this faculty member what was going on, and it turned out to just be a sort of weird registration situation that doesn’t normally happen, but it had turned out that there were students in that class who probably shouldn’t have been in there. And she said, “Thanks for looking into this, because I’m not sure what to do.” So that sort of thing could be routinely done with the computing power we have now. I think there’s a lot you could ask for that would be meaningful without having to do any extra work, if somebody in the IR or assessment offices is willing to do that.
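
A report like the one David describes, relating course level to student class year, can be generated from routine registration data. A minimal Python sketch, assuming a hypothetical enrollment table (column names and rows are illustrative):

import pandas as pd

# Hypothetical registration records: which class year is taking courses
# at each level?
enrollments = pd.DataFrame({
    "course_level": [100, 100, 200, 300, 400, 400, 400, 200, 300, 100],
    "student_year": [1,   1,   2,   3,   4,   4,   1,   2,   3,   2],
})

# Cross-tabulate course level against student year. Most enrollment should
# fall near the diagonal (freshmen in 100-level, seniors in 400-level);
# off-diagonal cells, like first-years in 400-level courses, are the kind
# of oddity worth following up on with the department.
report = pd.crosstab(enrollments["course_level"], enrollments["student_year"])
print(report)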

Rebecca: That’s a good suggestion.

David: And so in the big picture, how do we actually change the accreditors’ minds? It’s not so much really the accreditors… the accreditors do us a great service, I think, by creating this peer-review system. In my experience it works pretty well. The issue, I think, within the assessment community is that there are a lot of misunderstandings about how this kind of data, these little small pools of data, can be used and what they’re good for. And so what I’ve seen is a lot of attention to the language around assessment during an accreditation review: are the goals clearly stated… it’s almost like, did you use the right verb tense… but I’ve never seen that literally. [LAUGHTER] No, there’s pages of words: are there rubrics? do the rubrics look right? and all this stuff, and then there’s a few numbers and then there’s supposed to be some grand conclusion to that. It’s not all like that, but there’s an awful lot of it like this, so if you’re a faculty member stuck in the middle of it, you’re probably the one grading the papers with a rubric that you already graded once. And you tally up those things and then you’re supposed to figure out something to do with those numbers. So, this culture persists because the reviewers have that mindset that all these boxes have to be checked off. There’s a box for everything except data quality. [LAUGHS] No, literally… if there were a box for data quality, everything would fall apart immediately. So we have to change that culture. We have to change the reviewer culture, and I think one step in doing that is to create a professional organization, or use one that exists, like in accounting and librarianship. They have professional organizations that set their standards, right? We don’t have anything like that in assessment. We have professional organizations, but they don’t set the standards. The accreditors have grown (accidentally, I think) into the role of being like a professional organization for assessment. They’re not really very well suited for that. And so, if we had a professional organization setting standards for review that acknowledged that the central limit theorem exists, for example, then I think we could have a more rational, self-sustaining, self-governing system… and hopefully get away from causing faculty members to do work that’s unnecessary.

John: I don’t think any faculty members would object to that.

David: Well, of course not. I mean, you know, everybody’s busy… you want to do your research… you’ve got students knocking on the door… you’ve got to prepare for class. And really, it’s not just that we’re wasting faculty members’ time if these assessment numbers that result aren’t good for anything. It’s also the opportunity cost. What could we have done… researching course completion, for example… that, over the last twenty years we’ve been doing this, would have saved how many thousands of students? You know, there’s a real impact to this, so I think we need to fix it.

John: How have other people in the assessment community reacted to your paper and talks?

David: Yeah, that’s a very interesting question. What has not happened is that nobody’s written me saying, “No, Dave, you’re wrong. Those samples, those numbers we get from rating our students are actually really high-quality data.” Now, in fact, probably every institution has some great examples where they’re doing really excellent work trying to measure student learning. Like maybe they’re doing a general education study with thousands of students or something. But, down at the department level, if you’ve only got ten students, like some of our majors might have, you really can’t do that kind of work. So I haven’t had anybody, even in the response articles, address the question by saying, “No, you’re wrong because the data is really good”… because if you believe the data is good, the other conclusion is that the faculty are just not using it, right? Or somebody’s not using it. So I guess the rest of the answer to the question is that the assessment community, I think, naturally feels threatened by this and is rallying around that idea, and undoubtedly there are faculty members making their lives harder in some cases. That’s unfortunate. It wasn’t my intention. The assessment director is caught in the middle because they are ultimately responsible for what happens when the accreditor comes and reviews them… the peer review team, right? So it’s like a very public job performance evaluation when that happens, and, depending on what region you’re in, there are different levels of severity, but it can be a very, very unpleasant experience to have one of those reviews done by somebody who’s got a very checkboxy sort of attitude… not really looking at the big picture and what’s possible, but looking instead at the status of idealistic requirements.

Rebecca: So the way to get the culture shift, in part, requires the accreditation process to see a different perspective around assessment… otherwise the culture shift probably won’t really happen.

David: Right, we have to change the reviewers’ mindset, and that’s going to have to involve the accreditors to the extent that they’re training those reviewers. That’s my opinion.

Rebecca: What role, if any, do you see teaching and learning centers having in assessment and in the research around assessment?

David: Well, that’s one of those circles in my Venn diagram, you recall, and I think it’s absolutely critical for the kind of work that has an impact on students, because it’s more focused than, say, program assessment, which is very often trying to assess the whole program… which, as I noted, has many dimensions to it. Whereas a project that’s like a scholarship of teaching and learning project, or just a course-based project, may have a much more limited scope and therefore has a higher chance of seeing a result that seems meaningful. I don’t think our goal in assessment in that case is to try to prove mathematically that something happened, but to reach a level of belief on the part of those involved that “yes, this is probably a good program that we want to keep doing.” So, I think it helps if the assessment office is producing generalizable information or just background information that would be useful in that context, like “here’s the kind of students we are recruiting,” “here’s how they perform in the classroom,” or some other characteristic. For example, we have very few women going into economics. Why is that? Is that interesting to you economists? So those kinds of questions can probably be brought from the bigger data set down to that level.

Rebecca: You got my wheels turning, for sure.

David: [LAUGHS] Great!

Rebecca: Well, thank you so much for spending some of your afternoon with us, David. I really appreciate the time that you spent and all the great ideas that you’re sharing.

John: Thank you.

David: Well, it was delightful to talk to you both. I really appreciate this invitation, and I’ll send you a couple of things that I mentioned. And if you have any other follow-up questions don’t hesitate to be in touch.

Rebecca: Great. I hope your revolution expands.

David: [LAUGHS] Thank you. I appreciate that. A revolution is not a tea party, right?

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts, and other materials on teaforteaching.com. Music by Michael Gary Brewer.

24. Gender bias in course evaluations

Have you ever received comments in student evaluations that focus on your appearance, your personality, or your competence? Do students refer to you as “teacher” or by an inappropriate title, like Mr. or Mrs., rather than professor? For some, this may sound all too familiar. In this episode, Kristina Mitchell, a political science professor from Texas Tech University, joins us to discuss her research exploring gender bias in student course evaluations.

Show Notes

  • Fox, R. L., & Lawless, J. L. (2010). If only they’d ask: Gender, recruitment, and political ambition. The Journal of Politics, 72(2), 310-326.
  • MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291-303.
  • Miller, Michelle (2018). “Forget Mentors — What We Really Need are Fans.” Chronicle of Higher Education. February 22, 2018.
  • Mitchell, Kristina (2018). “Student Evaluations Can’t Be Used to Assess Professors.” Salon. March 19, 2018.
  • Mitchell, Kristina (2017). “It’s a Dangerous Business, Being a Female Professor.” Chronicle of Higher Education. June 15, 2017.
  • Mitchell, Kristina M.W. and Jonathan Martin. “Gender Bias in Student Evaluations.” Forthcoming at PS: Political Science & Politics.

Transcript

Rebecca: Have you ever received comments in student evaluations that focus on your appearance, your personality, or your competence? Do students refer to you as “teacher” or by an inappropriate title, like Mr. or Mrs., rather than professor? For some, this may sound all too familiar. In this episode, we’ll discuss one study that explores bias in course evaluations.

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.
Today our guest is Kristina Mitchell, a faculty member and director of the online education program for the Political Science Department at Texas Tech. In addition to research in international trade and globalization, Kristina has been investigating bias in student evaluations, motherhood and academia, women in leadership and academia, among other teaching and learning subjects. Welcome Kristina.

Kristina: Thank you.

John: Today our teas are?

Kristina: Diet Coke. Yes, I’ve got a Diet Coke today.

[LAUGHTER]

Rebecca: At least you have something to drink. I have Prince of Wales tea.

John: …and I have pineapple ginger green tea.

John: Could you tell us a little bit about your instructional role at Texas Tech?

Kristina: Sure, so when I started at Texas Tech six years ago, I was just a Visiting Assistant Professor teaching a standard 2-2 load… so, two face-to-face courses every semester, but our department was struggling with some issues in making sure that we could address the need for general education courses. So in the state of Texas, every student graduating from a public university is required to take two semesters of government (we lovingly call it the “Political Science Professor Full Employment Act”), and so what ends up happening at a university like Texas Tech, with almost forty thousand students, is that we have about five thousand students every semester who need to take these courses… and, unless we’re going to teach them in the football stadium, it became really challenging to try and meet this demand. Students were struggling to even graduate on time, because they weren’t able to get into these courses. So, I was brought in and my role was to oversee an online program in which students would take their courses online asynchronously. They log in, complete the coursework on their own time (provided they meet the deadlines), and I’m in a supervisory role. My first semester doing this I was the instructor of record, I was managing all of the TAs, I was writing all the content, so I stayed really busy with that many students, working all by myself. But now we have a team of people: a co-instructor, two course assistants, and lots of graduate students. So, I just kind of sit at the top of the umbrella, if you will, and handle the high-level supervisory issues in these big courses.

John: Is it self-paced?

Kristina: It’s self-paced with deadlines, so the students can complete their work in the middle of the night, or in the daytime or whenever is most convenient for them, provided they meet the deadlines.

Rebecca: So, you’ve been working on some research on bias in faculty evaluations. What prompted this interest?

Kristina: What prompted this was that my co-instructor, a couple of years ago, was a PhD student here at Texas Tech University and he was helping instruct these courses and handle some of those five thousand students… and as we were just anecdotally discussing our experiences in interacting with the students, we were noticing that the kinds of emails he received were different. The kinds of things that students said or asked of him were different. They seemed to be a lot more likely to ask me for exceptions… to ask me to be sympathetic… to be understanding of the student’s situation… and he just didn’t really seem to find that to be the case. So of course, as political scientists, our initial thought was: “We could test this.” We could actually look and see if this stands up to some more rigorous empirical evaluation, and so that’s what made us decide to dig into this a little deeper.

John: …and you had a nice sized sample there.

Kristina: We did. Right now, we have about 5,000 students this semester. We looked at a set of those courses. We tried to choose the course sections that wouldn’t be characteristically different from the others. So, not the first one, and not the last one, because we thought maybe students who register first might be characteristically different from the students who register later. So, we chose a pretty good-sized sample out of our 5,000 students.

John: …and what did you find?

Kristina: So, we did our research in two parts. The first thing we looked at was the comments that we received. As I said, our anecdotal evidence really stemmed from the way students interacted with us and the way they talked to us. We wanted to be able to measure and do some content analysis of what the students said about us in their course evaluations. So, we looked at the formal, university-sponsored evaluation, where the students are asked to give a comment on their professors… and we looked at this for both the face-to-face courses that we teach and the online courses as well. And what we were looking for wasn’t whether they think he’s a good professor or a bad professor, because obviously, if we were teaching different courses, there’s not really a way to compare a stats course that I was teaching to a comparative Western Europe course that he was teaching. All we were looking at was: what are the themes? What kinds of things do they talk about when they’re talking about him versus talking about me? What kind of language do they use? And we also did the same thing for informal comments and evaluations. So, you have probably heard of the website “Rate My Professors”?

John: Yes.

[LAUGHTER]

Kristina: Yes, everyone’s heard of that website and none of us like it very much… and let me tell you, reading through my “Rate My Professors” comments was probably one of the worst experiences that I’ve had as a faculty member, but it was really enlightening in the sense of seeing what kinds of things they were saying about me… and the way they were talking about me versus the way they were talking about him. So again, maybe he’s just a better professor than I am… so we weren’t looking for positive or negative. We were just looking at the content themes… and the kinds of themes we looked at were: Does the student mention the professor’s personality? Do they say nice… or rude… or funny? Do they mention the professor’s appearance? Do they say ugly… pretty? Do they comment on what he or she is wearing? Do they talk about competence, like how well-qualified their professor is to teach this course? And how do they refer to their professor? Do they call their professor a teacher? Or do they call their professor, rightfully, a professor? And these are the categories where we really noticed some statistically significant differences. So we found that my male co-author was more likely to get comments that talked about his competence and his qualifications, and he was much more likely to be called professor… which is interesting because at the time he was a graduate student. So, he didn’t have a doctorate yet… he wouldn’t really technically be considered a professor… and on the other hand, when we looked at comments that students wrote about me, whether they were positive or negative… nice or mean comments… they talked about my personality. They talked about my appearance and they called me a teacher. So whether they were saying she’s a good teacher or a bad teacher… that’s how they chose to describe me.
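
The study itself relied on human coders applying a formal coding scheme; the sketch below only illustrates, in Python, the general idea of tallying theme mentions in comments, using a made-up keyword list and invented comments:

import re
from collections import Counter

# Very rough keyword-based tallying of comment themes. The keywords and
# comments here are invented for illustration, not the study's codebook.
THEMES = {
    "personality": ["nice", "rude", "funny", "mean"],
    "appearance":  ["pretty", "ugly", "wearing", "looks"],
    "competence":  ["qualified", "knowledgeable", "expert", "competent"],
    "title":       ["professor", "teacher"],
}

def code_comment(comment: str) -> Counter:
    """Count how many keywords from each theme appear in one comment."""
    words = re.findall(r"[a-z']+", comment.lower())
    counts = Counter()
    for theme, keywords in THEMES.items():
        counts[theme] += sum(words.count(k) for k in keywords)
    return counts

comments = [
    "She is such a nice teacher and always looks great.",
    "He is a very knowledgeable professor, clearly qualified.",
]
for c in comments:
    print(code_comment(c))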

Rebecca: That’s really fascinating. I also noticed, not just students having these conversations, but in the Chronicle article that you published, there was quite a discussion that followed related to this topic as well, and in that there were a number of comments where women responded with empathetic responses and also encouraged some strategies to deal with the issues. But then there was at least one very persistent person who kept saying things like: “males also are victimized.” How do we make these conversations more productive, and is there something about the anonymity of these environments that makes these comments more prevalent?

Kristina: I think that’s a really great question. I wish I had a full answer for you on how we could make conversations like this more productive. I definitely think that there’s a temptation for men who hear these experiences to almost take it personally… as though when I write this article, I’m telling men: “You have done something wrong…” when that’s not really the case… and, my co-author, as we were looking at these results about the comments and as we were reading each other’s comments, so we could code them for what kinds of themes we were observing… he was almost apologetic. He was like: “Wow, I haven’t done anything to deserve these different kinds of comments that I’m getting. You’re a perfectly nice woman, I don’t know why they’re saying things like this about you.” So, I think framing the conversation in terms of what steps can we take to help, because if I’m just talking about how terrible it is to get mean reviews on Rate My Professors, that’s not really giving a positive: “Here’s a thing that you can do to help me…” or “Here’s something that you can do to advocate for me.” So, I think a lot of times what men who are listening need… maybe they’re feeling helpless… maybe they’re feeling defensive…. What they need is a strategy. Something they can do going forward to help women who are experiencing these things.

Rebecca: I noticed that some of the comments in relationship to your Chronicle article indicated ways that minimize your authoritative role to avoid certain kinds of comments and I wonder if you had a response to that… and I think we don’t want to diminish our authoritative roles as faculty members, but I think that sometimes those are the strategies that we’re often encouraged to take.

Kristina: I agree. I definitely noticed that a lot of the response to how we can prevent this from happening got into “How can we shelter me from these students,” as opposed to “How can we teach these students to behave differently.” I definitely think the anonymous nature of student evaluation comments and Rate My Professors and internet comments in general plays a role. You definitely notice when you go to an internet comment section that anonymous comments tend to be the worst ones… and so the idea is that, with what we’re observing, it’s not that an anonymous platform causes people to behave in sexist ways, it’s that there’s underlying sexism and the anonymous nature of these platforms just gives us a way to observe the underlying sexism that was already there. So the important thing is not to take away my role as the person in charge. The important thing is to teach students, both men and women, that women are in positions of authority and that there’s a certain way to communicate professionally. Student evaluations can be helpful. I’ve had helpful comments that helped me restructure my course. So, it’s a way to practice engaging professionally and learning to work with women. My students are going to work for women and with women for the rest of their lives. They need to learn, as college students, how to go about doing that.

John: Do you have any suggestions on how we could encourage that? These attitudes are part of the culture, and in individual courses the impact we have is somewhat limited. What can we do to try to improve this?

Kristina: Well, I’ve definitely made the case previously to others on my campus and at other campuses that the sort of lip-service approach to compliance with things like Title IX isn’t enough. So, I don’t know if at your institution there’s some sort of online Title IX training, where you know…

John: Oh, yeah…

Kristina: …you watch a video

Rebecca: Yeah…

Kristina: … you watch a video… you click through the answers… it tells you: “Are you a mandatory reporter?” and “What should you do in this situation?” …and I think a lot of people don’t really take that very seriously; it’s just viewed as something to get through so that the university cannot be sued in the case that something happens. So, I don’t think that that’s enough. I think that more cultural changes and widespread buy-in are a lot more important than making sure everyone takes their Title IX training. So, in our work, I mentioned that we did this in two parts, and the second part just looked at the ordinal evaluations. The 1 to 5 scale, 5 being the best… rate how effective your professor is… and not only are students perhaps not very well qualified to evaluate pedagogical practices, but once again we found that even in these identical online courses, a man received higher ordinal evaluations than a woman did. And so what this tells me is that, in a campus culture, we should stop focusing on using student evaluations in promotion and tenure, because they’re biased against women… and we should stop encouraging students to write anonymous comments on their evaluations. We should either make them non-anonymous or we should eliminate the comment section altogether. Because if we’re providing a platform, it’s almost sanctioning this behavior. If we’re saying, “we value what you write in this comment,” then we’re almost telling students your sexist comment is okay and it’s valued and we’re going to read it… and that’s not a culture that’s going to foster a positive environment for women.
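
One reasonable way to compare 1-to-5 ordinal ratings between two instructors of identical online sections, without treating the scale as interval data, is a Mann-Whitney U test. A minimal Python sketch with invented scores (not the study’s actual data or method):

from scipy.stats import mannwhitneyu

# Hypothetical 1-5 evaluation scores from two identical online sections,
# one listed under a male instructor and one under a female instructor.
# The numbers are invented for illustration only.
male_instructor_scores   = [5, 4, 5, 4, 3, 5, 4, 5, 4, 4]
female_instructor_scores = [4, 3, 4, 4, 3, 5, 3, 4, 3, 4]

# Two-sided test of whether the two rating distributions differ.
stat, p_value = mannwhitneyu(
    male_instructor_scores,
    female_instructor_scores,
    alternative="two-sided",
)
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")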

John: Especially when the administration and department review committees use those evaluations as part of the promotion and tenure review process.

Kristina: Exactly. I mean, when I think about the prospect of my department chair or my Dean reading through all the comments that I had to read through when I did this research, I’m pretty sure that he would get an idea of who I am as a faculty member that, to me… maybe I’m biased… but to me, is not very consistent with what actually happens in my classroom.

Rebecca: It’s interesting… right, we talk about anonymity providing more of a platform for this to become present. But I’ve also had a number of colleagues share their own examples of hate speech and inappropriate sexual language when anonymity wasn’t a veil that they could hide behind, increasingly more recently. So I wonder if your research shows any increase in this behavior, and why?

Kristina: We haven’t really looked at this phenomenon over time. That’s just not something that we’ve been able to look at in our data, but I would like to continue to update this study. I definitely think that the current political climate is creating an atmosphere where perhaps people don’t feel that saying things that are racist or sexist is as shameful as they once perceived it to be. There’s definitely a big stigma against identifying yourself as a Nazi or even Nazi-adjacent, and while that stigma is still there, it seems to be lessening a little bit. I don’t know necessarily that I’ve seen an increase in the kinds of behavior I’m observing from my students, but I will say that a student… an undergraduate student… gave me his number on his final exam this last semester, like I was going to call him over the summer. So, it definitely happens in non-anonymous settings too.

John: Now there have been a lot of studies that have looked at the effect of gender on course evaluations, and all that I’ve seen so far find exactly the same type of results: that there’s a significant penalty for being female. One of those, if I remember correctly (and I think you referred to it in your paper), was a study of a large collection of online classes, where they changed the gender identity of the presenters randomly in different sections of the course, and they found very different types of responses and evaluations.

Kristina: Yes, that was definitely a study that… I hate to say we tried to emulate, because we were limited in what we could do in terms of manipulating the gender identity of the professor… but I think that their model is just one of the most airtight ways to test this. I agree, this is definitely something that’s been tested before. We’re not the first ones to come to this conclusion… I think our research design is really strong in terms of the identical nature of the online courses. At some point, when I was talking about this research with a woman in political science who’s a colleague of mine, the question came up: how many times do we have to publish this before people are going to just believe us… that it’s the case? The response tends to be: “Well, maybe women are just worse professors, or maybe there are some artifacts in the data that are causing this statistically significant difference.” I don’t know how many times we have to publish it before administrations and universities at large take notice… that this is a real phenomenon… that it’s not just a random artifact of one institution or one discipline.

John: It seems to be remarkably robust across studies. So, what could institutions do to get around this problem? You mentioned the problem with relying on these for review. Would peer evaluation be better, or might there even be a similar bias there?

Kristina: I definitely think peer evaluation is an alternative that’s often presented when we’re thinking of alternative ways to evaluate teaching effectiveness. Peer evaluation may be subject to the same biases. I don’t know that literature well enough off the top of my head, but I imagine that it could suffer from the same problems, in that faculty members who are women… faculty members of color… faculty members with thick accents, with English that’s difficult to understand… might still be dinged on their peer evaluations. Although we would hope that people who are trained in pedagogy and who’ve been teaching would be less subject to those biases. We could also think about self-evaluation. Faculty members can generate portfolios that highlight their own experiences, and say: here’s what I’m doing in the classroom that makes me a good teacher… here are the undergraduate research projects I’ve sponsored… here are the graduate students who’ve completed their doctoral degrees under my supervision… and that’s a way to let the faculty member take the lead in describing his or her own teaching. We could also just weight student evaluations. If we know that women receive 0.4 points lower on a five-point scale, then we could just bump them up by 0.4. None of these solutions are ideal. But I think some of the really sexist and misogynist problems, in terms of receiving commentary that is truly sexually objectifying female professors… that could be eliminated with almost any of these solutions: peer evaluation… removing anonymous comments… self-evaluation… and that’s really the piece that is the most dramatically effective in letting women experience higher education in the same way that men do.

Rebecca: So, obviously, if there’s this bias in evaluations then there’s likely to be the same bias within the classroom experience as well. We just don’t necessarily have an easy way of measuring that. But if you’re using teaching strategies that use dialogue and interactions with students rather than a “sage on the stage” methodology, I think that in some cases we make ourselves vulnerable, and that does help teaching and learning, because it helps our students understand that we’re not perfect experts in everything… that we have to ask questions and investigate and learn things too… and that can be really valuable for students to see. But we also want to make sure that we don’t undermine our own authority in the classroom either. Do you have any strategies or ideas around that kind of in-class issue?

Kristina: Yeah, I think that the bias against women continues to exist just in a standard face-to-face class. One time, when I was teaching a game theory course, I was writing an equation on the board and it was the last three minutes of class and we were trying to rush through the first-order conditions and all sorts of things… and I had written the equation wrong, and as soon as my students left the classroom I looked at it and I went, “oh my gosh, I’ve written that incorrectly,” and so the next day when they came back to class, I felt like I had two choices: we could either just move on and I could pretend like it never happened, or I could admit to them that I taught this wrong… I wrote this wrong. So I did. I told them “Rip out the page from yesterday’s notes because that formula is wrong,” and I rewrote it on the board… and I got a specific comment in my evaluation saying she doesn’t know what she’s talking about… that she got this thing wrong… and while I don’t have experimental evidence that says a man who does the same thing won’t get penalized in the same way, to me it very much wrapped into that idea that women are perceived as less qualified than men. So whether it’s because we’re referred to as teachers, or whether it’s because the student evaluations focused more on men’s competence, women are just seen as less likely to be qualified. How many times have you had a male TA and the students go up to the TA to ask questions about the course instead of you? So, I definitely think it’s difficult for women in the classroom to maintain that authority while still acknowledging that they don’t know everything about everything. No professor could. I mean, we all think we do, of course… So, I think owning the fact that there are things you don’t know is important, no matter what your gender is, but I also try to prime my students: I tell them about the research that I do. I tell them about the consistent studies in the literature showing that students are more likely to perceive and talk about women differently, because I hope that just making them aware that this is a potential issue might adjust their thinking. So that if they start thinking “wow, my professor doesn’t know what she’s talking about,” they might take a moment and think “would I feel the same way if my professor were a man?”

Rebecca: I think that’s an interesting strategy. We’ve found that a similar kind of priming of students about evidence-based practices in the classroom works really well… and gets students to think differently about things that they might be resistant to… So, I could see how that might work, but I wonder how often men do the same kind of priming on this particular topic.

Kristina: I don’t know. That would be an interesting next experiment to run: doing a treatment in two face-to-face classes, with a priming effect for a woman teaching a course versus a man, and seeing if it had any kind of different effect. I think a lot of times men perhaps aren’t even aware that these issues exist. So, talking about the way that women experience teaching college differently… if men aren’t having this conversation in their classroom, it’s probably not because they’re thinking, “oh man, I really hope my female colleagues get bad evaluations so that they don’t get tenure.” It’s probably just because they aren’t really thinking about this as an issue… because as a white man in higher education you very much look like what professors have looked like for hundreds of years… and so it’s just a different experience, and perhaps something that men aren’t thinking about… and that’s why getting the message out there is so important, because so many men want to help. They want to make things more equitable for women, and I think when they’re made aware of it, and given some strategies to overcome it, they will. I’ve definitely found a lot of support in a lot of areas in my discipline.

John: …and things like your Chronicle article are a good place to start too… just making this more visible more frequently and making it harder for people to ignore.

Kristina: I agree. I think being able to speak out is really important, and I know sometimes women don’t want to speak out, either because they’re not in a position where they can or because they’re fearing backlash from speaking out. So, I think it’s on those of us who are in positions where we can speak up. I think it falls on us to try and say these things out loud, so that women who can’t… their voices are still heard.

John: Going back to the issue of creating teaching portfolios for faculty… that’s a good solution. Might it help if faculty can document the achievement of learning outcomes and so forth, so that would free you from the potential of both student bias and perhaps peer bias? If you can show that your students are doing well compared to national norms or compared to others in the department, might that be a way of getting past some of these issues?

Kristina: I definitely think that’s a great place to start, especially in demonstrating what your strategies are to try and help your students achieve these learning outcomes. I always still worry about student level characteristics that are going to affect whether students can achieve learning outcomes or not. Students from disadvantaged backgrounds… students from underrepresented groups… students who don’t come to class or who don’t really care about being in class… these are all students who aren’t going to achieve the learning outcomes at the same rate as students who come to class… who are from privileged backgrounds… and so putting it on a professor alone to make sure students achieve those learning outcomes, still can suffer from some things that aren’t attributable to the professor’s behavior.

John: As long as that’s not correlated across sections, though, that should get swept out. As long as the classes are large enough to get reasonable power.

Kristina: Yeah, absolutely. I think it’s definitely time for more evaluation of how useful these measures are. I know there have been a lot of articles, an op-ed in the New York Times and I think one in Inside Higher Ed, really questioning some of these assessment metrics. So, I think the time is now to really dig into these and figure out what they’re really measuring.

Rebecca: You’ve also been studying bias related to race and language, can you talk a little bit about this research?

Kristina: Yes, so this is a piggyback project. After I finished the gender bias paper, what I really wanted to do was get into race, gender, and accented English, because I think it’s not only women who are suffering when we rely on student evaluations; it’s people of different racial and ethnic groups… it’s people whose English might be more difficult to understand. What we were able to do in this work is control for everything. So, we taught completely identical online courses, and I didn’t even allow the professors to interact with the students via email. I was… like Cyrano de Bergerac… writing all of their emails for them over a summer course, and so they were handling the course-level stuff, just not the student-facing things. They were teaching their online course, but they weren’t directly interacting with the students in a way that wasn’t controlled… and the faculty members recorded these welcome videos, which had their face… their English, whether it was accented or not… and I asked some students who weren’t enrolled in the course to identify whether these faculty members were minorities and what their gender was. Because what’s important isn’t necessarily how the faculty member identifies, as a minority or not, it’s whether the students perceive them as a minority… and even after controlling for all of that… controlling for everything… when everything was identical, I thought there was no way I was going to get any statistically significant results, and yet we did. We controlled even for the final grades in the course… we even controlled for how well students performed… and the only significant predictors for those ordinal evaluation scores were whether the professor was a woman and whether the professor was a minority. We didn’t see accented English come up as significant, probably because it’s an online course. They’re just not listening to the faculty members beyond these introductory welcome videos. But we did when we asked students to identify the gender and the race of the professors based on a picture. We asked the students: “Do you think you would have a difficult time understanding this person’s English?” and we found that with Asian faculty members, without even hearing them speak, students very much thought that they would have difficulty understanding their English… and then we have a faculty member here who has blonde hair and blue eyes but speaks with a very thick Hispanic accent, and of the students who looked at his picture… none of them perceived that they would have a difficult time understanding his English. So, I think there’s a lot of bias on the part of students just based on what their professors look like and how they sound.

John: Can you think of any ways of redesigning course evaluations to get around this? Would it help if the evaluations were focused more on the specific activities that were done in class… in terms of providing frequent feedback… in terms of giving students multiple opportunities for expression? My guess is it probably wouldn’t make much of a difference.

Kristina: I think, as of now, the way our course evaluations here at Texas Tech University look is that students are asked to rate their professors, you know, on a 1 to 5 on things like “did the professor provide adequate feedback?” and “was this course a valuable experience?” and “was the professor effective?” and that gives an opportunity for a lot of: “I’m going to give fives to this professor, but only fours to this professor,” even when the behaviors in class might not have been dramatically different. Now this is also speculation, but maybe if there was more of a “yes/no”: “Did the professor provide feedback?” “Were there different kinds of assignments?” “Was class valuable?” Maybe that would be a way to get rid of those small nuances. Like I said, when we did our study, the difference was .4 out of a five-point scale, and so these differences maybe aren’t substantively huge. Maybe it’s the difference between, you know, a 4 and a 4.5. Substantively, that’s not very different. So, maybe if we offered students just a “yes/no,” “Were these basic expectations satisfied?” maybe that could help, and that might be something that’s worth exploring. I definitely think either removing the comment section altogether, or providing some very specific how-to guidelines on what kinds of comments should be provided… I think that’s the way to address these open-ended “say whatever you want”… “are you mad?”… “are you trying to ask your professor out?” comments. Trying to eliminate those comments would be the best way to make evaluations more useful.

John: You’re also working on a study of women in academic leadership. What are you finding?

Kristina: A very famous political science study, done by a woman named Jennifer Lawless, looked at the reasons why women choose not to run for office. We know that women are underrepresented in elective office; you know, the country is over half women, but we’re definitely not seeing half of our legislative bodies filled with women. What the Lawless and Fox study finds is not that women can’t win when they run, it’s that women don’t perceive that they’re qualified to run at all. So, when you ask men, “do you think you’re qualified to run for office?” men are a lot more likely to say: “oh yeah, totally… I could be a Congressman,” whereas women, even with the same kind of qualifications, are less likely to perceive themselves as qualified. So, what my co-author Jared Perkins at Cal State Long Beach and I decided to do is see whether this phenomenon is the same in higher education leadership positions. One thing that’s often stated is that the best way to ensure that women are treated equally in higher education is just to put more women in positions of leadership… that we can do all the Title IX trainings in the world, but until more women are in positions of leadership, we’re not going to see real change… and we wanted to find out why we haven’t seen that. So, you know, 56 percent of college students right now are women, but when we’re looking at R1 institutions only about 25% of those university presidents are women, and the numbers can definitely get worse depending on what subset of universities you’re looking at. We did a very small pilot study of three different institutions across the country. We looked at an R1, an R2, and an R3 Carnegie classification institution. Our pilot study was small, but our initial findings seem to show that women are not being encouraged to hold these offices at the same rate as men are. So what we saw was that… we asked men “have you ever held an administrative position at a university?” About 60% of the men reported that they had, and about 27% of women reported that they had. We also asked “Did you ever apply for an administrative position?” …and only 21% of the men said that they had applied for an administrative position, while 27% of women said they had applied. Of course it could be that they misunderstood the question… that maybe they thought we meant “Did you apply and not get it?” …but we also think that there may be something to explore here: when women apply for these positions, they get them. There are qualified women ready to go and ready to apply, but men may be asked to take positions… encouraged to take positions… or appointed to positions where there might be opportunities to say: “There’s a qualified woman. Let’s ask her to serve in this position instead.”

John: That’s not an uncommon result. I know in studies of labor markets, starting salaries are often comparable, but women are less likely to be promoted, and some studies have suggested that one factor is that women are less likely to apply for higher-level positions. Actually, there’s even more evidence suggesting that women are less likely to apply for promotions, higher pay, etc., and that may be at least a common factor that we’re seeing in lots of areas.

Kristina: Absolutely. I definitely think that university administrations need to place a priority on encouraging women to apply for grants, awards, positions, and leadership, because there are plenty of qualified women out there; we just need to make sure that they’re actively being encouraged to take these roles.

Rebecca: Which leads us nicely to the motherhood penalty. I know you’re also doing some research in this area about being a mother in academia. Can you talk a little bit about how this impacts some of the other things that you’ve been looking at?

Kristina: Absolutely. The idea to study the motherhood penalty in academia stemmed from reading some of those “Rate My Professor” comments. At my institution, we didn’t have a maternity leave policy in place… so I came back to work two weeks after having my child and I brought him to work. My department was supportive. I just brought him into my office and worked with the baby for the whole semester… and it was difficult, it was definitely a challenge to try and do any kind of work with a baby in a sling in front of your chest… but one of my “Rate My Professor” evaluations from the semester that I had my son mentioned that I was on pregnancy leave the whole semester and was no help. And this offended me to my core, having been a woman who took two weeks of maternity leave before coming back to work… because I wasn’t on maternity leave the whole semester, and in addition… if I had been, what kind of reason is that to ding a professor on her evaluation? She birthed a human child and is having to take care of that child… that shouldn’t ever be something that comes up in a student comment about whether the professor was effective or not.

So what we want to look at are the ways in which women are penalized when they have children. Even just anecdotally… our data collection is very much in its initial stages on this project… but as we think through our anecdotal experiences, when departments schedule meetings at 3:30 or 4:00 p.m., if women are acting as the primary caregiver for their children (which they often are), this disadvantages them because they’re not able to be there. You have to choose whether to meet your child at the bus stop or go to the department meeting… and networking opportunities are often difficult for women to attend if they’re responsible for childcare. Conferences have explored the idea of having childcare available for parents because, a lot of times, new mothers are just not able to attend these academic conferences… which are an important part of networking in most disciplines… because they can’t get childcare. At the Southern Political Science Association meeting that I went to in January, a woman brought her baby and was on a panel with her baby. So, I think we’re making good strides in making sure mothers are included, but what we want to explore is whether student evaluations will reflect differences depending on whether students know that their professor is a mother or not. So, how would students react if in one class I just said I was cancelling office hours without giving a reason, and in another class I said it was because I had a sick child or had to take my child to an event? That’s where we’re going with this project, and we really hope to dig into the relationship between the motherhood penalty and student evaluations.

Rebecca: Given all of the research that you’re doing and the things that you’re looking at, how do we start to change the culture of institutions?

Kristina: Well, I think we’re moving in the right direction. Like I said, I see a lot more opportunities at conferences for childcare and for women to just bring their children. I see a lot of men who are standing up and saying, “hey, I can help, I’m in a position of power and I can help with this”… and, you know, without our male allies helping us… I mean, men had to give women the right to vote, we didn’t just get that on our own. So, we really count on allies to put us forward for awards. One thing, I think, that’s an important distinction that I learned about from a keynote speaker is the difference between mentoring and sponsoring. Mentoring is a great activity; we all need a mentor, someone we can go to for advice, someone we can ask for help, someone who can guide us through our professional lives. But what women really need is a sponsor, someone who will publicly advocate for a woman, whether that’s putting her in front of the Dean and saying, “Look at the great work she’s doing,” or writing a letter of recommendation saying, “This woman needs to be considered for this promotion or for this grant.” Sponsorship, I think, is the next step in making sure that women are supported. A mentor might advise a woman on whether she should miss that meeting or that networking opportunity to be with her child. A sponsor would email and say, “we need to change the time, because the women in our department can’t come; they have events where they need to be with their children.”

John: A similar article appeared in a Chronicle post in late February or maybe the first week in March by Michelle Miller, where she made a slightly different point. Mentoring is really good… and we need mentors, but she suggested that sometimes having fans would be helpful: people who would just help share information… so when you do something good, people who will post it on social networks and share it widely, in addition to the usual mentoring role. So, having those types of connections can be helpful, and certainly sponsors would be a good way of doing this.

Rebecca: I’ve been seeing the same kind of research and strategies being promoted in the tech industry, which I’m a part of as well. So, I think it’s a strategy that a lot of women are advocating for and their allies are advocating for it as well. So hopefully we’ll see more of that.

Kristina: I think the idea of fans and someone to just share your work is hugely important. I have to put in a plug for the amazing group: “Women Also Know Stuff.”

Rebecca: Awesome.

Kristina: It’s a political science specific website, but there are many offshoots in many different disciplines, and really it just gives you the chance, if you say, “I need to find somebody who knows something about international trade wars,” to go to this website and find a woman who knows something about this, so that you’re not stuck with the same faces… the same male faces… that are telling you about current events. So “Women Also Know Stuff” is a great place. They share all kinds of research and they provide a place where you can look for an expert in a field who is a woman. I promise they exist.

Rebecca: I’ve been using Twitter to do some of the same kind of collection. There might be topics that I teach where I’m not necessarily familiar with scholars who are not white men… and so I put a plug out like, “hey, I need information on this particular subject. Who are the people you turn to who are not white men?”

John: You just did that not too long ago.

Rebecca: Yeah, and, you know, I got a giant list and it was really helpful.

John: One thing that may help alleviate this a little bit is that we now have so many better tools for virtual participation. So, if there are events in departments that have to be later, there’s no reason why someone couldn’t participate virtually from home while taking care of a child, whether they’re male or female. Disproportionately, it tends to be women doing that, but you could be sitting there with a child on your lap, participating in the meeting, turning a microphone on and off depending on the noise level at home, and that should help… or at least it potentially offers a way of reducing this.

Rebecca: I know someone who did a workshop like that this winter.

John: Just this winter, Rebecca was doing some workshops where she had to be home with her daughter who wasn’t feeling well and she still came in, virtually, and gave the workshops and it worked really well.

Kristina: Yeah, I definitely think that’s a great way to make sure that everyone’s included, whether it’s because they’re mothers or fathers or just unavailable… and I think that’s where we look to sponsors… the department chairs… department leadership… to say, “This is how we’re going to include this person in this activity,” rather than it being left up to the woman herself to try and find a way to be included. We need people in positions of leadership to actively find ways to include people regardless of their family status or their gender.

Rebecca: This has been a really great discussion, some really helpful resources and great information to share with our colleagues across all the places that…

John: …everywhere that people happen to listen… and you’re doing some fascinating research and I’m going to keep following it as these things come out.

Rebecca: …and, of course, we always end asking what are you gonna do next. You have so many things already on the agenda but what’s next?

Kristina: So next up on my list is an article that’s currently under review that looks at the “leaky pipeline.” The leaky pipeline is a phenomenon in which women, like we were saying, start at the same position as men do, but then they fall out of the tenure track, they fall out of academia more generally… they end up with lower salaries and lower positions. So, we’re looking at what factors, what administrative responsibilities, might lead women to fall off the tenure track. We already know that women do a lot more service work and a lot more committee work than men do, so we’re specifically looking at some other administrative responsibilities that we think might contribute to that leaky pipeline.

Rebecca: Sounds great. Keep everyone posted when that comes out and we’ll share it out when it’s available.

Kristina: Thanks.

John: …and we will share in the show notes links to papers that you’ve published and working papers and anything else you’d like us to share related to this. Okay, well, thank you.

Kristina: Thank you.
[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts, and other materials on teaforteaching.com. Music by Michael Gary Brewer.

14. Microcredentials

In this episode, we discuss the growing role of microcredentials in higher education with Jill Pippin (Dean of Extended Learning at SUNY-Oswego), Nan Travers (Director of the Center for Leadership in Credentialing Learning at Empire State College), and Ken Lindblom (Dean of the School of Professional Development at the State University of New York at Stony Brook). Jill, Nan, and Ken are members of a State University of New York task force on microcredentials.

Transcript

Rebecca: Our guests today are: Jill Pippin, the Dean of Extended Learning at SUNY-Oswego; Nan Travers, the Director of the Center for Leadership in Credentialing Learning at Empire State College; and Ken Lindblom, the Dean of the School of Professional Development at the State University of New York at Stony Brook.

John: Welcome, everyone!

Nan: Thank you. Hello.

Jill: Thank you.

Ken: It’s good to be here.

John: Our teas today are:

Rebecca: Jasmine green tea.

John: Jill?

Jill: I actually don’t drink tea.

John: Oh… here we go again…. Okay, Nan?

Nan: I’m drinking Celestial Seasonings Bengal Spice.

John: …and, Ken?

Ken: My tea today is coffee.
[laughter]

John: …we get a lot of that…. Ok.
…and I have black raspberry green tea from Tea Republic.
So, today we’re going to be talking about microcredentials. Would someone like to tell us a little bit about what microcredentials are?

Ken: Sure, I’d be happy to tell you a bit about what microcredentials are. There are traditional microcredentials that most people know all about, such as certificates and minors… either credit or non-credit certificates. They’re pieces of larger degrees. But there are now new digital microcredentials that are having a bigger impact on the field, and that internet technology has allowed us to take more advantage of. So there are internet certificates and there are also digital badges, which are icons that can be put on a LinkedIn resume or shared through somebody’s website or on a Twitter feed… and they indicate that the earner of the microcredential has developed particular skills or abilities that will be useful in the workplace.

Nan: …and just to add to what Ken has said, with the open digital badges that are out there, they actually hold on to all of the information around the assessed learning… the different competencies that an individual has, and the ways in which they’ve assessed it. So if they’re used, let’s say, in the workplace, an employer could actually click into the badge and be able to see exactly how the person has been assessed… which gives a lot of information that a traditional transcript does not give, because it does have that background information in there.

John: Who can issue microcredentials? or who does issue microcredentials?

Jill: …really industry, colleges, various and sundry types of organizations.

Ken: Yeah, in fact, Jill’s right. There’s no real regulation of microcredentials right now. They can be given by any group that simply creates a microcredential, awards it to someone, and says what it is. So the microcredential’s value is really based on the reputation of the issuer.
Honestly, universities and colleges are pretty slow to get to this kind of technology, as we often are. So it’s new for us, but there are private companies that have been issuing them, and there have been individual instructors at the college level, and especially at the K-12 level, who have been using badge technology to motivate and assess student work for quite a few years… but for the university level, this is exciting new territory that we’re really jumping into now.

Jill: Yeah, microcredentials are shorter… they’re more flexible… and they’re very skill-based… and so they’re new for colleges, I think, in a lot of ways… maybe not so much for our non-credit side of the house… those that have been doing training programs and things that are very practical, skill-based pieces… but in terms of having ladders to credit and having credit courses seen through the lens of a smaller chunk of time, and of topic area and focus… I think that’s the real change, or the real difference, in microcredentialing from a traditional environment…

Nan: …and what’s really important here is that the demand for these, in many ways, is coming from industry, where they really need better signals as to what people know and what they can do, and as Jill just mentioned, they’re very skills-based. This enables somebody to get a good idea about what a potential employee is able to do. So the demand for microcredentials is really increasing as industry uses them more and more, and there are many different groups that are really focused on using either microcredentials generally or specifically badges (which are really a type of microcredential). There are some projects right now where whole cities have come together and developed microcredentials and badging systems to make sure that all people in the community have the ability to show those skills as they go for employment. There are also some companies that are starting to come out. For example, there’s a company called “Degreed,” which is degreed.com. It’s a company that enables people to get their skills assessed and microcredentialed, while at the same time working with companies… there are some big companies such as Bank of America… there are many other ones listed on their website… and they work with the companies to identify the different skills that people need… and then credential the people who are trying to apply with those skills… so that there’s a real matching. It becomes a competency-based employment matching system in many ways.

Ken: Some of the ways that badges have been useful are exactly what Nan and Jill are saying, that it’s come from the employers who are asking for specific information about what students will come to them with. We are also able to develop badges in concert with specific employers, if there’s particular training or education or sets of skills or abilities that they’d like their applicants to have… but there’s also another great advantage to microcredentials, particularly badges, that allow us to show the in-depth learning that goes on in classes. My other hat, other than Dean, is that I’m a Professor of English, and so in a lot of humanities courses the direct connection to skills isn’t as obvious to people as it is in an area say like teacher education. So what we can do with a badge is we can point out the specific skills that students are developing in a class on rhetorical theory, or on Shakespearean plays, or whatever. We can point out the analytical learning that they’re doing, the kind of critical thinking, the kind of communicative writing, so that those courses translate into the kind of skills that people are looking for… and of course, our students are picking those things up, but now we can make it more visible as a result of the technology of digital badges.

Jill: It’s an exciting time in higher education. I mean it really is, in terms of microcredentials, because higher ed has the opportunity to validate those credentials. A lot of them, as we said before, have been out there… non-credit skill-based smaller chunks of learning… but the idea of having them all kind of on the same playing field… and almost apples-to-apples in terms of validating learning outcomes… and making sure they’re part of a longer pathway toward higher education. It’s really exciting.

John: When someone sees a transcript and sees English 101 or English 373 or Eco 101, it doesn’t really tell the employer that much about what the students actually learned, but the microcredentials provide information about specific skills that would be relevant. Is there much evidence of the impact this has on employability or in terms of career placement?

Nan: There has been some work that is being done on that, and as I mentioned there are some companies that are even starting to get in the field because there is such a high demand for companies to be able to do competency-based hiring. There’s an initiative that the Lumina Foundation has been funding called Connecting Credentials and, in that initiative, they’ve been looking at microcredentials as a piece of that. That initiative has brought together many different businesses, organizations, and higher education together at the table to really discuss ways in which credentials can better serve all of those different sectors… and so some of the work that they have been working on and that can be viewed at connectingcredentials.org has really been looking at some of the impact of microcredentials on employability.

John: Based on that, I would think, that when colleges are coming up with microcredential programs, it might be useful to work with businesses and to get feedback from businesses on what types of skills they’re looking for… for guidance or some help in designing microcredential programs?

Jill: Absolutely.

Ken: Yeah. I can talk a little bit about some experience we’ve had at Stony Brook on that. We’ve been working with an organization called FREE which is Family Residences and Essential Enterprises. They’re a large agency that supports students, children, and adults with disabilities… and we worked with them to create several badges that align directly with their national standards and the certification needs of their employees. So now we’ve got a system where one of the things that their employees need is food literacy. If they’re running a house for people with disabilities, people who need assistance, they have to be able to demonstrate that they’re able to produce healthy nutritious meals… and so once they’ve gone through this training, which is specifically aligned with their curriculum, having earned the badge will demonstrate that the employee has developed that set of skills. We’ve also got one for them on leadership among their managers and we’re developing more… and the fact that we’ve developed that with the employer… and now the employer is actually contracting with us to deliver that instruction to their employees. We’ve done really well and we’ve issued well over a hundred badges to that agency in just about a year.

John: Excellent.

Nan: There’s also, as we think about it from an employability perspective… another important area that’s happening with microcredentials and badges in higher education, which is to really be looking at some of those more liberal arts kinds of skills: being able to be a good communicator… having good resiliency… these are also very important pieces that go into being a good worker… and so there are many institutions across the United States that are really looking at some of these broader skills. There’s also some work being done on the student services side, which is really looking at how students have been engaging and being involved within the institution. So, there are these other pieces that also help to build the whole person… how somebody is really involved in higher education… what they know… what they can do… the kinds of volunteer work… as well as the different kinds of things that they have engaged in while they are there: working in teams, doing different projects. So, there are lots of different ways of using those badges. There are also some institutions who are using these badges as a beginning point for students. For some people, it’s scary to start at higher ed again, and being able to take a little bit of a program, a smaller program that actually has a credential at the end of it, is a really motivating thing. Students come away saying: “Well, I did that. I can do more…” and so it becomes a really good recruitment tool… but it also is a really good student support tool to help people start the path of education as well.

Ken: …and you know, Nan, that’s an important point too… and it works the other way for people who are in, let’s say, a master’s degree program… it’s not as though they learn nothing new until the very end when they’re issued the degree… they’re actually building skills and developing abilities all along the way. So, what the digital badge or a microcredential can do is make visible the learning that they’re doing along the way. After three or four courses, they’ve earned a credential that demonstrates that value. They don’t have to wait until they finish 10 or 11 courses.

John: So, it lets them have small goals along the way, and they’re able to achieve success, and perhaps help build a growth mindset for those students who might not have done that otherwise.

Ken: Yes.

Nan: Yes.

Ken: Well put, John.

John: How does this integrate with traditional courses? Are there badges that are offered… or a given badge might be offered by multiple courses? or do individual courses offer multiple badges or microcredentials?

Ken: It can go in lots of different ways. There are instructors who build badging into their own classes. Those aren’t really microcredentials the way we’re talking about them. We’re talking about microcredentials that are somewhere between a course and a degree. So, at Stony Brook, for example, we have what we call a university badge program, and in order for a university badge to exist, it must require between two and four courses, for a total of 6 to 12 credits. That’s the point at which students can earn a university badge at Stony Brook University. Those courses work together. So, for example, we have a badge in design thinking, and in order to earn that badge students must get at least a “B” in two courses that we have on design thinking. We also have a badge in employer-employee relations within our Human Resources program… and in order to earn that badge, there are three specific classes that students have to take and earn at least a B in each of those classes.

Nan: There is also another approach in terms of thinking about how microcredentials can intersect and interface with traditional credentials, the traditional degrees, and that’s through different forms of prior learning assessment. What we also see is that students come with licenses, certifications, different kinds of these smaller credentials that represent verifiable college-level learning… and through either an individualized portfolio assessment process or, at our institution, SUNY Empire State College, a process we call professional learning evaluations… we go in and evaluate training, licenses, and certifications, and those are evaluated for college credit. Those are then also integrated within the curriculum and treated as… really, transfer credit… advanced standing credit. So, students also have the ability to bring knowledge with them through the microcredentials… they’ve been verified by another organization, and then we re-verify that learning at a college level to make sure that it is valid learning for a degree… and then integrate it within the curriculum.

John: In Ken’s case, it sounds like the microcredential is more than a course; in other cases it might be roughly equivalent to a course… or might it sometimes be less than a course, where a course might provide individuals with specific skills, some of which they might also gain in other courses? Or is that less common?

Jill: You’re right, there’s a spectrum. So, for instance, if you look at it from a traditional standpoint, a technology course might already have an embedded microcredential in the form of OSHA training, for example. That’s a microcredential, in that particular example, and so we have the opportunity to look at the skill-based smaller chunks that may be very specific to an occupation or an employer’s need for someone to have those skills, and be able to put some framework around it so that it can be understood and communicated to an employer.

Ken: One of the exciting things about badging and microcredentials right now, which Jill alluded to earlier, is that there really isn’t any regulation regarding them yet. When you say a college degree, that has a standardized meaning, but when you say a microcredential or a digital badge, there’s no standardized meaning whatsoever. So what we’re doing is creating different versions of microcredentials, and the meaning of them is dependent on that specific situation. One of the things that’s exciting about being a university or a college is that we can really bring academic rigor to these, no matter how many skills and what level of learning the digital badge represents… you know, because it comes from a university, particularly a SUNY, it’s going to be a high-quality badge. But it’s incumbent upon the one who’s reading the badge to understand what that badge actually means, and depending on where it comes from, the size of the badge, and the number of skills and abilities aligned to it, the badge means different things. That’s why it’s so important that the badge includes the metadata: all that in-depth information that pops up when you click on the digital badge icon.

Nan: In addition, nationally, the IMS Global Learning Consortium has been developing standards, and hopefully there will be national standards around the data, how that’s reported, and being able to allow people to really understand and compare the attributes of the criteria and how it’s been assessed. So there’s a great deal of work being done at a national level to really think about how we can have some good standardization and guidelines around what we mean by certain things in digital badging. So I think that’s something to pay attention to in terms of what’s coming.

Ken: Yes, it’s an exciting space before the standardization has been done, because there’s a lot of innovative potential there, but as we standardize there will be more comparability and that will be easier to do. So, we may lose some of that innovation later, but we’ll just have to see. It’s very interesting to be at the beginning of a process like this, because degrees were really kind of finalized at the end of the 19th century, and now at the beginning of the 21st century we’re reinventing that kind of work.

John: Now, earlier it was suggested that other groups, in industry and private firms, have been creating microcredentials. One of the advantages, I would think, that colleges and universities would have is a reputation for certifying skills. Does the reputation of colleges and universities give us a bit of an edge in creating microcredentials compared to industry?
Jill: One would hope. However, there are examples of all sorts of industry entities out there that are offering microcredentials; think of the coding academies that are prolific. They’re very skill-based, very specific to an industry and the industry’s needs; the employers understand what the outcome is from that training and they’re therefore able to value it, and the employee is able to communicate it very effectively. But where I think colleges and universities have an opportunity to really shine here is that this is where we have the experts. We have people who are very well versed and researched in their area of scholarship, and they’re able to really look at curriculum and validate it, and make sure that it is expressed in terms of college-level learning outcomes.

Nan: In addition, I think that higher ed has the opportunity to really integrate industry certifications with curriculum through a stacking process, bringing in those microcredentials from industry or having them right within the higher ed curriculum, and then being able to roll that in and build it into the curriculum. I can imagine, as we evolve higher education over the next decade or so, that as people graduate they’re graduating with a college degree and also with microcredentials, and together those are able to really indicate what a student knows and what a student can do, which can help the student a great deal more than just a degree that doesn’t really spell out the details of what somebody knows.

Rebecca: I’m curious whether or not there are any conversations happening with accreditation organizations about microcredentialing and how they might be involved in the conversation.

Nan: At this point there are conversations happening at the accreditation level. For example, every regional accreditation agency has policy around the assessment of learning, sometimes specifically around prior learning assessment, sometimes around transfer credit, and within those policies they’re really starting to look at how those learning pieces can come in. When it’s on the for-credit side, there really needs to be a demonstration by the institution that those microcredentials are meeting the same academic standards as the courses. So using the accreditation standards and making sure that all policies and procedures are of the same quality and integrity ensures that it all fits together.

Ken: I think it’s not only an opportunity for universities to develop microcredentials, but I think it’s our responsibility to do so, because the idea of digital badges, for example, was popularized in the corporate sector before universities got on board, and they ran the gamut in terms of quality and value, and frankly there are some predatory institutions that award badges that may not have much value at all to students and yet can be quite costly. So I think it’s very incumbent upon the university to create valuable microcredentials that have real academic rigor and support behind them. In addition to that, some of these institutions were also using their badge programs to undercut the value of the degree and say, “Well, you don’t actually need a college degree with all that fluff, you just need the skills training that you’ll get from a badge.” And we know that a college degree delivers far more than just a set of discrete skills: it gives better ways of seeing the fuller world, of understanding the integration of knowledge, of being able to employ social skills along with technical ability, and digital badges at the university level allow us to make those connections more visible. But it also can help us prevent attacks against the university, which are sometimes made purely from a profiteering perspective.

Jill: If we can provide some validity and some academic integrity to the smaller microcredential world, then I think higher ed, as Ken says, has a responsibility to do so.

Nan: It also shows a shift in the role of higher education, where it becomes even more important that we take the lead in helping to integrate people’s skills and their knowledge and how that relates to work and life. In many ways, in the older model of higher ed, we had much more of a role of just delivering information and making sure people had information. Now I think our role has really shifted, where we need to take the leadership in the integration of knowledge and learning.

Rebecca: I’m hearing a lot of conversation focusing on skills and the lower levels of Bloom’s taxonomy, so it would be interesting to hear of examples at higher levels of thinking and working.

Ken: Well, Bloom’s taxonomy actually is a taxonomy of skills and domains of knowledge and abilities so that there are certainly skills involved with synthesis and evaluation, which are at the top of Bloom’s taxonomy. So digital badges can work with that. Digital badges… the skills can involve being able to examine a great deal of knowledge and solve specific problems in an industry, and these are the highest levels of application of knowledge and learning.

Nan: In higher ed they’re also being looked at both at the undergraduate and graduate level, so it’s not just that entry-level piece. Again, we keep talking about licenses and certifications as a type of microcredential, and there are many out there that you cannot acquire until you have reached certain levels of knowledge and abilities. I know we have focused a great deal of this conversation on being skills-based, but in industry they’re really talking about it more as competencies, and the definition of competencies is what you know and what you can do, so it’s both knowledge- and skills-based, not just skills-based.

Ken: In fact, one of the issues that some faculty have with microcredentials, particularly digital badges, is that they have a sense that they’re focused too heavily on utilitarian skills, and not focused heavily enough on the larger and higher levels of learning that Rebecca is talking about. So I think Nan’s bringing in the idea of competency-based learning is really very helpful that way.

John: So, basically those skills could be at any level.
What are some of the other concerns that faculty might have that might lead to some resistance to adopting microcredentials at a given institution?

Nan: So one of the areas that they may talk about is the concern of the integrity. The academic integrity of the microcredential, or of the badge. And what’s important is that each institution really look at their own process for reviewing microcredentials and improving them, especially if they are on the credit side and they’re going to be integrated within the curriculum. So they need to follow the same standards that any course will follow, and that should really help relieve that concern about academic integrity.

Ken: Yeah, in fact the SUNY microcredentials group, which all of us on this podcast are involved with, specifically points out that faculty governance has to be heavily involved in the creation of any digital badge or microcredential program. That’s the whole point of bringing the university level to this: that faculty governance, that academic input, is going to be behind every microcredential that we create. One of the other things that my faculty colleagues have had trouble with is the very name of digital badges; they think it sounds a little silly, a little juvenile. They always say, “Oh, well, this is just Boy Scouts and Girl Scouts,” and so to them it can feel a little silly. It actually doesn’t come from Boy Scouts and Girl Scouts. Digital badges come from gamification: motivational psychologists looked at why people were willing to do so many rote tasks in an online game, even though they weren’t being paid to do so and the tasks didn’t seem very exciting on their own. What they found is that people were willing to do that because they would earn a badge, or they would level up, or earn special privileges along the way, and that was very motivating for people. That’s where this technology really came from, and then we built more academic rigor into it. The metaphor that I like to use with my faculty colleagues was suggested to me by one of my English department colleagues, Peter Manning. He pointed out that in the medieval period in England, archers would learn different skills, and when they developed a new skill, they would be given a feather of a different color, and then that feather would be put in the cap. So literally a badge is like a feather in the cap, and when you see somebody coming with 8 or 10 feathers of different colors, this is going to be a formidable adversary. Just like that, people with a few digital badges from the SUNY system are gonna be formidable employees.

Jill: The other thing I’d like to jump in and say is that with the Girl Scout and Boy Scout badging system, if you really know what the badges represent, you know that there are very stringent rules, learning outcomes, and so on involved in attaining the badge. The badge is a way of just demarcating that they attained it. The quality is inherent in the group that’s setting up the equation by which you earn the badge.

John: So it’s still certifying skill.

Jill: It’s still certifying something and again the institution has the ability to determine what that something is, and to make sure that it is of quality.

John: Now, one other thing I was thinking is that if an institution instituted a badging system, it might actually force faculty to reflect a little bit on what types of skills they’re teaching in the class, and that could be an interesting part of a curriculum redesign process in a department, because we haven’t always used backwards design, where we thought about our learning objectives. Quite often faculty will say, “I’d like to teach a course in this because it’s really interesting to me,” but perhaps more focus on skills development in our regular curriculum would be a useful process in general.

Jill: I agree.

Ken: I think that’s a great idea, John. We haven’t used the badging system in my school that way yet, but I think it’s a great idea. And honestly, there are faculty who bristle at the notion that they’re teaching skills, and digital badging really strikes at the heart of that, in my perspective, elitist attitude about education. We do want to open up students’ minds, we do want to expose them to more of the aesthetic pleasures of life, but we also want to help students improve their own lives in material ways as well, and badging can help us make visible, and strengthen, the ways in which we do that in higher education. I think we should be very proud of that.

Nan: So again, one of the reasons I like to use the word competency is because it brings the knowledge and skills together. We’re talking about skills as though they are isolated away from the knowledge pieces, and you can’t have skill without knowledge. To develop good knowledge, you need certain skills, and so I think it’s important to really think about this not as two different things that are separated, as if we all of a sudden are going to be just skills-based, but rather that we’re developing people’s competencies to be highly educated people.

Jill: Very symbiotic, really, and I think this is also where you get at the idea of how non-credit and credit can work together. If you’re thinking about them in terms of the outcomes and developing your class in that way, one of those by itself could be something that’s non-credit, and then if you build them all together you get a course. Or you couple a couple of graduate courses together, and you get a credential that is something on the way to a graduate degree.

John: This brings us to the concept of stackable credentials, where some microcredentials are designed to be stackable, building towards higher-level credentials.

Ken: Really, a micro-credentialing system should always be stackable. That’s one of the bedrocks of the whole idea of it. So it’s not required that a student go beyond one microcredential, but microcredentials should always be applicable to some larger credential of some sort. So, for example, all of the university badges at Stony Brook University stack toward a master’s degree. And in fact we’ve tried to create what’s called a constellation of badges, so that students can wind their way to a master’s degree by using badges… or, on their way to a master’s degree, they can pick particular badges to help highlight specialties among electives that they can choose. So it’s a way for them to say, “Yes, I have a Master of Arts in Liberal Studies, and as part of that I have a particular specialization in financial literacy, or in teacher leadership, or an area such as that.” But yeah, microcredentials should always be able to stack to something larger. And if we do it right, eventually we’ll have a system that works really from the first… from high school really into retirement, because there can be lifelong learning involved in microcredentials as well. There’s always more to learn, so there should always be new microcredentials to earn.

Nan: I totally agree with Ken, and if we provide different microcredentials and don’t show how they stack and build a pathway, then we really have not helped our students. In many ways we have traditionally, historically, left it up to the individual to figure out how their bits and pieces of learning all fit together, and we kind of expect that they’ve got the ability to put it all together and apply it in many different ways. I think that the role microcredentials are really playing here is a way of helping us start to talk about these discrete pieces, and then also how they build together and stack, which gives the person the ability to think about how it fits into the whole. I think what microcredentials are doing is opening up higher education in a way to really be thinking about how to better serve our students, and give them the ability to take what they know, package it in different ways, be able to apply it in many different ways, and be able to build toward their lifelong goals, seeing how it all fits together.

Rebecca: Just thought I’d follow up a little bit. I think a lot of the examples that we see are in tech or in business, and those are the ones that seem very concrete to many of us, but for those of you that have instituted some of these microcredentials already, how do they fit into a liberal arts context, which might not be so obvious to some folks?

Nan: There are actually quite a few examples of microcredentials and badges that are more on the liberal arts side. There have been some initiatives across the United States where different institutions have been developing what we can think of as 21st-century skills: communication, problem-solving, applying learning, being resilient. These are some of the kinds of badges that are starting to really evolve out of higher education, which really brings in those different pieces of a liberal arts education, and being able to lift that up and give students the ability to say, “I’ve got some good problem-solving skills, and here are some examples, and I can show it through this badge.” When we look at the research in terms of what employers need for the 21st-century employee, we’re really looking at a very strong liberal arts education that is then integrated into a workplace situation. So I’m seeing a lot more badges being grown in that liberal arts arena.

Ken: Yeah, at Stony Brook University, we have a number of badges that are in the liberal arts. So for example, we have a badge in diverse literatures. There may be people who wish to earn that just for personal enrichment, but it’s something that might be really interesting to English teachers as well, because by earning a badge in diverse literatures, which requires a minimum of three classes in different areas, different nationalities of literature, teachers will be able to go on to select pieces of literature more appropriate for diverse audiences. They’ll be able to explore greater world literatures because of the background that they’ve had in exploring different literatures in their classes. So, that’s just one example, but of our roughly 30 badges, about a third of them are in those humanities areas. That said, I will acknowledge that they are not anywhere near as popular as the more business-oriented and professionally oriented badges, where the link to skills simply seems more obvious. So I think that for the liberal studies… the liberal arts… the humanities badges… the connection is not quite as clear, and so there’s still a lot of potential there.

Jill: It’s so important for the employers and for the students themselves, but I think almost most importantly the employers, to understand what that means. They have to understand that you have a microcredential or a badge in problem solving. They have to have some kind of trust that it’s truly a skill that equates to their workplace situation, and that’s where the online systems, where you can actually delve into what’s behind the microcredential, are so important. You can really sit there and look at it, and verify the competencies and the skills that the individual has attained through earning this badge.

John: So the definition in the metadata is really important in establishing exactly what sort of skills are being certified. Now that brings us to another question. At this point, each institution that’s using badges is developing its own set of badges and competencies. Has there been any effort at trying to get some standardization and portability of this across institutions, or is it too early for that, or do you see it going in that direction at some point?

Ken: John, it certainly hasn’t happened yet, but I do know that the SUNY Board of Trustees at their last meeting started to consider developing working groups to do just what you’re saying. So it’s not so much to standardize what badges are, but rather to standardize reporting and explore ways to help badge earners to explain and demonstrate their badges to employers, and to other schools more easily. So I know that’s where the SUNY system is headed.

Nan: And if it is for credit, then it falls within transfer credit anyway. So really, if it has gone through the appropriate academic curriculum development processes, the governance processes, then it has the same rigor and therefore is very transferable through our policies on transfer. So really what we need to be doing is some good work around the non-credit side… work that really helps the transfer of non-credit learning.

Jill: And one way we can do that is by reinvigorating and breathing new life into a 1973 policy that SUNY has on the books for the awarding of CEUs (continuing education units). It has a recommendation and a process by which campuses can take a non-credit curriculum and send it up through a faculty expert, and it has certain guidelines about how you come up with an approval process and how many CEUs could be granted for such work. So there are some skeleton pieces for how SUNY may codify that moving forward; at this point there is not a rule about how to move forward with non-credit. In fact, I think SUNY, trying to be responsive to the emergent nature of this very concept, has not tried to come in and be too prescriptive yet.

John: On the other hand, students do sometimes receive microcredentials at multiple institutions. Let’s say they start at a community college, they move perhaps to Empire State, and maybe then to a four-year college or university. If they don’t finish and get a degree, they would still have some microcredentials that they could use when they go on the market, because many of them might use Credly or some other system where they can put them on their LinkedIn profile, and they still have that certification. If they just don’t get the degree, it shows them as not being a degree recipient, which actually seems to hurt people in the job market, but if they could establish that they have been acquiring skills along the way, maybe that might be helpful for students.

Nan: John, that is a really good point. In many ways, our degrees set up a system where anyone who steps out of a degree has nothing to show for it and therefore is at a disadvantage, and microcredentials can help demonstrate their progress and the competencies that they already have. So they can play a very important role in people’s lives when students do need to step in and out of higher education.

John: So where do you see microcredentials going in the future? How do you see this evolving?

Ken: It’s in such an amorphous space right now, it’s hard to imagine what it’s going to undulate into. A big part of what’s happening now is what Nan has talked about: an attempt to try to put some boundaries on this and bring some common definitions to bear on the technology and the idea of a microcredential, but I think it’s going to still expand. What it’ll do is increase partnerships among interesting groups. I think in a lot of these, the universities will be at the center of the partnership, but we’ll be bringing in many more student groups, industry partners, government groups, nonprofits. I think it’s going to increase the amount of communication dramatically, and that’s very exciting, because for too many years universities have fulfilled that stereotype of the ivory tower, and this is really breaking that down in some very productive ways.

Nan: And when we look at it from a national perspective, looking to see where some of the direction is going with groups such as IMS Global, Connecting Credentials, and others, what we’re really seeing is the prediction that every student would have a comprehensive digital student record that they would take with them. It becomes a digital portfolio, and their badges would be in there, their microcredentials, any degrees; they’d have the ability to transport themselves in many different directions, because all of that information would be there, and that digital student record would allow anybody to click in and see the metadata behind it, to know what competencies people have, how they were assessed, and what they really mean, so that there’s a real description of that. Again, the prediction is that students would be able to transfer from institution to institution. They’ll be able to stack up and build their degrees in ways that would really support the student in their whole life pathway. Ken just mentioned partnerships. I think that what we would see is a great deal of partnerships across institutions and with industry that really start to build these pathways that people can move along with their comprehensive digital student record.

Ken: Nan, can I ask you a question?

Nan: Yes.

Ken: So a few years ago, there was a lot of talk about what were termed co-curricular transcripts, which would be the kind of transcript that would include club membership, informal learning, non-credited learning. But it sounds like we may be getting beyond that in a really positive way, and that the very idea of a transcript is being transformed, so that those other kinds of learning will actually be transcripted in the same digital format. Am I reading that right? Do you think that’s where we’re going?

Nan: Yes, I do think that’s where we’re going, Ken. We’re right at the end of a multi-year, multi-institutional project that Lumina funded, looking at these comprehensive digital student records that go way beyond… they also capture things like clubs and other kinds of activities that students engage in, but really, they’re competency-based: they start to record those competencies, the data behind the competencies, and when students are in a club or doing other kinds of activities, the kinds of competencies that they’re gaining from those pieces are also being recorded. So it’s not just “You are a member of a club”; it’s “What did you really learn and what can you do because of that?” And so I think that we’re gonna see that evolving more and more over the next decade or so.

Ken: That’s great, thank you.

Jill: If I could add to the question about the role of microcredentials evolving: one of the things that I think is going to be happening, and part of why I’m so excited about microcredentials, is that I see this as having a nice connection for the non-credit side of the house of colleges and universities to the credit side. For so many years, non-credit has been connecting with, and trying to serve, business and industry in ways that really have been limited, and this really opens up the ability to connect and collaborate with credit expertise within the institution, to be able to create those true pathways from start to finish, from the smallest first step along that pathway all the way through. That’s really exciting, and I think… and I hope… that’s part of this overall discussion we’re having about microcredentials moving forward. In a lot of ways this is cyclical. We talked about the CEU policy of 1973. There have been these two sides of the house, as they say, as I’ve said a number of times today, and really we’re all about education and trying to help people learn things and be able to apply them to their jobs and their lives, and having that connection be that much more seamless and clear. I think that’s one of the most exciting things from my seat at the table.

John: Well, thank you all for joining us.

Nan: Thank you.

John: Look forward to hearing more.

Jill: Thanks for having us.

Ken: Pleasure to be here.

Nan: Take care everybody, bye bye.

Jill: Bye, guys.

Rebecca: Thank you.