New technology is often seen as a threat to learning when first introduced in an educational setting. In this episode, Michelle Miller joins us to examine the question of when to stick with tools and methods that are familiar and when to investigate the possibilities of the future.
Michelle is a Professor of Psychological Sciences and President’s Distinguished Teaching Fellow at Northern Arizona University. She is the author of Minds Online: Teaching Effectively with Technology and Remembering and Forgetting in the Age of Technology: Teaching, Learning, and the Science of Memory in a Wired World. Michelle is also a frequent contributor of articles on teaching and learning in higher education to publications such as The Chronicle of Higher Education.
- Miller, Michelle D. (2023). “You’ve Checked Out the New AI Tools. Now What?” Chronicle of Higher Education. August 17.
- Geerling, W., Mateer, G. D., Wooten, J., & Damodaran, N. (2023). “ChatGPT Has Aced the Test of Understanding in College Economics: Now What?” The American Economist.
- Michelle Miller’s R3 Substack: Research, Resources, Reflection
John: New technology is often seen as a threat to learning when first introduced in an educational setting. In this episode, we examine the question of when to stick with tools and methods that are familiar and when to investigate the possibilities of the future.
John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.
Rebecca: This podcast series is hosted by John Kane, an economist…
John: …and Rebecca Mushtare, a graphic designer…
Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.
John: Our guest today is Michelle Miller. Michelle is a Professor of Psychological Sciences and President’s Distinguished Teaching Fellow at Northern Arizona University. She is the author of Minds Online: Teaching Effectively with Technology and Remembering and Forgetting in the Age of Technology: Teaching, Learning, and the Science of Memory in a Wired World. Michelle is also a frequent contributor of articles on teaching and learning in higher education to publications such as The Chronicle of Higher Education. Welcome back, Michelle.
Michelle: Hey, it’s great to be here.
Rebecca: Today’s teas are… Michelle, are you drinking tea?
Michelle: I’m actually still sticking with water. So it’s a healthy start so far for the day.
Rebecca: Sounds like a good plan.
John: I have ginger peach black tea today.
Rebecca: And I’ve got some Awake tea. We’re all starting the day [LAUGHTER].
John: So we’ve invited you here to discuss your August 17th Chronicle article on adapting to ChatGPT. You began that article by talking about your experience teaching a research methods course for the first time. Could you share that story? Because I think it’s a nice entree into this.
Michelle: Oh, thank you. I’m glad you agree. You never know when you’re sharing these kinds of personal experiences. But I will say this was triggered by my initial dawning awareness of the recent advances in AI tools, which we’re all talking about now. So initially, like probably a lot of people, I thought, well okay, it’s the latest thing and I don’t know how kind of attentive or concerned I should be about this. And as somebody who does write a lot about technology and education, I have a pretty high bar set for saying, “Oh wow, we actually kind of need to drop everything and look at this,” I’ve heard a lot of like, “Oh, this will change everything.” I know we all have. But as I started to get familiar with it, I thought “Oh my goodness, this really is a change” and it brought back that experience, which was from my very first assignment teaching the Research Methods in Psychology course at a, well, I’ll just say it was a small liberal arts institution, not my graduate institution. So I’m at this new place with this new group of students, very high expectations, and the research methods course… I think all disciplines have a course kind of like this, where we kind of go from, “Oh, we’re consuming and discussing research or scholarship in this area” to “Okay, how are we going to produce this and getting those skills.” So it is challenging, and one of the big challenges was and still is, in different forms, the statistical analysis. So you can’t really design a study and carry it out in psychological sciences without a working knowledge of what numbers are we going to be collecting, what kind of data (and it usually is quantitative data), and what’s our plan? What are we going to do with it once we have it, and getting all that statistical output for the first time and interpreting it, that is a big deal for psychology majors, it always is. So students are coming, probably pretty anxious, to this new class with a teacher they haven’t met before. 
This is my first time out as the instructor of record. And I prepared and prepared and prepared as we do. And one of the things that I worked on was, at the time, our methodology for analyzing quantitative data. We would use a statistics package and you had to feed it command line style input, it was basically like writing small programs to then hand over to the package. And you would have to define the data, you’d have to say, “Okay, here’s what’s in every column and every field of this file,” and there was a lot to it. And I was excited. Here’s all this knowledge I’m going to share with you. I had to work for years to figure out all my tricks of the trade for how to make these programs actually run. And so I’ve got my stack of overheads. I come in, and I have one of those flashbulb memories. I walked into the lab where we were going to be running the analysis portion, and I look over the students’ shoulders, and many of them have opened up and are starting to mess around with and play around with the newest version of this statistics package. And instead of these [LAUGHTER] screens with some commands, what am I looking at? I’m looking at spreadsheets [LAUGHTER]. So the data is going into these predefined boxes. There’s this big, pretty colorful interface with drop down menus… All the commands that I had to memorize [LAUGHTER], you can point and click, and I’m just looking at this and going, “Oh no, what do I do?” And part of my idea for this article was kind of going back and taking apart what that was like and where those reactions were coming from. And as I kind of put in a very condensed form in the article, I think it really was one part just purely sort of anxiety and maybe a little bit of loss and saying, “But I was going to share with you how to do these skills…” partly that “Oh no, what do I do now?” I’m a new instructor. 
I have to draft all this stuff, and then partly, yeah, curiosity, and saying, “Well, wait a minute, is this really going to do the same thing as how I was generating these commands? I know you’re still going to need that critical thinking and the top-level knowledge of ‘Okay, which menu item do you want?’ Is this going to be more trouble than it’s worth? Are students going to be running all the wrong analyses because it’s just so easy to do? And is it going to go away?” So all of that complex mix is, of course, not identical to, but I think pretty similar to how I felt… maybe how a lot of folks are feeling… about what is the role of this going to be in my teaching and in my field, and in scholarship in general going forward?
Rebecca: So in your article, you talk a lot about experimenting with AI tools to get started in thinking about how AI is related to your discipline. And we certainly have had lots of conversations with faculty about just getting in there and trying it out, just to see how tools like ChatGPT work, to become more familiar with how they might be integrated into their workflow. Can you share a little bit about what you’d recommend for faculty, or how you were thinking about [LAUGHTER] jumping in and experimenting and just getting started in this space?
Michelle: Well, I think perhaps, it also can start with a little bit of that reflection, and I think probably your listenership has a lot of very reflective faculty and instructors here. And I think that’s the great first step of “Alright now, if I’m feeling worried, or I’m feeling a very negative reaction, where’s that coming from and why?” But then, of course, yeah, when you get in and actually start using it, the way that I had to get in and start using my statistics package in a brand new way, then you do start to see, “Okay, well, what’s great, what’s concerning and not great, and what am I going to do with this in the future?” So experiment with the AI tools, and do so from a really specific perspective. When I started experimenting at first, I think I thrashed around and kind of wasted some time and energy initially, looking at some things that were not really education focused. So something that’s aimed at people who are, say, social media managers, and how this will affect their lives, is very different than me as a faculty member. So make sure you kind of narrow it down, and you’re a little planful about what you look at, what resources you’re going to tap into, and so on. And so that’s a good starting point. Now, here’s what I also noticed about my initial learning curve with this. So I decided to go with ChatGPT, myself, as the tool I wanted to get the most in depth with. So I did that, and I noticed really that, of course, like with any sort of transfer of learning situation, and so many of those things we do with our students, I was falling back into a kind of an old pattern. So my first impulse was really funny: it was just to ask it questions, because I think now that we’ve had several decades of Google under our belts and other kinds of search engines, we get into these AI tools and we treat them like search engines, which for many reasons they really are not. Now, this is not bad, you can certainly get some interesting answers.
But I think it’s good to really have at the front of your mind to kind of transition from simply asking questions to what these tools really shine with, which is following directions. I think one of the best little heuristics I’ve seen out there, just very general advice, is: role, goal, and instructions. So instead of coming in and saying “what is” or “find” or something like that, what perspective is it coming from? Is it acting as an expert? Is it acting as an editor? Is it going to role play the position of a college student? Tell it what you’re trying to accomplish, and then give it some instructions for what you want it to do. That’s a big kind of step that you can get to pretty quickly once you are experimenting. And that’s, I think, real important to do. So we have that. And of course, we also want to keep in mind that one of the big distinguishing factors as well is that these tools have memory: your session is going to unfold in a particular and unique way, depending not just on the prompts you give it, but on what you’ve already asked it before. So, once you’ve got those two things, you can start experimenting with it. And I do think coming at it from very specific perspectives is important, as I mentioned, because there’s so little super general advice, or discipline-independent advice, that I think is really going to be useful to you. And so doing that, I think a lot of us start in a sort of a low-stakes, tentative way with other interests we might have. So for example, one of the first things that I did to test it out myself was I had it work out a kind of a tedious little problem in knitting. So I had a knitting pattern, and there’s just a particular little counting algorithm for where to put increases in your pattern that always trips us up. And I was about to say, “Oh, I gotta go look this up,” then I thought, “You know what, I’m gonna see if ChatGPT can do this.” And it did that really well.
And by doing that in an area where I kind of knew what to expect, I could also push its parameters a little bit, make sure: is this plausible? Is what it’s given me… [LAUGHTER] does that map onto reality? And I can fact-check it a little bit better as I go along. So those are some things that I think that we can do, for those who really are starting from scratch or close to it right now.
John: You’re suggesting that faculty should think about how AI tools such as this… and there’s a growing number of them; it seems more are coming out almost every week… how they might be useful in your disciplines and in the types of things you’re preparing students for, because, as you suggested, it’s very different in different contexts. It might be very different if you’re teaching students to write than if you’re teaching them psychology or economics or math. And so it’s always tempting to prepare students the way we were prepared, for the world that we were entering into in our disciplines. And as you suggest in the article, we really need to prepare students for the world that they’re going to be entering. Should people be thinking about how it’s likely that students will be using these tools in the future, and then helping prepare them for that world?
Michelle: Yeah, that’s a really good way to start getting our arms around this. In kind of the thinking that I’ve been doing and kind of going through this over the last couple of months… that just absolutely keeps coming up as a recurring thing, that this is so big, complicated, and overwhelming, and means very different things for different people in different fields. Being able to kind of divide and break down that problem is so important. So, yeah, I do think that, and, for example, one of the very basic things that I’ve made some baby steps towards using myself is that ChatGPT is really good at kind of reformulating content that you give it, expanding or condensing it in particular. The other day, for example, I was really kind of working to shape a writing piece, and I had sort of a longer overview, and I needed to go back and kind of take it back down to basics and give myself some ideas as a writer. So I was not having it write any prose for me. But I said, “Okay, take what I wrote and turn it into bullet points,” and it did a great job at that. I had a request recently from somebody who was looking at some workshop content I had and said, “Oh, we really want to add on some questions where people can test their own understanding.” And you know, as the big retrieval practice [LAUGHTER] advocate and fan of all time, I’m like, “Oh, well, that’s a great idea. Oh, my goodness, and I’m gonna have to write this, I’m on a deadline.” And here too, I got, not a perfectly configured set of questions, but I got a really good starting point. So I was able to really quickly dump in some text and some content and say, “Write this many multiple choice and true/false questions.” And it did that really, really well. So those are two very elementary examples and some things that we can get in the habit of doing as faculty and as people who work with information and knowledge in general.
Rebecca: I’ve used ChatGPT quite often to get started on things too, and to generate design prompts, all kinds of things, and have it revise and add things and really get me to think through some things, and then I kind of do my own thing. But I use that as a good starting point to not have a blank page.
Michelle: Absolutely. Yeah, the blank page issue. And I think where we will need to develop our own practice is to say, “Okay, make sure we don’t conflate or accidentally commingle our work with ChatGPT’s,” as we figure out what those acceptable parameters are. But that reminds me too, I mean, we all have the arenas where we shine and the arenas where we have difficulty, again, as faculty, as working professionals. I know graphic design is your background. I’m terrible. I’m great at words, but it reminds me, one of the things that I kind of made myself go and experiment with was creating a graphic, just for my online course that’s running right now, which, for me, would typically be a kind of an ordeal of searching and trying to find something that was legitimate to use and a lot of clipart, and I had it generate something. Now, I do not advise putting in, like, “exciting psychology image in the style of Salvador Dali” [LAUGHTER] and seeing what comes out. He was not the right choice. It was quite terrifying. But after a lot of trial and error, I found something that was serviceable, and there too, it’s not like I need to develop those skills. If I did, I would go about that very, very differently. But it’s something that I need in the course of my work, but it’s a little outside of my real realm of expertise. So helpful there too. So yeah, the blank page… I think you really hit on something there.
John: Now did you use DALL-E or Midjourney or one of the other AI design tools to generate that image?
Michelle: Oh my goodness. Well, here again, [LAUGHTER] how far out of the proverbial comfort zone I was is really going to show. I did use DALL-E and I really wrestled with it for a couple of reasons. And so, as a non-graphic person, it did not come easily to me. Midjourney as well: if you’re not a Discord user, you’re really kind of fighting to figure out that interface at the same time, and those who are familiar with the cognitive load concept know that feeling of [LAUGHTER] “I’m trying to focus on this project, but all this other stuff is happening.” And then I had a good friend who’s a computer engineer and designs stained glass as a hobbyist [LAUGHTER] who kind of took my hand and said, “Okay, here’s some things you can do.” It actually came up with something a lot prettier, I have to say.
John: You had just mentioned two ways in which faculty could use this to summarize their work or to generate some questions. Not all faculty rely on retrieval practice in an optimal manner. Might this be something that students can use to fill in the gaps when they’re not getting enough retrieval practice or when they’re assigned more complex readings than they’re able to handle?
Michelle: Yeah, having the expertise is part of it, and I think we’re going to see a lot of developing understanding of that really cool tradeoff and handoff between our expertise and what the machine can do. I’m kicking around this idea as well, so I’m glad you brought that up. A nice side effect could be a new era for retrieval practice, since something of a limiting factor is getting quality prompts and questions for yourself. It’s funny, I’m taking a little prompt engineering course right now to try to build some of these skills and the facility with it. And one of the things they assigned was a big dense article [LAUGHTER] on prompt engineering, which was really great, but a little out of my field, and so I’m kind of going, “Well, did I get that?” And then I thought, I’d better take my own medicine here and say, “Well, what’s the best way to ensure that you do, and to find out if you don’t have a good grasp of what you were assigned?” And I was able to give it the content. I gave it, again, a role, a goal, and some instructions, and said, “Act as a tutor or a college professor; take this article, and give me five questions to test my knowledge.” And then I told it to evaluate my answers [LAUGHTER] and see whether they were correct. So that was about as meta as you can get, I think, in this area right now. So I’ve done it. And here again, it does a pretty good job, actually an excellent job. Do you want to use it for something super high stakes? Probably not, especially without taking that expert eye to it. But wow, here’s something, here’s content that was challenging to me personally. It did not come with built-in retrieval practice, or a live tutor to help me out with it. I read it, and I’m kind of going, “I don’t know, I don’t have a really confident feeling.” So I was able to run through that.
And so yeah, that could be one of the initial steps that we suggest to students as a potentially helpful and not terribly risky way of using these really powerful new tools.
Rebecca: One of the things that this conversation is reminding me of and some of the others that we’ve had about ChatGPT is we have to talk a little bit about how students might use it in an assignment or something, or how we might coach a student to use it. But we don’t often talk a lot about ways that students might just come to a tool like this, and how they’re just going to use it on their own without us having any [LAUGHTER] impact. I think, often we jump to conclusions that they’re gonna have a tool write a paper or whatever. What are some other ways that we can imagine or roleplay or experiment in the role of a student to see how a tool like this might impact our learning?
Michelle: So that is another kind of neat running theme that does come up, I think, with these AI tools: role playing. I mean, this is what it’s essentially doing. And so having us roleplay the position of a student, or having it evaluate our materials from the perspective of a student, I think, could be useful. But it kind of reminds me: let’s not have a total illusion of control over this. I think, as faculty, we have a very individualistic approach to our work. And I think that’s fine. But yeah, there’s a lot happening outside of the classroom that we should always have in mind. So just like with me on that hyper-planned first course that I was going to be teaching, it just happened, and students were already out there experimenting with “Oh, here’s how I can complete this basic statistics assignment with the assistance of this tool I’m going to teach myself.” So that could be going on, almost certainly is going on, out there in the world of students. And it’s another time to do something which I know I have to remind myself to do, which is ask students and really talk to them about it. Early on, I think there was a little bit of like, “Oh, this is a sort of a taboo or a secret, and I can’t talk to my professors about it,” and students didn’t want to broach it, and professors didn’t want to broach it with their students, because we don’t want to give anybody ideas or suggest some things are okay where they’re not. But I think we’re at a good point to just kind of level with our students and ask them, “How do you think we could bring this in?” I think next semester, I’m going to run maybe an extra credit assignment and say, “Oh, okay, we’re gonna have a contest; you get bragging rights, and maybe a few points. What is a good creative use of this tool in a way that relates to this class? Or can you create something, kind of a creative product or some kind of a demonstration, that in some way ties to the class?” And I’ve learned through experience, when I’m stumped and I don’t quite know where to go with a tool or a technique or a problem, take it to the students and see what they can do with it.
Rebecca: I can see this is a real opportunity to just ask the students how they are using it, and then take a look at the results that it’s creating. And then this is where we can provide some information about how expertise in a field [LAUGHTER] could actually make that result better, and about why the result is or isn’t what they think it is.
Michelle: Absolutely, and some of the best suggestions that I’ve seen out there… I’m kind of eagerly consuming suggestions across as many disciplines as I can. The most intriguing ones I’ve seen have a media literacy and critical thinking flair, telling students, “Okay, here’s something to elicit from the AI tool that you’re using, and then we, from our human and expert perspectives, are going to critique that and see how we could improve it.” So here too, critical thinking and those kinds of evaluation skills and abilities are some of the most prized things we want students to be getting in higher education. And simultaneously, for many different reasons, they are some of the hardest. So if we can bring that to bear on the problem, I think that can be a big benefit.
John: In the article, you suggested that faculty should consider introducing some AI based activities in their classes. Could you talk a little bit about some that you might be considering or that you might recommend to people?
Michelle: One of the things that I am going to be teaching, actually for the first time in a very long time, is a writing in psychology course, which has the added challenge of being fully online asynchronous, so that’s going to be coming up pretty soon for me. It’s still under construction, as I’m sure a lot of our activities are, and a lot of things that we’re thinking about in this very fluid and rapidly developing area. I think things like outlining, things like having ChatGPT suggest improvements, and finding ways for students to also kind of track their workflow with that. As I mentioned in the article, I do think that our different professional [LAUGHTER] lives should really lead the way: what work are we doing as faculty and as scholars in our particular areas? One of the things we’re going to have to be looking at is, alright, how do I manage any output that I got from this, knowing what belongs to it and what was generated by me? What have I already asked it? If there are particularly good prompts, how do I save those so I can reuse them? …another really good thing about interacting with the tools. But I’m kind of playing around with some different ideas about having students generate maybe structures or suggestions that they can work off of themselves, and having ChatGPT give them some feedback on what they’ve developed so far. So one of the things you can ask it to do is critique what you tell it, so [LAUGHTER] you can say, “Okay, improve on this.” And then you can repeat, you can keep iterating on that, and you can keep fine tuning in different areas. You can also have it improve on its own work. So once it makes a suggestion you can… I mean, it’s virtually infinite what you can tell it to go back and do: to refocus, expand, condense, add and delete, and so on. So that’s kind of what I am shaping right here.
I think too, at the Introduction to Psychology level, which is the other level at which I frequently teach, I’m not incorporating it quite yet. But I think students could have the opportunity or option to create a dialogue, an example, maybe even a short play or skit that it can produce to illustrate some concepts from the book. And there, ChatGPT is going to be filling in kind of all the specifics; the student won’t be doing it, but it’ll be up to them to say, “Well, what really stood out to me in this big, vast [LAUGHTER] landscape of introductory material that I think would be so cool to communicate to another person in a creative way?” And this can help out with that. I’m also going to be teaching my teaching practicum for graduate students coming up as well. And, of course, I want to incorporate just kind of the latest state-of-the-art information about it. But also, supposedly, and I haven’t tried it myself yet, it’s pretty good at structuring lesson plans. We don’t do formal lesson plans the way they’re done in K through 12 education, of course, but you could give it the basics of an idea and then have a plan that you’re going to take into a course, since one of the things they do in that course is produce plans for courses. And I gotta say, the formatting and exactly how that’s all going to be laid out on the page is not a critical skill; it’s not what they’re in the class to do. It’s to really develop their own teaching philosophy, knowledge, and the ability to put those into practice in a classroom. So if it can be an aid to that, great, and I also want them to know what the capabilities are if they haven’t experimented with them yet, so they can be very aware of that going into their first classes that they teach.
Rebecca: When you mentioned the example of a writing intensive class that’s fully asynchronous online, I immediately thought of all of the concerns [LAUGHTER] and barriers that faculty are really struggling with in highly writing-intensive spaces and fully online environments, especially around things like academic integrity. Can you talk a little bit about [LAUGHTER] some of the things that you’re thinking about as you’re working through how you’re gonna handle AI in that context?
Michelle: As I’ve been talking with other faculty right now, one of the things that I really settled on is the importance of keeping these kinds of threads of the conversation separate, and so I’m really glad we’re kind of piecing that out from everything [LAUGHTER] else. Because once again, it’s just too much to say, well, on the one hand, how do I prepare students and give them skills they might need in the future? How do I use it to enhance learning? And oh my gosh, is everybody just going to have AI complete their assignments? It’s kind of too much at once. But once we do piece that out… as you might pick up, I’m a little enthusiastic about some of the potential, but that does not mean I don’t think this is a pretty important concern. So I think we’re gonna see a lot of claims about “Oh, we’re going to AI-proof assignments,” and I think probably many of your listeners have already run across AI detection tools and the severe problems with those right now. So I think we have to just say right now, for practical purposes, no, you cannot really reliably detect AI-written material. I think that if you’re teaching online especially, we should all just say flat out that AI can take your exams. If you have really conventional exams, as I did before [LAUGHTER] this semester in some of my online courses, if you’ve got those, it can take those. And just to kind of drive home to folks: this is not just simple pattern matching, looking up your particular question that floated out into a database. No, it’s processing what you’re putting in. And it’s probably going to do pretty well at that. So for me, I’m kind of thinking about a lot of these, in my own mind, more as speed bumps. I can put speed bumps in the road, and I want to know what speed bumps are going to at least discourage students from just dumping the class work into ChatGPT.
To know what’s effective, it really helps to go in and know what it does well and what it really stumbles on; that will give you some hints about how to make it less attractive. And that’s kind of what I’m settling on right now myself, and what I’ve shared with students, as I’ve spoken with them really candidly to say I’m not trying to police or catch people; I am not under an illusion that I can just AI-proof everything. I want to remove obvious temptation. I want to make it so a student who otherwise is inclined to do the right thing, wants to have integrity, and wants to learn doesn’t go in feeling like, “Oh, I’m at a disadvantage if I don’t just do this, it’s sitting right there.” So creating those nudges away from it, I think, is important. And yeah, I took the step of taking out conventional exams from the online class I’m teaching right now. And I have been steadily de-emphasizing them with every single iteration. I think those who are into online course design might agree: well, maybe that was never really a good fit to begin with. That’s something that we developed for these face-to-face environments, and we just kind of transplanted it into that environment. But I sort of ripped off that [LAUGHTER] bandaid and said, “Okay, we’re just not going to do this.” I’ve put more into the other substance of the course, and I’ve put in other kinds of interactions. Because if I ask them Psychology 101 basic test questions, even if I write them fresh every time, it can answer those handily, it really can.
John: Recently, someone ran the micro and macro versions of the Test of Understanding in College Economics through ChatGPT. And I remember on the macro version ChatGPT-4 scored at the 99th percentile on this multiple choice test, which contains basically the type of questions that people would be putting on their regular tests. So it’s going to be a challenge, because many of the things we use to assess students’ learning can all be completed by ChatGPT. What types of activities are you thinking of using in that online class that will let you assess student learning without assessing ChatGPT’s or other AI tools’ ability to represent learning?
Michelle: Well, I’ll share one that’s pretty simple, but I was doing anyway for other reasons. So just to take one very simple example of something that we do in that class, I really got a big kick out of Kahoot!, especially during the heyday of fully hybrid teaching, where we were charged, as faculty, I know at my institution, with having a class that can run synchronously with in-person and remote students at the same time, and run [LAUGHTER] asynchronously for students who need to do their work at a different time phase. And that was a lot, and Kahoot! was a really good solution to that. It’s got a very K through 12 flavor to it, but most students just really take a shine to it anyway. And it is familiar to many of them from high school or previous classes right now. So it’s a quiz game; it runs a timed, gamified quiz. So students are answering test questions in these Kahoot!s that I set up. And because it has that flexibility, they have the option to play the quiz game sort of asynchronously on their own time, or we have those different live sessions that they can drop in on and play against each other and against me. So that’s all great. But here’s the thing: prior to ChatGPT, I said I don’t want to grade this on accuracy, which feels really weird, right, as a faculty member to say, well, here’s the test and your grade is not based on the points you earn for accuracy. It’s very timed, a little hiccup in the connectivity you have at home can alter your score, and I just didn’t like it. So what students do is, for their grade, they do a reflection. So I give the link to the Kahoot!, you play it, and then what you turn in to me is this really informal and hopefully very authentic reflection, saying, “Well, how did you do? What surprised you the most?
Were there particular questions that tripped you up?” And also kind of getting them to say, “Well, what are you going to do differently next time?” And for those who are big fans of teaching metacognition, I mean, that comes through loud and clear, I’m sure. So every single module they have this opportunity to come in and say, “Okay, here’s how I’m doing, and here’s what I’m finding challenging in the content.” Is it AI-proof? Absolutely not. No, it really isn’t. But it is, I think, at least at that tipping point where the contortions you’d have to go through to come up with something that is gonna pass the sniff test with me are substantial, and if I’ve now read thousands of these, I know what they tend to look like. And Kahoot!s are timed. I mean, could you really quickly transfer the questions out and type them in? Yes. It’s simply a speed bump. But the time limit would make that also a real challenge, to kind of toggle back and forth. So I feel good about having that in the class. And so it’s something, again, I’ve been developing for a while. I didn’t just come up with it, fortunately, the minute that ChatGPT really impinged on this class, but it was already in place. And I kind of was able to elevate that and have that be part of it. And so they’re doing that. I do a lot of collaborative annotation; I continue to be really happy with… I use Perusall. I know that’s not the only option there is, but it’s great. They’ve got an open-source textbook. And they’re in there commenting and playing off each other in the comments. So that is the kind of engagement I think that we need anyway, and it is less of a temptation. And so I feel like that’s probably better than having them try to quickly type out answers to, frankly, pretty generic definitions and so on that we have in that course. Some people are not going to be happy with that, but that’s really truly what I’m doing in that course instead.
John: Might this lead to a bit of a shift toward more people using ungrading techniques with those types of reflections, as a way of shifting the focus away from grading, which would encourage the use of ChatGPT or other tools, and toward a focus on learning, which might discourage these tools from being used inappropriately?
Michelle: What a fantastic connection. And you know what? When I recently led a discussion with faculty in my own department about this, that is actually something that came up over and over. It’s not ungrading exactly, because not everybody is even conversant with that concept, but there are these trends that have been going on for a while of saying, you know, is a timed multiple-choice test really what I need everything to hinge on in this online course? Ungrading, this idea of, I think, an emerging collaboration between student and teacher, which I think was also taking root through pandemic teaching, came to the forefront with me, of just saying, “Okay, we’re not going to just be able to keep running everything the way it’s traditionally been run,” which sometimes does have that underlying philosophy of, “Okay, I’m going to make you do things, and then you owe me this work, and I’m going to judge it, and you’re going to try to get the highest points with the least effort.” I mean, that whole dynamic, that is what I think powers this interest in ungrading, which is so exciting, and it’s maybe gonna be pushed ahead by this as well. Ultimately, the reason why you’re going to do these exercises I assign to you is because you want to develop these skills. You are here for a reason, and I am here to help you. So that is, I think, a real positive perspective we can bring to this, and I would love to see those two things wedded together. Especially now that tests can be taken by ChatGPT, we should take another look at all of our evaluation and the underlying philosophy that powers it.
John: One of the concerns about ChatGPT is that it sometimes makes mistakes, it will sometimes make stuff up, and it’s also not very good with citations. In many cases, it will just completely fabricate citations, where it will get the names of people who’ve done research in the field, but will grab other titles or make up other titles for their work. Might that be a way in which we could give students an assignment to use one of these tools to generate a paper or a summary on some topic, but then have them go out and verify the arguments made, look for the citations, and document it, just as a way of helping prepare them for a world where they have a tool which is really powerful, but is also sometimes going off in strange directions, so that they can develop their critical thinking skills more effectively?
Michelle: Yeah, looping back to that critical thinking idea: could this also be a real way to elevate what we’ve been doing and give us some new options in this really challenging and high-value area? And yes, this is another thing that I think faculty hopefully will discover and get a sense of as they experiment themselves. I think probably a lot of us have also experimented with just asking it about yourself. Ask it, what has Dr. Michelle Miller written? There’s a whole collaborator [LAUGHTER] I have never heard of, and when it goes off the rails, it goes. And it’s one thing to say really super vaguely, “Oh, AI may produce output that can’t be trusted.” That has that real “okay, caution, but not really” feel to it. It’s a whole other thing to actually sit with it and say, alright, have it generate these citations. They sure do look scholarly, don’t they? They really look right. Okay, now go check them out. And say, this came out of pure thin air, didn’t it? Or it was close, but it was way off in some particular way. So as in so many areas, to actually have the opportunity to say, okay, generate it and then look at it, and some of the issues are staring you right there in the face. So I think that we will see a lot of faculty coming up with really dynamic exercises that are finely tuned to their particular area. But yeah, when we talk about writing, all kinds of scholarly writing, and research in general, I think that’s going to be a very rich field for ideas. So I’m looking forward to seeing what students and faculty come up with there.
Rebecca: That’s a nice lead into the way that we always wrap up, Michelle, which is to ask: “what’s next?”
Michelle: Well, gosh, alright. So I’m continuing to write about and disseminate all kinds of exciting research findings. I’ve got my research-based Substack, and that’s still going pretty strong. Over the summer, I actually focused it on ChatGPT and AI for a couple of months. But now I’m back to more general topics in psychology, neuroscience, education, and technology. So, articles that pull in at least three out of four of those. I’ve got some other bigger writing projects that are still in the cooker, and so I’ll leave it at that with those. And I’m continuing to really develop what I know about and what I can do with ChatGPT. As I was monitoring this literature, it was really very clear that we are at a very, very early stage of scholarship and applied information that people can actually use. Those are all things that are very much on the horizon for my next couple of months.
Rebecca: Well, thank you so much, Michelle. We always enjoy talking with you, and it’s always good to think through and process this new world with others.
John: It certainly keeps things more interesting and exciting than just doing the same thing in the same way all the time. Well, thank you.
John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.
Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.
Ganesh: Editing assistance by Ganesh.