Since its arrival in late November 2022, ChatGPT has been a popular topic of discussion in academic circles. In this episode, Betsy Barre joins us to discuss some of the ways in which generative AI tools such as ChatGPT can benefit faculty and students as well as some strategies that can be used to mitigate academic integrity concerns. Betsy is the Executive Director of the Center for Advancement of Teaching at Wake Forest University. In 2017 she won, with Justin Esarey, the Professional and Organizational Development Network in Higher Education’s Innovation Award for their Course Workload Estimator.
- Course Workload Estimator
- Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. arXiv preprint arXiv:2304.02819.
- Barre, B. (2021, April 14). Student workload. Tea for Teaching podcast, episode 183.
- Lang, J. M. (2013). Cheating lessons. Harvard University Press.
- Talbert, R., & Clark, D. (2023, forthcoming). Grading for growth: A guide to alternative grading practices that promote authentic learning and student engagement in higher education. Stylus Publishing.
John: Since its arrival in late November 2022, ChatGPT has been a popular topic of discussion in academic circles. In this episode, we discuss some of the ways in which generative AI tools such as ChatGPT can benefit faculty and students as well as some strategies that can be used to mitigate academic integrity concerns.
John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.
Rebecca: This podcast series is hosted by John Kane, an economist…
John: …and Rebecca Mushtare, a graphic designer…
Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.
Rebecca: Our guest today is Betsy Barre. Betsy is the Executive Director of the Center for Advancement of Teaching at Wake Forest University. In 2017 she won, with Justin Esarey, the Professional and Organizational Development Network in Higher Education’s Innovation Award for their Course Workload Estimator. Welcome back, Betsy.
Betsy: Thanks. It’s so good to be back.
John: We’re really happy to talk to you again. Today’s teas are… Betsy, are you drinking tea?
Betsy: Yeah, actually, I was really excited. I have chai spice tea. I was really excited when y’all invited me back because I’ve actually made a decision to stop drinking coffee as much as I have in the past. So I thought I’d be into all these exotic teas by the time that we recorded this, but nope, just a boring chai tea for today. But maybe next time when I come back, I’ll have some interesting teas for you.
Rebecca: We’ll make sure we ask you to level up next time, Betsy.
Rebecca: I have a cup of cacao tea with cinnamon.
John: And I have a pineapple ginger green tea today.
Betsy: You all are inspiring me. I love it.
Rebecca: Did you say pineapple, John?
Rebecca: Is this a new one?
John: No, it’s been in the CELT office for a while. It’s a new can of it. It’s a Republic of Tea tea.
Rebecca: I feel like it’s not one of your usual choices.
John: You said that the last time I had this. [LAUGHTER]
Rebecca: Yes, I just don’t associate this tea with you.
Betsy: You have a block.
John: I think I’ve only had it on the podcast two or three times.
Rebecca: Just a couple. [LAUGHTER] I just don’t remember. Clearly. Okay. [LAUGHTER] We’ll move on. We’ve invited you back, Betsy, to talk about ChatGPT. We know you’ve been writing about it, you’ve been speaking about it, and everyone’s concerned about it. [LAUGHTER] But maybe we can start first by talking about ways that faculty might use tools such as this one to be productive in our work.
Betsy: The way that I discovered ChatGPT, back in December, was that a colleague sent a screenshot of asking it to draft a syllabus. So my first encounter was actually with ChatGPT doing something that would help teachers. It’s also the case that I’m a teaching center director, so, of course, I’m thinking of these things, but it certainly shaped what was possible. And it blew my mind what it was capable of doing, in a great degree of detail, actually. Then about a month later, I was working on a curriculum project where I had to draft learning outcomes. That’s a task that we do in the teaching center a lot: getting them precisely right, and figuring out the different ways we can phrase them so that they’re actually measurable. So I just started playing around with its capabilities in terms of learning outcomes, and I saw that it was actually pretty impressive and generative there. Back then, when there was only GPT-3, I kept trying to see if it could do curriculum maps for us. And I really had to force it, and really think hard about my prompts, to get it to actually map outcomes to courses and curriculum. But when GPT-4 came out, I tried it again. I thought I was going to have to do it step by step, but this time I tried with a philosophy curriculum, and I said: I want 15 courses, I want them to have three to five outcomes each, students need to take a certain number of courses, we want them to hit each outcome three to five times, and just gave broad guidance. And it gave me a full curriculum as well as a map. And it was actually a very good philosophy curriculum. So it came up with the outcomes, it came up with the courses. I was floored, and it was my first request.
So there are many other things I think we can use ChatGPT for in terms of our teaching, but the curriculum was really, I think, one of the most complex things that I’ve seen it do.
John: I saw you do that. And so I experimented to have it develop a whole major program, with course descriptions and learning outcomes for the program, as well as for each individual course. And it did a remarkably good job of it.
Betsy: Yeah, I was amazed, because I didn’t really give it much of a prompt. And it had, within the philosophy major, comparative philosophy, issues of diversity, environmental philosophy. So it wasn’t the typical things that you would expect in a philosophy major; it was actually quite innovative in some ways. And I appreciated that. From the perspective of a teaching center consulting with administrators and faculty on curriculum, one of the things we often see is that the little blurbs in our handbooks or bulletins, the course descriptions that students see, are about 150 words. And often they’re very much teacher centered. So here’s the topic of the course: in this course, you will study this, this, and this. And one of the biggest challenges is how we turn those into outcomes. So I actually tried to do that too: I went through our bulletin, just threw in those 150-word descriptions of the topics, and had it develop three to five outcomes that were measurable. And it did pretty remarkably well. So I think that could be a useful starting place. Again, with a lot of this stuff, you don’t want to just take it as is, but it’s a useful starting place to help our faculty and our curriculum committees brainstorm. And in about a week, we are going to do a course design institute at Wake Forest. We do it every summer, and I’m really eager to have my colleague Kristi Verbeke, and my other colleague, Anita McCauley, experiment with using ChatGPT as part of the process in the course design institute, to see if it helps participants speed up or get more ideas as they’re generating various aspects of the design of their course, not just outcomes, but all the way down the line of the steps of course design.
Rebecca: Sometimes it can be really hard to get started, but as soon as you have a start, you know what you want.
Betsy: That’s right. And one response you might imagine to the fact that ChatGPT can draft learning outcomes is someone saying, “Well, that’s a clear sign that drafting learning outcomes is a pretty easy and meaningless task.” But what I have found, not just with my colleagues but for myself, is that when I have a really concrete learning outcome that’s measurable, it helps me design the course better; it’s just so much easier to think immediately of an assignment. But when it’s vague, and I don’t really have it clear in my mind, it’s so much harder to do all the other steps. And so even if we think it’s a somewhat trivial task, having ChatGPT help our colleagues come up with really clear learning outcomes will help speed up everything else. At least that’s my hypothesis, and we’re gonna see how that goes this summer.
Rebecca: We’ve played around a little bit with using those course descriptions that might appear in a catalog and turning them into marketing language, which is very different.
Betsy: Oh, that’s so interesting. And has it worked well?
Rebecca: Yeah, I think it’s definitely a starting place to move it to a different kind of language.
Betsy: So I’m teaching a first-year seminar in the spring, and I’m an ethicist, so I teach a course on sexual ethics. And the last time I taught it, I had a pretty conservative title. And it was interesting, I only had women in the class, cis women in the class, there were no men that had signed up, which had not been the case before when I’ve taught that class. So I actually used it to say, I want to attract a diverse group of 18 year olds, or 20 year olds, first-year students to this class, what are some titles or some quick summaries that I might use, and it was really fun to see some of the ideas it gave me. I ended up mashing a bunch together, again, taking pieces of it as an expert and pulling it together. But it certainly got me thinking in a way that would have taken me much longer if I didn’t have that help.
John: I used ChatGPT to create an ad for the Tea for Teaching podcast just to see how it would work and I posted it on Facebook, and I got quite a few responses from people saying, “I use this all the time in my work.”
John: This is a tool that’s out there, and it came on really quickly, but it’s still at a really early stage. And a lot of faculty are really concerned about issues of academic integrity, and so forth, and we can talk a little bit about those. But we have to prepare students for the world in which they’re living. And the world in which they’ll be living is one where AI tools are going to be ubiquitous. So, you do a lot of work with ethics. How can we help students learn how to use ChatGPT ethically, in college and beyond?
Betsy: Yeah, I think it’s actually a fabulous question. And one of the things I’ve often said, a lot of folks come to me to talk about ChatGPT in terms of teaching and learning. And of course, I have lots of thoughts about that. But I actually have been particularly consumed with reading about the much bigger questions about what AI means for humanity, to be quite frank. There are really dramatic and important questions that we need to think about. And in fact, I think sometimes what I have seen is sometimes people will think that that’s just hype: “Oh, that AI might take over the world, or that it might have these dramatic effects.” But if you actually talk to people who are experts in artificial intelligence, they’re really worried. And when the experts are really worried, it makes me very worried. So when we think about preparing our students, on the one hand, you can think about it as preparing them to use a tool that they need to use for their career, kind of like, “I need to teach them how to use Excel, or I need to teach them how to do basic productivity tools.” And that’s really important. Don’t get me wrong. In fact, like a lot of students don’t learn how to use Excel, and they don’t learn how to use these productivity tools. I have colleagues that I’m teaching these things to where I’m like, “Oh, you didn’t realize you could use this, it makes your life a lot easier.” But I think the bigger issues are preparing them to think about the potential implications to really understand what the tool is doing and what that means for how we understand human intelligence, how we think about consciousness. I mean, what it means for whether we want to have a world in which there are artificial intelligences that we might have moral obligations to. I mean, all sorts of huge, huge questions. Now, I don’t think all teachers need to address those issues. Just like all teachers probably don’t need to teach the technical stuff. 
But I certainly think when we are thinking about curriculum, it’s essential that our institutions help students think critically and philosophically about what artificial intelligence means. My guess is that some of our students, or faculty who haven’t played around with it a lot, think, “it’s just another thing like Grammarly, it’s not that big of a deal.” But we have found at Wake Forest that when we invite experts in, so linguists, or computer scientists, or machine learning folks, or ethicists, to come and talk about these tools and how they really work, folks have their eyes opened, and then realize, “Oh, this is a bigger deal than we thought it was and we might need to think about regulation [LAUGHTER] and what comes next.” So policy issues, not just ethics issues as well. So we don’t have an answer, except that we need to be talking about it. I have some ideas myself about what I think regulation should be, et cetera. But I do think our students shouldn’t just be seeing it as a tool to make their lives easier, although it is; it’s also important for them to think through the implications for society. And then, I guess, as another ethical piece, obviously, as we address the issue of academic honesty, we need to help our students think about their reasons for choosing to take liberties they were not authorized to take, and to think about their own character. And that’s going to have to be an approach that is somewhat different than just punishment, to help our students behave in ways that we wish them to.
Rebecca: I know that my colleagues and I have had some really interesting conversations around AI related to visual culture and creating visual items, because a lot of the libraries of images are copyright protected. And what does it mean when you’re taking something that has these legal protections and mash them up into something new? And then whose property is it? So they lead to really interesting conversations, and so you start thinking about it as a maker and your work being a part of like a library of something, and then also, when you’re using work that’s created, what does that mean? So one of the things that we’ve been talking about is there’s policy at all levels, like what’s our departmental policy around these things? And what kind of syllabus statements or things might we do to be consistent across courses?
Betsy: Yeah, and I think one of the most important things, and it’s gonna take some time, is for all of us to get clear on what we think our policies or our positions are going to be about what is appropriate and what’s not appropriate. And then once we do, to really communicate that to students, because I think they’re in a place right now where it’s all over the map. And many instructors aren’t actually sharing that with them. And so that gets us in a fuzzy situation where students assume, “Well, if this professor said this, then this professor would be okay with it.” And often it’s very different. So how do we at least have a conversation at the beginning of the semester with our students about what we think? And I actually think, as you point out, Rebecca, it’s a learning opportunity too for students to co-construct some of those positions. So let’s talk about the reasons why we might not want to just say it’s a free-for-all. We can talk about the value of art and the value of our work as artists, and what it means to just use somebody else’s work without acknowledging it. And maybe there are ways to acknowledge it. Unfortunately, one of the challenges of these image generators is that we don’t necessarily know what they’re drawing on. And so that’s one interesting regulatory question: could there be a way? I mean, I don’t know. It’s tough. One of the challenges with the science of this stuff is that often those who create it don’t know how it’s working. [LAUGHTER] And they will tell you that, that it’s a black box. So it’s hard to get in there and say, “Well, I will reveal it to you.” I think sometimes folks assume they’re not telling us because they want it to be proprietary. But often, they’re not telling us because they don’t actually know how the algorithm was developed or is doing its work. And so that’s a really tricky situation.
We did a series of workshops for our faculty this semester, and for one of them we brought in some experts, some copyright experts and some lawyers, who came in and talked about this. Really fascinating questions about copyright in our work that, again, are a great opportunity for students to learn about in a real, live way that they see happening.
John: Going back to the whole issue of copyright, in terms of human history, that whole concept is relatively new. When artists created new work, they started by copying the work of others, and they added their own twist. And in general, in pretty much all academic disciplines, the work that people are doing now is built on the work that others have done before. Is what ChatGPT is doing, in part, just the same type of thing that humans were doing, except that instead of spending years learning how to do it, and building on it slowly over centuries, it’s doing it in a few milliseconds?
Betsy: Yeah, and I’m not an expert on art, so I’m sure there are lots of experts, and Rebecca, you can jump in here as well. But I would say that there are certainly questions about harm; that’s the question we’re often asking in ethics: is it harming anyone to engage in this practice? And even if we don’t know we’re using somebody else’s work, we often are. Our ideas build on one another, et cetera. But of course, in a capitalist society where artists make money based on their work, there are new questions about how I preserve my livelihood in this particular context. Now, if there were a different context in which we supported our artists so that they didn’t need to make money off of their work, because we gave them a basic income, there might be a different question involved there. And so actually, I think the economic questions, so I’m tying you both together here, economics and art, this is great. The economic questions are really interesting: what does this mean for the future of labor? And how do we think about work in the future? Granted, it seems like it’s not going to be immediate, but there might be long-term implications for all of us that we need to rethink as well. So I don’t know, Rebecca, do you have thoughts about that?
Rebecca: Yeah. I mean, I think it’s interesting. John’s pointing to the printing press as when copyright came about, when it became easier and less time consuming to make copies of things. And then in 1998 copyright law changed again, because digital files made copying so easy.
Betsy: Napster. [LAUGHTER]
Rebecca: Yeah, copyright law hasn’t kept up with technology over time. So there are constantly these conversations about technology and creative work and what it means. I come from computer art, so generative art is a thing that we do, and that’s algorithm based, and you could argue that the machine is collaborating, in some ways; you write the algorithm. So I think there’s a trajectory; this has been happening for a long time. But it does raise a lot of interesting questions. And I think it’s really important for our students to grapple with, and really critically think about, and for us to critically think about together. In some ways, it’s nice because it gives us something to have a good constructive conversation around and really sort through together.
John: And then, on a maybe less positive note, in terms of the economics behind this, there have been a lot of stories of people taking on two or three jobs and using ChatGPT to do two or three times as much work as they did before. And one of the issues I’ve addressed with my students in my labor economics class is: if we have these tools that can do the work that college graduates used to do, will there still be a demand for college graduates to do these tasks? Most technological change in the past ended up replacing less skilled workers and provided a really nice return to those who had college degrees. But this type of innovation might very well hit a little more heavily on college graduates than most previous innovations.
Betsy: Yeah, it’s hard to actually talk about this, because I feel like every week it gets better and better. And so I could say, “Well, currently, here’s the set of skills; if we’re an expert, we can use it to sort of level up a bit.” As I shared with the curriculum mapping, I’m able to ask it things, and because I’m an expert, I’m able to do things with it and ask appropriate prompts that push it right in the direction I want it to go, and then I produce this wonderful outcome. Whereas sometimes, just to show my faculty, I tried putting in questions from, like, physics, and I couldn’t really assess whether the answer was appropriate or not, or how I had to push it. And so there’s part of me that thinks that there will be roles for expertise. But then again, how good will it get? Who knows? Will it eventually outcompete us? …which is somewhat of a worry… but I do think that, at least currently, there’s still a role for expertise to play. But you’re right, it’s going to just make us more efficient so that we can do more. And then the question is, if we’re doing more, will we need fewer workers? Or will we just be more productive? All sorts of interesting questions there. I will say, just a funny little story about this point about computer art and economics: our office is called the Center for the Advancement of Teaching, and our acronym is CAT. [LAUGHTER] So we make lots of jokes about that. And we have a serious logo. But all along we’ve been thinking, we should have a fun, funny cat logo, like with an actual cat; we just haven’t had the money for it. We have this wonderful designer we work with, she’s amazing, we got a quote, we’re going to do it if we have money left over at the end of the year. But I was just playing around with Midjourney, like, what can it do for me?
…and I mean, it’s not as good as hers will be, I don’t think, but it was pretty remarkable, especially since this is just a fun logo, not our serious logo, that I could just use it instead of paying somebody to do it. So this is the real challenge: maybe it’s only the most advanced things that we’re still going to rely on experts for, while some of the basic stuff that we would have paid for, we’re no longer going to pay for. And what does that mean, economically?
Rebecca: There are some existing sources prior to AI that were like people who didn’t have degrees or didn’t have a background in design, who would whip up something for five bucks. [LAUGHTER] And sometimes it looks like it was whipped up for five bucks.
Betsy: [LAUGHTER] You probably would think that about my Midjourney examples, I’m sure, Rebecca, I’m sure. Yeah. It’s so funny. So, this is the other thing too: students will eventually get better at this, like googling prompts. I went to this website that was like, “Here are these professional designers who design logos; ask it to do it in that person’s style.” Or, “Here’s some language that you can use for the prompt, like vector, or flat,” that sort of thing, or a mascot logo, which I didn’t even know was a thing. But I guess if I want a cat logo, it’s a mascot logo. Learning those things, which I never would have prompted on my own, actually helped me get something that was a little bit better. But it is fascinating. And I think that’s true of my experience with the tool in general: the more you use it, the more you learn what it’s capable of. And I do think that a lot of our faculty have not really spent much time experimenting with it, for a variety of reasons; they’re busy, et cetera. But I often encourage them to spend as much time as possible with it to really understand what it’s capable of doing. I was sharing with John, before we started this podcast, that plugins are now possible with ChatGPT. And the plugins take it to a whole next level, beyond even GPT-4. I’m still starting to play around with that. And I think it’s just something that, again, faculty need to be prepared for, because right now they’re saying, “Oh, it can’t cite things, or it can’t search the web.” Well, now it can. And what do we do about that? How do we keep up with it if we aren’t paying attention to it?
Rebecca: I think one of the things that you said earlier and alluded to when you were talking about the logo is needing expert language and expert concepts to be able to craft the prompts. So if students want to use tools like this in a productive way, they also have to have a certain level of expertise, presumably, to do a good job. If we want to encourage students to use a tool like this productively, what can we do to coach them? Do we want to coach students in this way?
Betsy: The question about whether we want to coach students is a really interesting one. There are folks, I think, who are anxious that if you teach them how to use it, they’ll use it in inappropriate ways. And my response to that is that we’re gonna have to address their desire to violate norms in a different way. That’s a different issue. That’s an issue of character, an issue of ethics, because I think they are likely to do it anyway. Now it’s true, if they don’t really know how to do it, we might find it easier to detect. But I’m guessing that, in a year, it’ll be harder and harder to detect, even if they don’t know how to do it. But as of right now, I would say I think it is useful to teach them. I wasn’t teaching in the spring, but I am teaching in the fall, and I’m really excited to think about it. I’m not going to totally redesign my course, some people have done that, I’m not going to make radical changes, but just to engage in the conversation with them about the ways that it can be used. And I think some of the most important, honestly, are using it to explain material that they didn’t understand, or using it to interpret my prompts if they’re weird, or my expectations. So helping the students use it to help them with their learning: “give me feedback on my work,” and I have a whole list of things that I would recommend, which is somewhat different than what we immediately think about when we think about students using ChatGPT. We think about them using it to write their papers, or to brainstorm, or give an outline. And all of those things might be great. But I actually think, as a person who’s interested in pedagogy, and particularly in student learning: there are only so many hours in the day, I have so many students, I can’t be with them one on one for 15 hours a week.
But if there’s a way in which they can have, like, a tutor, who’s there with them to say, “here’s what I think might be the explanation of that thing you didn’t understand,” or “let me help you interpret this paragraph and put it in the words of a sixth grader,” or “I’ll give you an analogy related to sports,” if that’s what you know [LAUGHTER], all of those things are amazing opportunities for our students to accelerate their learning. And that’s what we want. So it is true that these tools can be a threat to learning if students are just using them to write their papers in a literal copy-and-paste kind of way. But I also think there’s real opportunity to help them accelerate their learning. And again, you have to be careful, because it’s not perfect. And that’s your point about expertise. But I think, frankly, sometimes the advice they get from their friends, or if they Google it or go to YouTube, isn’t great either. So it doesn’t have to be perfect to be better than what they’re currently doing, is I guess what I would say. So I think that’s important. And then we could talk, if you wanted to, about how they might use it as a writing tool. I think it’s trickier there. And I’m sure you’ve heard this before in the previous ChatGPT podcast you did: it ultimately depends on your learning goals. What are your goals for your course? How do you want to use it? But I do think there are certainly legitimate ways in which we can help students use it to help them learn more.
John: And just following up a little bit on that. I’ve heard of a number of faculty who are encouraging students to use it to create tutorials on specific topics where they may have a weaker background. And that’s certainly a very good potential use of this.
Betsy: And I even have experimented with, like, “Okay, so give me feedback on this and then give me a learning plan,” like, “give me an improvement plan,” like, “what should I do? What steps should I take to get better at this skill?” …and it’ll actually give you pretty good plans. And it also can help them with time management, you know, “I have this many things I need to do; help me prioritize what I should work on next.” And that’s good for us, [LAUGHTER] but it’s also really good for our students who really struggle with time management, I think. So I really do think there are a number of things that students can use it for that I would feel comfortable with. But I also think it’s a really useful exercise for everyone listening, or for any instructor, to think through the possibilities. And you may decide these things are okay, these things aren’t okay, and it may differ for each class. But that is really important to do before you can actually communicate to your students what is and is not okay. And if you want them to actually do what you ask them to do, you have to have good reasons, I think. So you can’t just give them a rule, you should justify that rule. It’s a little bit like with kids: you have to say why, to hopefully bring them along on why they wouldn’t want to just use the tool to do the work for them.
John: One thing I’ve raised with some students, especially when I’ve talked to them about some of their uses of ChatGPT, is: if all they’re learning in the course is how to type a prompt into ChatGPT and copy and paste the result, what types of skills are they acquiring that will be useful when they leave? Because they could be replaced by anyone typing in those prompts.
Betsy: Right? What makes them unique? So one frame for this is that the things we’re teaching students in school are useful for them; we want them to learn so that they can be productive in the market. That’s one way we often frame the work that we’re doing. But I think this gives us an opportunity to open it up a little wider, to think about the purposes of education beyond just what is going to be useful in the market. And so I sometimes use the example of pottery… we’re coming back to art… I took a pottery class after COVID. It was the first thing I wanted to do after we were back in person, and so I took a wheel-throwing class, and I was absolutely terrible. But the idea that I would just cheat by bringing in something from Target that was made by a machine? No, there’s a reason I’m doing it. I want to actually learn the craft, and the craft has meaning in and of itself, apart from the fact that, yes, I could get a much better bowl [LAUGHTER] from Target than anything I will be able to create, but I’m really glad I’m doing it. And I think that’s really what’s gonna start happening: we’re going to start to see that there’s actually intrinsic value to some of these tasks, apart from their value for the, you know, “am I gonna make more money later,” et cetera; that we actually think that learning and thinking and the creativity that comes with producing is a value in itself. And that’s going to take a while to turn our students in that direction again, because they’re so market driven right now. But if things start changing in the market, and there are fewer and fewer jobs, they may be open to that conversation.
John: And maybe with the growth of alternative grading systems that try to shift the focus away from extrinsic rewards to intrinsic rewards, this could be quite complementary.
Betsy: Yeah, and I know that we don’t want to talk all about academic honesty, but it is a real question, and so I don’t want to dismiss the faculty who are anxious about this. I was sharing before the podcast was recorded as well that the news about Chegg losing so much money in the past few months was a real indicator to me that perhaps my optimism [LAUGHTER] about students not using it was ill-placed; in fact, that’s pretty good indirect evidence that a lot of students are now using ChatGPT to do what Chegg used to do for them. And so it’s not good for us, but it’s also not good for the students, and so we do need to think about it. But I do think there are two broad approaches. One is the punishment and enforcement approach, and the other is prevention. And I think focusing on prevention is really where we need to go. And so, referencing focusing on the intrinsic value of the work, maybe pulling away from those high-stakes graded assessments is a way to think about the motivational changes by which we prevent students from engaging in this. And I sometimes will use the example again of the pottery class: the idea that I would be motivated to cheat in a pottery class is absurd. Why would I cheat in that class, when I’m just doing it for my own sake? Now, if I was doing it so that I could get more money, or so I could get this grade so that I could get into something else I wanted, then I might be willing and tempted to cheat in the pottery class. We know that students cheat because of the grade; they don’t cheat because they think that’s the fastest way to learn. [LAUGHTER] They know they’re not learning, but they’re like, “I need this grade, because I need this degree so that I can get this job.” And so really bringing them back, decreasing that external stuff, and taking them back to the value of learning may be the only way we’re really going to tackle this. Now, it’s easier said than done.
We’re all in a system where grades matter, and students need to get degrees, and so it’s a longer conversation. But I do think revisiting some of the literature on cheating, even before ChatGPT existed, is going to be really valuable for all of us.
Rebecca: So you’ve talked about moving towards more low-stakes opportunities, and we’ve hinted towards alternative grading. What are some strategies that faculty can use to assess student learning? We’re concerned that we’re not able to see whether or not a student is learning if they’re using tools like that; those are the conversations that we’re having.
Betsy: Yeah, so there are things you can do to hopefully try to prevent it, but those may not always work. So I have a lot of ideas for how to prevent it. You can give them extrinsic reasons for not using the tool. For example, and this is just a simple one, let’s say you’re teaching a math class, and you have an in-person final, and you tell them, “you’re going to be preparing yourself for the in-person final by doing the homework yourself.” So the extrinsic reward of the in-person final will hopefully motivate the students to do the practice problems themselves, because they need to actually learn the thing that will get them that reward. But if they do use it anyway… again, lots of motivational things to talk about… first of all, how do we know? That’s a really interesting concern. And one interesting point that I’ve raised when I’ve had some conversations with folks is that when we talk about needing accurate assessments of student learning, the first assumption is that we need grades that are just, so that when we pass them on to jobs or to future courses, those grades are fair. But I actually think there’s a real learning reason why we want accurate assessments: if I can’t accurately assess your skills, you’re not going to learn. I actually want to know where you are really struggling, so that I can adapt my teaching to better help you learn. And if it looks like you’re doing great, I’m moving on, I’m moving on, and I’m not going to actually help you learn that thing. And so it’s really important for learning as well that we have really accurate assessments of their skills. And so if they are using it, how do we detect it? A tough one, but I think that’s where multiple measures come in. So you might imagine you have some in-class things that are happening, and you’re not just lecturing.
So this is a good reason for active learning as well: because you’re engaging your students in class, you actually hear them speak and explain things to you in class. And if they’re struggling there, and then all of a sudden they have this beautifully written paper, I think that’s a useful comparison. It’s no guarantee, because sometimes students need time to reflect… particularly English-as-a-second-language learners need time to build their arguments, etc., rather than working on the fly in class… but it is interesting evidence. And there are people talking about oral exams and other possibilities, or at least having conferences with the students about their work. So it’s not an exam, but just, “let’s meet to chat about this.” Now, of course, if you’re teaching a huge class, that’s not possible or available to you, but for those teaching smaller classes, it might be. So I think we’re gonna have to be creative. I have not found a silver bullet here; I have heard lots of great ideas of things that could be possible, but all of them have trade-offs, all of them come with downsides. And this is kind of my mantra all the time when I think about pedagogy issues: we should not get too absolutist about this. All of us are going to make different choices, and they’re all gonna have different downsides, and they’re all pretty reasonable, because right now there is no obvious solution of what we all should be doing. Some may choose to do oral exams, some may choose to do in-person exams, others may choose to say, “I’m not gonna pay as much attention as some others are.” And all of those things, I think, are reasonable. They’re just different approaches. And we should keep paying attention and be open to changing our minds if it seems like it’s not working. But I don’t feel like it helps us to be in one strong camp or the other when we think about the issues of academic honesty and ChatGPT.
So again, I don’t have an answer. Just lots of questions for you. But did you find anything that was useful over the past semester for you?
Rebecca: We teach really different things, so our approaches are going to be very different. In my classes, we’re doing creative work, and so historically, and we continue to do this, documenting your process is part of the project. And so we see a project evolve over time. And that may involve using AI as an input along the way, but documenting that is part of the process. And we do critiques, we show things in progress, and we talk about it, and there’s feedback that’s recorded at those moments. And then if we’re not responding to feedback, then we’re not growing. So we have some systematic ways of demonstrating the creative process and having to discuss and defend design or creative decisions, like “Why did you make that decision?” And if it’s just a random choice, then let’s be intentional about it, and now you need to maybe rethink that choice and make it more intentional. So those kinds of authentic learning opportunities really do push things in a direction where it’s a lot more difficult to use AI as the entire thing. [LAUGHTER] It might be a part of the process, but it wouldn’t be the final output.
Betsy: John, I want to let you respond too, but one of the things about what you’ve done, Rebecca… because that is certainly authentic learning and process-based work… is that, as you put it, it’s more difficult. But that doesn’t mean it’s impossible, and this is important: some people will point out that ChatGPT will describe its process too, or you can ask it to give you a process, etc. So one of the things I appreciate about your example is that there’s a lot going on in class… and it’s harder for folks who are doing asynchronous online courses… but if there are ways in which we actually see the process, that’s authentic too: we’re not just assessing a product, we’re literally there with them, watching the process, and I think we might be more likely to get something accurate, rather than just saying, “Okay, we want you to write about your process,” when we actually want to see the process as most important. So, John, what about you?
John: The classes I’m most worried about are my large class, which has up to 400 students in it, and an online class on the same topic with generally 40 to 50 students in it. And there are some challenges there. In the large class, one of the things I’ve done since the start of the pandemic is to shift all the assessment to online activities. I used to have a midterm and a final that were cumulative; they weren’t a tremendously large portion of the grade, and there were lots of low-stakes tests that they could do over and over again. But the validity of those, I suspect, is going to be a bit different now, because ChatGPT can do quite well with multiple-choice questions and short-answer questions and even algorithmic questions. So I’m probably going to bring back at least a midterm and final in person in my large class, just for the reason you described, the motivational thing… that you can practice these things as much as you want to learn them, but you’re going to be tested on them, and the greater your ability to recall and apply these concepts, the better you’ll be able to do. And I wish I didn’t have to do that, because there’s so much advantage in letting students do things over and over again until they master them. But I’ve looked at the times on some of the quizzes I used this semester, and students were turning them in [LAUGHTER] much more quickly than would have been possible had they not been relying on some sort of assistance.
Rebecca: Well, John, they’re just learning it so much better.
Betsy: Yes, that’s right. That’s right. That’s right.
John: And a nice side effect is you no longer get any spelling or grammatical errors.
Betsy: Yeah, you can read it faster as well.
John: Yeah, it makes it easier. [LAUGHTER]
Betsy: Yeah, no, and I do think, as much as I think that we should trust our students, and I don’t want to be overly alarmist, there’s a lot of evidence that our students are doing it, and even our students who would prefer not to do it, I think, are doing it because they perceive that all the other students are doing it. This was the same problem with academic honesty in the pandemic: you have some students who will never cheat, for whatever reason, [LAUGHTER] a small number… you have some students who will always cheat; they’ll find ways, they’ll pay somebody, whatever… and then there’s a whole bunch of students in the middle for whom the context really matters, and if they assume that all the other students are doing it, it puts them at a disadvantage not to do it. And we shouldn’t put our heads in the sand or assume that none of our students are doing it. We also shouldn’t assume that our students are horrible people because they’re doing it; we need to recognize they’re doing it and ask how we can help create the conditions where they would be motivated not to do it and not get themselves in trouble. And I do think your point, John, about the 400 students, about teaching an async online course, and even, Rebecca, some of your description of what you’re doing in class… one thing that occurs to me, and I don’t have any illusions that this is going to happen, but what these push against is our traditional model of how higher education happens. We assumed for the longest time that it was a lecture that took place; that’s why 400 didn’t matter versus 20. We also assumed that most of the learning would take place outside of class, because you would just come to a lecture and then go read the book and basically teach yourself. It’s kind of this old-school model where the professor is just there to give you information, and you’re going to teach yourself before the exams.
And I can imagine a world in which, if we really want to see process, we need to be with our students more than three hours a week, and we need fewer students in the course. But that would be such a radical change to the economic model of higher education; I can’t imagine how expensive that would be. But it is more similar to K through 12, and in some ways, I think K through 12 folks have an advantage here, because they’re with the students so much more that they can actually watch them, and homework is less important. One of the most important things I’ve told my students for years is that most of your learning will take place outside of class, and I emphasize that to them. And maybe now that creates a challenge, because we’re not with them, and so we can’t see whether they’re doing what we want them to do. So we really have to lean into the intrinsic motivation pieces: what is it that motivates them to want to do well? But with 400 students, they don’t know you really well, John, so they don’t feel guilty, like, “I have this relationship with my professor.” It is tough. And I guess I would say, on this point about academic honesty… and maybe we don’t have to keep talking about academic honesty… I’ve seen a lot of faculty feel really guilty about their approach to this on both sides: either they’re overly harsh, or they have ignored it too much, and they’re super anxious about whether they’ve taken the right approach. And I think the most important thing I would say to instructors is: this is really hard. Don’t beat yourself up about it; you’re trying your best. And none of us have a perfect system. If we did, we’d be able to sell it, and it would be great. [LAUGHTER] We don’t have a perfect system. Some of us are maybe leaning in one direction, and others are leaning in the other. And it’s really demoralizing when our students cheat, and that makes us depressed as well.
But also know that you’re not the only one; all of us have students who cheat, and that’s, unfortunately, part of the educational process. So do your best. [LAUGHTER] Pay attention. But don’t worry if it’s not a perfect outcome.
John: One of the things I was struggling with just recently, as I was grading exams, is how do I evaluate work that is clearly the student’s own versus work that probably wasn’t? I don’t want to penalize students for actually trying.
Betsy: I think some people say, “Ah, let’s just ignore it. It’s not my job to be a cop.” But I think the reason we do want to pay attention is an ethical one, which is that I don’t want the students who actually put forth the effort to be disadvantaged. So I think that’s the right impulse, John, yeah, for sure.
John: One thing I hope doesn’t happen, though, is that we move to proctored exams online, or to more use of high-stakes in-person exams and so forth, because that would go against so many other things that we’ve been arguing for in terms of equity and inclusion.
Betsy: Yeah, and also, these detectors. So TurnItIn, which most folks are using now because many schools have TurnItIn attached to their LMS… even before schools have had an opportunity to make a choice, it’s turned on by default, so your institution has to choose collectively to turn off the AI detector in TurnItIn. And so I think that’s important, too, to think about: are we just going to move to these detectors as a way of punishing students? And are they reliable enough? We don’t know. So there are all sorts of good equity questions. Actually, there’s a paper that I read in preprint about how the detectors seem to flag international students more than those who speak English as a native language, in part because their grammar is better [LAUGHTER] and more formulaic, because we’re teaching them the formula of how to write English. And we need to be mindful of how we balance these things, our equity concerns… and really they both are equity concerns, as you point out, John… so there are equity concerns about more high-stakes testing and in-person testing, etc. But if we just ignore it, there are also equity concerns for the students who do the work versus probably the privileged kids who are just going to say, “Whatever, I’m going to pay my $20 a month for GPT-4 and be able to get the better answer.” So how do we come up with some sort of solution that balances those? We probably won’t be able to find one… at least I don’t think there is one… where there aren’t some harms. And so it’s really about which harms we are willing to tolerate while we work for a better solution. And that’s the hard part of ethical reasoning: there’s usually not a solution where no one is harmed in these dilemmas.
Rebecca: One interesting thing you said was spending more time with your students, which I have the luxury of doing in a studio art space; we spend twice as much time with students for the same credit hours, which is valuable. We see process, we get to know our students really well, it’s a relational space [LAUGHTER] for forming relationships. And that really does change the dynamic. But that’s a really big time investment, with the cost of faculty in those spaces; and also, for students of certain backgrounds, or if they have to work, it becomes much more difficult to take those kinds of classes, because they’re offered at particular times, and they’re longer, they’re harder to schedule a job around, and that kind of thing. So there are equity issues in that space, too, as you’ve alluded to, about being in person.
Betsy: Yes, being in person, and then also the point about extra time. I was on this podcast talking about workload before, and one interesting thing related to workload is that we know, from the research on student learning, that time on task increases learning. And so sometimes, when we talk about making things accessible to students who are working 40 or 50 hours a week, what we’re really doing is reducing the work that’s required of them. Which is fine if it’s just about getting the degree, and I think you can make interesting ethical and policy arguments that that’s really important, because economically it allows them to advance, etc. But if it’s about learning, we actually shouldn’t be reducing the amount of time they’re spending, because they’re going to learn less. And so then there’s that tricky question: if we need students to spend 40 hours a week on school, what do we do? We’d have to compensate them so that they’re not having to work; there are much larger policy issues at stake here beyond just saying, well, we just have to expect them to buckle up and do it, to work 80 hours a week now instead. So these are all tough things. And in the context that we are in, where we don’t have those amazing policy-based governmental solutions in the United States, we have to make compromises. And we may say, “Well, maybe a little less work for the students who are working is the compromise we’re going to make for the greater good in this situation,” while recognizing that they’re probably learning less if they’re not putting in the 40 hours. But maybe with ChatGPT we can speed it up, I don’t know. [LAUGHTER] So, interesting things about efficiency.
Rebecca: So, we always wrap up by asking “What’s next?”
Betsy: So I’ll speak to what’s next related to artificial intelligence a little bit. In July I’m going to the Council of Graduate Schools summer workshop, the New Deans Institute. They’ve invited me out to share a little bit about ChatGPT, and I’m really excited to be thinking about what’s distinct about graduate education with respect to these tools; it kind of merges my interest in faculty use with thinking about student use for learning. In addition to that, we’ve been talking a lot about how to prepare for the fall, when the faculty come back. We were sort of… just like with COVID… flying by the seat of our pants in the spring, like, here are some things we’re gonna roll out for you; we’d like to be a little bit more intentional in the fall. And as I’ve alluded to in this session, I really do think focusing on motivation for students is going to be really important, instead of detection. And so we’re gonna do a faculty reading group; we’re gonna go back to Jim Lang’s Cheating Lessons, which still holds up pretty well, actually. And then we’re also going to read the Grading for Growth book on alternative grading that’s just coming out in July, which we’re super excited about. I’m teaching in the fall, as I said, so I’m excited to actually try some of these things out and see if my ideas are practical [LAUGHTER] or not. And I guess I’d also say, what’s next? I hope there’s some regulation. We didn’t get into a lot of details about this, because we were focusing on teaching and learning, but I know Sam Altman and Gary Marcus were before Congress, and I do hope that, unlike with social media, we actually see some movement toward regulation of the development of these tools. So what we have now… fine, let’s figure out how to use it.
But it’s really anxiety-inducing to me that these tools will emergently develop skills that nobody planned; it’ll just be, “oh, now it has this new skill.” And as we build these tools out more and more, we don’t actually know what we’re going to create. And I think [LAUGHTER] that is a little worrisome to me. And so I hope that what is next is more regulation of the tools.
John: We should note that we are recording this several weeks before it’s actually released. And we hope that at the time when this is released, [LAUGHTER] we haven’t reached that AI apocalypse that so many people have been worried about.
Betsy: That’s right. That’s good, John, thank you.
Rebecca: Well, thank you so much for joining us, Betsy. We always enjoy talking to you.
Betsy: Thanks for having me.
John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.
Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.