353. Beyond ChatGPT

Faculty concerns over student use of AI tools often focus on issues of academic integrity. In this episode, Marc Watkins joins us to discuss the impact that the use of AI tools may have on student skill development. Marc is the Assistant Director for Academic Innovation at the University of Mississippi, where he helped found and currently directs the AI Institute for Teachers.

Show Notes

Transcript

John: Faculty concerns over student use of AI tools often focus on issues of academic integrity. In this episode, we explore other impacts that the use of AI tools may have on student skill development.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

[MUSIC]

John: Our guest today is Marc Watkins. Marc is the Assistant Director for Academic Innovation at the University of Mississippi, where he helped found and currently directs the AI Institute for Teachers. Welcome back, Marc.

Marc: Thank you guys. I really appreciate it. I think this is my third time joining you all on the pod. This is great.

Rebecca: Today’s teas are:… Marc, are you drinking any tea?

Marc: I am. I’ve gotten really into some cold brew tea and this is cold brew Paris by Harney and Sons. So very good on a hot day.

John: We have some of the non-cold brewed version of that in our office because the Associate Director of the teaching center enjoys that Paris tea so we keep it stocked pretty regularly. It’s a good tea.

Rebecca: Yeah.

John: My tea today is a peppermint spearmint blend.

Rebecca: Sounds nice and refreshing. I have a Brodies Scottish afternoon tea, and it’s hot. And it’s like 95 here. And I’m not really sure why I’m drinking hot tea in this weather. But I am. [LAUGHTER]

John: Well, I am here in North Carolina, and it’s 90 degrees. So it’s much cooler down here in the south, which is kind of nice. [LAUGHTER] And actually, it’s 71 degrees in this room because the air conditioning is functioning nicely.

Rebecca: Yeah, my studio at home… the one room where the air doesn’t work. So hopefully I don’t melt in the next hour.

John: So we’ve invited you here today to discuss your recent Beyond ChatGPT Substack series on the impact of generative AI on student learning. Many faculty have expressed concerns about academic integrity issues, but the focus of your posts has been on how student use of AI tools might impact skill development. Your first post in this series discusses the impact of AI on student reading skills. You note that AI tools can quickly summarize readings, and that might cause students not to read as closely as they otherwise would. What are some of the benefits and also the potential harms that may result from student use of this capability?

Marc: When I first really got into exploring generative AI, really before ChatGPT was launched, there were a lot of developers working in this space, and everyone was playing around with OpenAI’s API access. And they were asking, “Hey, what would you like us to build?” And people would go on to Twitter, which is now X, and Discord and basically say, “I would like this tool and this tool.” And one of the things that came out of that was a reading assistant tool called Explainpaper. I think I first played around with it in the fall of 2022, and then deployed it with students in the spring of 2023. And the whole idea I had with this, and with that design, was to help students really plow through vast amounts of papers and texts. So students that have hidden disabilities, or announced disabilities, with reading and comprehension, and also students working on language acquisition… if you’re working in a second or third language, this type of tool can be really helpful. So I was really excited, and I deployed this with my students in my class thinking that it was going to help so many students with disabilities get through very challenging texts, which is why I set it up that way. And it did. The students initially reported that this was great. And I met with a lot of my students, and one of them said that she’d had dyslexia her whole life and never wanted to talk about it, because it was so hard, and this tool for her was a lifesaver. And so that was great. But then the other part of the class basically said, “Hey, I don’t have to read anything at all ever.” They didn’t have any issues; they were just going to offload those close reading skills. And so I had to take a step back and say, “But wait, that’s not what we actually want to happen. We want you to use this if you hit a pain point in your reading process, not to completely offload that.” So this really became a discovery on my part that AI can actually do that: it can generate summaries from vast amounts of text. There are some really interesting tools out there right now: Google’s NotebookLM, for example… you can upload, I think, 4 million words of your own text to it across 10 different documents, and it will summarize and synthesize that material for you. And like the other tool we played around with, Explainpaper, it can change the summary it’s generating for the actual document to your own reading level. So you could be reading a graduate-level research paper, and if you’d like it at an eighth-grade reading level, it will change the words and the language of it. So yeah, that could have helpful impacts on learning. It could also lead to a lot of de-skilling of those close reading skills we value so much. So that’s really how this started: thinking, “Man, this was such a wonderful tool. But oh my gosh, how is this actually being used? And how is this being marketed to students through social media?”

Rebecca: How do you balance some of these benefits and harms?

Marc: By banging my head against the wall and screaming silently into a jar of screams?

Rebecca: I knew it.

Marc: Yeah, the problem with the jar of screams is every time I open it, some of the screams I put in there before escape before the new ones can come in. That’s a great question. Every single one of these use cases we’re going to talk about today has benefits, but also this vast sort of terror of offloading the skills we would associate with them, skills that are crucial for learning. The most important thing to do at this stage is just to make sure that faculty are aware that this can happen and that this is a use case; that’s the first step. Then the next step is building some friction into the learning process that’s already there. So for reading, as an example, something that we usually do is assign close reading through annotation, whether that’s physical pen and paper, or digital annotation tools like Perusall or Hypothesis. That slows down the process if you’re using AI, and really focuses on learning. So when I say friction, it’s not a bad thing; in point of fact, friction is actually sort of crucial for learning. The one challenge we’re faced with is that most of these tools are providing, or advertising, a friction-free experience for students. And we want to say to them, “Well, you may not want to offload these skills entirely; you want to make sure that you do this carefully.” The main thing I would keep in mind is that I could never ban this tool even if I wanted to, because you don’t have any control over what students use to read outside of the three hours or so a week that you have in class with them. And it would be very beneficial for those students we discussed before, the ones with all those reading challenges, to use it. It’s just basically persuading them to use this in a way that’s helpful to them.

John: It reminds me a little bit of some of the discussions years back on the use of things like CliffsNotes for books and so forth, except now it’s sort of like CliffsNotes for anything.

Marc: Indeed, CliffsNotes on demand for anything you want, wherever you want it, however you want it, too. What I said to my students at the time, to kind of shock them a bit, was: “You know, what would your reaction be if I used this to read your essays, instead of going through and reading all of them, and just gave you a nice little generated summary?” And one of my students said, “Well, you can’t do that. That’s cheating, you’d be fired.” And I had to explain to them, no one even really knows that this exists yet. There are no rules. There’s no ethical framework. That’s something we’re going to have to come up with together, both faculty and students talking with each other about this.

Rebecca: It seems like the conversations you were having with students about how to strategically use a tool like this, in this particular way, were an important part of harnessing the learning out of the tool, rather than the quote-unquote cheating aspect of the tool.

Marc: Oh, absolutely. Yeah, the thing we’ve been seeing with every single generative tool that’s been released, whether it’s for text generation, or for augmenting reading, or some of the other use cases we’ll talk about here today, is that it does take a lot of time and effort on the part of the instructor to basically say, “Look, this is how this tool should be used to help you in this context in our classroom. How you use this outside of the classroom, that’s going to be on you. But for our intents and purposes here, I would like to advocate that you use this tool this way. And here are the reasons why.” Now, asking every educator to do that is just too much of a lift, right? Because most of our folks are just so burnt out with everything else that they have to do. They’re focused on their discipline-specific concerns. The fact that this technology exists isn’t really even on their radar, let alone how to actually deal with it. So part of what I’m trying to do with the series is obviously advocating for people to be aware of it. But the next step is going to be building some resources to show how they can use things like annotation and why that matters, and to give teachers, regardless of discipline, a very quick way to start using it in their classes.

Rebecca: Your second post in this series examines the effect of AI tools on student notetaking skills. Can you talk a little bit about what might be lost when students rely on AI tools for notetaking and how it might be beneficial for some students as well?

Marc: Yeah, so a lot of the tools are using AI-assisted speech-to-text software to record a lecture, like we might be using right now on this podcast and a lot of other people are too, and how they’re being marketed to students is to just sort of lean back, take a nap, and have the AI listen to the lecture for you. And some of the tools out there, I think one of them is called Turbolearn.ai, will also synthesize the material, create flashcards for you, create quizzes for you, too. So you don’t have to do that processing part within your mind, which is the key thing. So, notetaking matters. In fact, it can be an art form. I’m not saying that our students treat notetaking like an art form, but there are examples where it is somewhat of an artistic talent, because you as the listener are not just taking down verbatim what’s being said, you’re making these critical choices, these judgments, to record what matters and put it in context of what you think you need to know. And that’s an important part of learning something. One thing that I did as a student, when I was a freshman at a community college in Missouri, was volunteer as a note taker, and back then we did not have assistive technology. I had a pad of paper for my own notes, and I had a pad of paper with larger areas to write in for a student who was functionally blind. So I would take two sets of notes at the same time: one in writing that was my size, and one in larger writing that he could read with an assistive magnifying glass and the one good eye that he had. It was shocking to me that this is how they did it. The first day of class, they asked, “Do we have anyone who could help take notes?” And I was like, “Okay, sure, I can.” And that’s how that student got his notes. Obviously, having a system like this in place helps those students so much more than having a volunteer notetaker rushing between one set of notes and another. But using it in an effective way, one that’s critical and thoughtful about how you’re going to engage with it, is what makes it meaningful for learning, versus just hanging back, sitting down, and letting the AI listen to the lecture for you.

John: And another mixed aspect of it is the fact that it does create those flashcards and other things that could be used for retrieval practice. That aspect, I think, could benefit a lot of students. And not all students maintain a very high level of focus, so they sometimes miss things. So I think there could be some benefits for everyone, as long as they don’t completely lose this skill. And maybe by reminding them of that, it could be useful in the same sort of way you talked about with reading. But it’s a lot of things to remind students of. [LAUGHTER]

Marc: That’s a lot of things to remind them of, too. And keep in mind, it’s a lot of temptation to offload the skills of learning to something that supposedly promises to perform that skill, or that time-intensive task, for you. I would love to have this deployed at a giant conference somewhere. In fact, I’d love to go into the hallway of a conference and see all these transcripts come together at once on overhead displays, almost like you’re waiting for a flight at the airport, and you’re just seeing the actual material scroll through there. That would be exciting for me, to see what other people are talking about… maybe I want to pop into this session and see that as well. So I think there are tons of legitimate use cases for this. It’s just a question of what sort of boundaries we can put in place. And that’s true for almost all of this. I was talking to my wife last night, and I said, “When I was growing up, we had a go-kart that a few kids in our neighborhood shared, and it had a governor on the engine that made sure the go-kart wouldn’t go past 25 miles per hour, because otherwise you’d basically die; it’s a go-kart, it’s not really safe.” None of these tools or technologies have a governor reducing their ability to impact our lives. And that’s really what we need. The thing that’s shocking about all this is that these tools are being released to the public as a grand experiment. And there are no real guides or best practices about how you’re supposed to use this in your day-to-day life, let alone in education, in your teaching and learning.

Rebecca: I mean, anytime it feels like you can take a shortcut, it’s really tempting; the idea of turbo learning sounds amazing. I would love to learn really quickly. [LAUGHTER] But the reality is that learning doesn’t always happen quickly. [LAUGHTER] Learning happens from making mistakes and learning from those mistakes.

Marc: Absolutely. It happens through learning from errors, and it often happens through friction. We don’t want to remove that friction completely from the learning process.

John: In your third post in the series, you talk about automated feedback and how that may affect both students and faculty. How does the feedback generated by AI differ from human feedback, and what might be some of the consequences of relying on AI feedback?

Marc: Well, automated feedback is something that generative AI models, especially large language models, are very good at. They take an input based on the student’s writing or assessment, and then the instructor can use a prompt that they craft to kind of guide the actual output. The system I used in, I think, the spring of 2023, maybe the fall of 2023, was MyEssayFeedback, designed by Eric Kean. And he’s worked with Anna Mills in the past to try to make this as teacher friendly, as teacher centric, as possible, because I would get to design the prompts, and my students would then be able to get feedback from it. And I used this in conjunction with asynchronous peer review, because it’s an online class. So they got some human feedback, and they got some AI feedback. The thing that was kind of shocking to me was that the students really trusted the AI feedback because it’s very authoritative. It was very quick, and they liked that a lot. And so I did get into a situation where I wanted to talk with them a little bit more critically about that, because one of the things I was seeing behind the scenes is that a lot of the students kept on querying the system over and over again. They’d get one round of feedback from the tool, then they would try to go back and, I’m using air quotes right now that your audience can’t see, “fix” their essay. And my whole point is their writing is not broken. It doesn’t need to be fixed. And generative AI is always going to come up with something for you to work on in your essay. One student, I think, went back seven or eight times asking, “Is it right now? Is it perfect?” And the AI would always say something new. And she got very frustrated. [LAUGHTER] And I said, “I know you’re frustrated, but that’s how the AI is. It’s not smart, even though it sounds authoritative, even though it’s giving you some advice that is useful to you. It doesn’t know you as a writer; it doesn’t understand what you’re actually doing with this piece.” So that crucial piece of AI literacy, knowing what the limitations are, is a big one. I think also, when you start thinking about how these systems are being sold in terms of agentic AI, we’re not there yet. None of these systems are fully agentic; that involves both strategic reasoning and long-term planning. When you can see that being put in place with students and their feedback, that can become very, very scary in terms of faculty labor, because there are some examples of some quirky schools, I think the Health Academy in Austin is one of them, that have adopted AI to both teach and provide feedback for students. And I know there are some other examples, too, that talk about the AI feedback being better than human feedback in terms of accuracy. And that is something that we are going to have to contend with. But when I provide feedback for my students, I’m not doing it from an aggregate point of view, I’m not doing it to try to get to a baseline; I want to see my student as a human being and understand who that writer is, and what that means to them. That’s not to say that you can’t have a space for generative feedback; you just want to make sure you do so carefully and engage with it in a way that’s helpful for the students.

John: And might that interfere with students’ development of their own voice in their discipline?

Marc: I think so. And I think the question we don’t have an answer to yet is what happens when our students stop writing for each other or for us and start writing for a bot? What happens when they start writing for a robot? That’s probably going to change their voice and also maybe even some of their ideas and their outlook on the world too, in ways that I’m not all that comfortable with.

Rebecca: It does seem like there are real benefits to having that kind of feedback, especially for more functional things like grammar and spelling and consistency. But when you lose your voice, or you lose the fresh ways of saying things or seeing things in the world, [LAUGHTER] you lose the humanity of it… it just starts to dissipate. And to me, that’s terrifying.

Marc: It’s terrifying to me too, to say the least. And I think that’s where we go back into trying to find, where’s the line here? Where do we want to draw it? And no one’s doing it for us. We’re having to come up with this largely on our own in real time.

Rebecca: So, speaking of terrifying [LAUGHTER] and lines, you note that large language models are developing into large multimodal models that simulate voice, vision, expression, and emotion. Yikes. How might these changes affect learning? We’ve already started digging into that.

Marc: Yeah, so this is really about both Google’s demo, which I think is called Project Astra, and also OpenAI’s demo of GPT-4o, the “omni” model. Half of the GPT-4o model is now live for users; you can use it as a regular large language model. But the other half is live-streaming audio and video. And the demo used a voice called Sky that a few people, including Scarlett Johansson, said “that sounds an awful lot like me.” And even the CEO of OpenAI, Sam Altman, basically said that they were trying to go for that 2013 film Her, where Scarlett Johansson starred as the chatbot opposite Joaquin Phoenix. And basically, this is just the craziest thing I can ever think of. If OpenAI goes through with its promise, this will be freely available and rate limited for all users. And you can program the voice to be anything you want, whenever you want. So yes, it’s going to be gross and creepy; there are probably going to be people who want to date Sky or whoever it is. But even worse than that, there will probably be people who want to program this to be a political bot, who only want to learn from a liberal or conservative voice, who only want a voice that matches their values and their understanding of the world. If they don’t like having a female teacher, maybe they only want a male voice talking to them. Those are some really, really negative downstream effects of this that go back to how siloed we are right now with technology anyway: you can now basically create your own learning experience, or your own experience, and filter the entire world through it. We have no idea what that’s going to do to student learning. Sal Khan thinks this is going to be a revolution; he wrote about it in Brave New Words. I think it’s going to be the opposite of that. I think it’s going to be more chaotic. I think it’s also going to become, for us as teachers, very difficult to police in our classes, because to my understanding, this is a gigantic privacy issue. If you’re having a small group discussion, or anything else is going on, and one of your students activates this new multimodal feature in GPT-4o, with voices streaming and people talking to the chatbot and everything else, anything that goes into that is probably going to be part of its training data in some way, shape, or form. Even in Google’s Project Astra demo, part of the demo involved someone walking around a room in London; they stopped on a computer screen that was not the actual person’s own screen, it had some encryption code running, and the AI read the encryption out loud. It said what it was. So there are some big-time issues coming up here, too. And it’s all happening in real time. We don’t even have a chance to say, “Hey, I don’t really want this,” before “Oh, this has now been updated. I now have to contend with this live in my own life and in my classes.”

John: Going back to that issue of friction that you mentioned before, Robert Bjork and others have done a lot of work on desirable difficulties. And it seems like many of these new AI tools being marketed to students are designed to eliminate those desirable difficulties. What sort of impacts might that have in terms of student learning and long-term recall of concepts?

Marc: I love desirable difficulties too, and I think that’s a wonderful framing mechanism, outside of AI, to talk about this and why learning really matters. I think there will be downstream consequences if this is widely adopted by students, which I think a lot of tech developers want to happen, rather than the sort of sporadic usage we’re seeing right now… to be clear to your audience, not every student is adopting this, not everyone’s using this, most of them are really not aware of it. But if we do see widespread adoption of this, it is going to have a dramatic impact on the skills we associate with reading, the skills we associate with creating model citizens who are critical thinkers, ready to go out into the world and actually participate in it. If we really do get to a situation where they use these tools to offload learning, we’re kind of setting up our students to be uncritical thinkers. And I don’t think that’s a good idea.

Rebecca: Blah. [LAUGHTER] Can you transcribe that, John? [LAUGHTER]

John: I will. I had to do a couple of those. [LAUGHTER]

Marc: Well, blah is always a great version of that. Yeah. [LAUGHTER]

Rebecca: I only have sound effects.

John: One of the transcripts mentioned “horrified sound” as the transcript. [LAUGHTER]

Rebecca: I think that’s basically my entire life. These are the seeds of nightmares, all of them… seeds of giant nightmares.

Marc: Well, I think the thing that’s so weird about this, and this is kind of getting into the dystopian version of it, is that there are clearly good use cases for these tools if you can put some limitations on them. If the developers would just sort of pause and think, not just as someone wanting to make money, but as someone who would use this tool to actually learn or to be useful in their lives: what do they want to design in order to preserve that sort of human judgment, that human friction in learning that is going to be meaningful going forward?

Rebecca: Yeah, guardrails and ethics would be great.

Marc: Absolutely.

Rebecca: So a number of these tools are also designed to facilitate research. What’s the harm? What harm might there be when we rely on AI research tools more extensively, and get rid of that human judgment piece?

Marc: Yeah, one of the tools I used initially was Elicit, and Elicit’s probably the most impressive research tool that’s currently available. It is expensive to use, so it’s hard to just practice with it now; it was free initially. Consensus AI, I think, is the best ChatGPT plugin that you can use through the custom GPT store. But what Elicit does is go through hundreds, if not thousands, of research papers, and it automates the process of reading those papers for you, synthesizing that material, and giving you a sort of aggregate understanding of the state of knowledge, not just on your research question, but perhaps even in the field of research you’re trying to enter. So you’re basically offloading the process of research, which for a researcher takes hundreds upon hundreds of hours of dedicated work, and you’re trusting an algorithm that you can’t audit; you can’t really ask how it came up with its response. So yes, it’s a wonderful tool when it works and when it gives you an accurate response. Sometimes the responses are not accurate in the least. And if you haven’t read the material, it’s very difficult to pick up on where the machine is making an error. So yeah, there are a lot of issues if we just uncritically adopt this tool, versus putting some ground rules and ethics in place about how to use it to support your research and your learning as well. And I think that’s what we want to strive for with all of these. Research is just one level of that.

Rebecca: We all have our own individual assumptions that we make when we do things, many of which we’re not aware of. But when we’re relying on tools like this, there are many more layers of assumptions that we might not be aware of, built into the software, into the tools, or into the ways they do their analysis or synthesis, and that seems particularly concerning to me.

Marc: Yes, the sort of hidden biases that we’re not even aware of, and that I don’t think the developers are aware of either, are another layer we can go into and think about. I say layer because this really is like an onion: you peel back one layer, and there’s another layer there, and another, and another, and you’re just trying to get to the point where it’s not so rotten anymore. And it’s very difficult to do, because the way this has been shaped is to accelerate those human tasks as quickly as possible, to reduce as much friction as possible, so that you can just sit back and get a response as quickly as you can. And in a lot of ways, the marketing basically describes this as almost like magic. Well, it’s not magic, it’s just prediction, using massive amounts of compute to get you to that point. But there are some serious consequences, I think, for our learning if we just uncritically adopt that.

John: Going back a bit, though, to early in my career, I remember the days of card catalogs and indexes, where you had to read through a lot of material to find references. And then finding more recent work was almost impossible unless you happened to know of colleagues doing this work at some other institution, or you had access to the working papers of other institutions because of connections. The fact that we have electronic access to these files, that you don’t have to wait a few weeks for one to be mailed to you or go through interlibrary loan, and that we can do searches and get indexes, or at least abstracts, for these articles takes us a long way forward. One other thing: I do subscribe to Google Alerts for some of my popular papers. And occasionally, maybe once a month or so when I see some new ones, I’ll look at the article, and about half the time the person who cites the article gets it wrong; they actually refer to it in a context that’s not entirely relevant. So I think in some ways, maybe relying on an AI tool that generates summaries of articles before people add them to their bibliography or footnotes might actually, in some cases, improve the work. Going back again to the early days, one of the things I enjoyed most when I was up there in the periodical section of the library was the articles around the ones I was looking for; they’d often lead to some interesting ideas. And that doesn’t come up as much now when you’re using an online search tool. But as you’ve noted all along, we have both benefits and costs to all of this. And on this issue, I’m kind of thinking some of the benefits might be worth some of the costs, as long as people follow through and actually read the articles that seem relevant.

Marc: I think that’s the key point: so long as this leads you to where you want to go. That’s basically what Wikipedia is, a great starting point for your research that leads you back to the primary sources so you can actually go in there and read them. The challenge that I think we see, and this goes back to that onion analogy, is that a lot of the tools that are out there now… I think one of them is called ProDream AI or something like that… will not only find the sources for you, but will then draft the lit review for you as well. So you don’t have to go through the process of actually reading it. And obviously, that’s where we want to pause and say this isn’t a good idea. But I agree with you completely, John; we are in a digital age, and we have been for over 25 years now. And in fact, what I often hear from students about traditional database searches is: “This was a terrible experience because I can’t navigate this thing. This is just so horrible for me to do.” And yet every time I’ve done this AI research with my students, the interface design makes it much easier for them to actually find and look at sources and go through this and think about it, in part because the algorithm is using some of those techniques to narrow down their sources and help them identify them as well. So yeah, there are definitely benefits to it. It’s not all black and white, for sure.

Rebecca: There’s a lot of gray. [LAUGHTER] I think one of the things that you’re hinting at is the difference between experts using a tool and novices, or people who are still learning a set of skills. Given the way these tools are designed, an expert is going to be able to use a tool and make a judgment call about whether or not what’s provided is accurate, helpful, relevant, etc. Whereas a novice doesn’t know what they don’t know. And so it becomes really challenging for them to have the information literacy skills that may be necessary to judge whether or not this is a path to follow. For me, that’s one of the biggest differences between using these tools in a learning context and using them in a professional context, where they’re ways to save time, to get to the point, or to get to an end result more swiftly.

Marc: Oh, absolutely. Think about the audience who’s using it: a first-year, true freshman student using a tool like this versus a third-year PhD student working on their thesis is a totally different audience, a totally different use case. For the most part, the PhD student hopefully already has the literacy needed to use these tools effectively. They might still need some guidance, some guardrails, and some ethical framing, but it’s a very different situation from that freshman student. I think that’s why most faculty aren’t thinking about how they’re using these tools, because they already have many of those skills solidified. They don’t necessarily need a refresher course on research, because they’ve done this now for a large part of their career. From their perspective, adopting these tools is not going to necessarily de-skill them; it might just be a timesaver.

Rebecca: And it matters what skills we’re offloading to a tool. Some things are just repetitive tasks that take a long time, and a tool is great at solving those; they’re kind of a waste of time, versus the really critical thinking or creative aspects of the work we do.

Marc: The tool I want, and I think this exists, I just haven’t found it yet, is one that, when I’m trying to write a post, instead of my searching for the URL to go with the actual title, automatically finds the URL for me to click on. I’d review it for a second, but it takes me so much time finding the URL for a page when I’m doing a newsletter or trying to update a website; that would be amazing. Those are some of the things we could use really easily to cut down on those repetitive tasks, for sure.

John: In your sixth post in this series, you talk a little bit about issues of ethics. And one thing that I think many students have noted is that faculty have extremely different policies in terms of when AI is allowed, if it’s allowed, and under what conditions it’s allowed, which creates a lot of uncertainty, and faculty aren’t always very good at conveying that information to students. What should we be doing to help create a more transparent environment for our students?

Marc: Well, I think transparency is the key word there. If we’re using these tools for instructional design, we want to be transparent about what we’re using them for, just to model that behavior for our students. So if I develop a lesson plan or use a slide deck that has generated images, I want to clearly identify what part AI played in that creation and talk about why that matters in these situations. What concerns me is that these tools are being turned on left and right for faculty without any sort of guides or best practices. I actually asked Blackboard to build a feature into their new AI assistant so it could identify what was AI generated with the click of a button. There’s no reason why you can’t build something that tracks what was generated by AI within the learning management system. And the response that I’ve gotten is basically: “Who cares about that?” Well, I kind of care about that, and I care about it for the sake of what we’re trying to do for our students as well. But yeah, I think adopting a stance of transparency as a clear expectation, both for our own behavior and our students’ behavior, is going to be more meaningful than turning to some opaque AI detector that’s only going to give you a percentage about whether this is AI-generated content or human content, or that completely misses the entire situation and misidentifies a human being as AI, or vice versa. And that’s something I think we want to focus on: being the human in the loop here, and really not offloading ethics in this case, but actually trying to teach it. It is hard to do that when the technology is changing rapidly before your very eyes, though. And that’s what this has felt like for the last two years, I think.

Rebecca: You’re really concerned when faculty lean on an AI detection tool as the only way of identifying something that might be AI generated or an academic integrity violation of some sort. Can you talk a little bit about the effectiveness of these tools, and when they might be useful and when they might not be useful?

Marc: Yeah, to me, they’re not very reliable in an academic context; there are far too many false positives. And more importantly, the faculty who employ them, for the most part, aren’t really trained to use them. Some universities have invested in academic misconduct officers, academic honesty officers, or whatever you call the people in offices of academic misconduct, where they actually have staff who are trained to use these tools and support faculty with them. I might be a so-called expert at AI, and again, I’m going to use air quotes here, because I’m self-taught like everyone else, but I don’t think I would be comfortable in an academic conduct investigation trying to use these tools, which I barely understand the workings of, to build a case against a student. The few places I’ve looked at that have engaged AI detection do so as part of a process, and the AI detector is just one part of that process. They usually have independent advocates coming in, talking with the students and talking with the faculty member. They don’t bring students up on charges at the first step; they often look for a restorative process, to see if that’s possible. So in the first instance of a student using this technology, they would sit down as a third party between the instructor and the student and talk about whether something could be repaired within the relationship, whether the student would acknowledge that an ethical breach actually happened here, not rule breaking, but an ethical breach that has damaged this relationship, and whether that relationship can be restored in some way. To me, that’s the gold standard. It takes a whole bunch of resources to set up, lots of training, lots of time, versus “let’s buy an AI detector for our entire university, turn it on, and here’s a little one-page guide about how to use it.” And that, to me, is a recipe for chaos. And it doesn’t matter what detector you’re using; they all have their own issues. And none of them is ever going to give you a complete picture of what’s going on with that student. The big challenge we’re seeing, too, is that we’re moving well beyond AI detection into some pretty intense surveillance. We’ve got some companies going to stylometry and keystroke logging, tracking what was copied and pasted into a document and when it was copied and pasted, too. These are all interesting, novel techniques to try to figure out what was written and who wrote it, but they also have some downstream consequences, especially if they don’t involve training. I can imagine certain faculty using that time-stamping technique to penalize students for not spending enough time on their writing, whether there is AI in it or not: “You only spent two hours on this essay that was assigned over two weeks; that’s not showing me all you’ve learned. Other students spent 5, 6, 7, 12, 14 hours on this.” So I think we have to be really careful about what comes online these next few years, and really approach it critically, just like we are asking our students to, so that we don’t reach for a purely technological solution to this problem.

John: One of the things you discuss in this essay, though, is the use of digital watermarking, such as the work that Google has been doing with SynthID. Could you talk a little bit about how that works, and what your thoughts are about it?

Marc: So watermarking has been sort of on the perpetual horizon in AI for a long time. Scott Aaronson, who teaches at the University of Texas and has been working with OpenAI for the last two or three years, has been very vocal about his own research into watermarking. And supposedly, there is a watermarking system at OpenAI working in the background; they just have not deployed it publicly. Google’s SynthID is not just for text; it’s for images, audio, and video. And it’s really designed for what our world is very soon going to look like when an AI can make the President say anything and do anything, and for dealing with vast amounts of misinformation and disinformation. So SynthID is their actual watermarking technique, and watermarking starts at the source of the generation. Their model is Gemini. When watermarking comes online, it uses cryptography to put a code into the actual generation, whether that’s a picture, a video, music, or text, that can only be deciphered with a key that they hold. So watermarking is this really interesting technique that can be used to try to identify what was made by a machine versus a human being. The challenge is, the last time I checked, there are almost 70 different models on the market now that use multimodal AI or large language models, and those are only the ones I’ve been tracking; I’m sure there are probably hundreds of smaller ones that people have been developing. Google’s SynthID is specific to Google’s products, and the other watermarking schemes will be specific to OpenAI or Microsoft or Anthropic or other companies. So you’re going to be in a situation where you use a tool, and then you have to rely on that tool to classify whether something is watermarked or not. And from what I’ve also read, it’s pretty easy to break, because you can feed the output into an opposing system’s AI or an open model, and it will simply rewrite it, removing the actual code in the process. So I don’t think watermarking is going to be a long-term solution. I do think it’s a good first step toward something we can actually do, but it’s just a little bit too chaotic right now in this space. We would need some massive multinational treaties with countries that don’t like to talk with us to get a universal watermarking scheme in place that everyone would agree upon. And then we’d all have to cross our fingers that the keys would never be released to the public, because if that ever happened, that’s when the whole house of cards falls apart.

Rebecca: So that’s kind of a fantasy.

Marc: …kind of a fantasy, but part of this, I think, is marketing based. Google wants their products to be seen as both safe and secure, and you can’t have that safety and security unless you have some sort of system in place; that’s what SynthID is. I think it can possibly work for audio, for video, and even for images. I think text is a lot more fungible than anything else, because it’s very easy to start copying and pasting things, and it’s also easy to write your own words as a human being into a document. That makes it very difficult to gauge what was human versus AI using a watermarking program like this.

Rebecca: The final post in your series addresses the use of generative AI tools to design instructional content and activities. Instructors often find AI tools very useful for these purposes, even if they ban them for their students. What concerns do you have about relying on AI tools in this context?

Marc: My concern there is the stance of “AI for me, but not for you,” as if that makes perfect sense going forward. Obviously, we go back to this phase of trying to model ethical behavior in using the tools and understanding why this matters. If you’re going to use a tool to grade or design rubrics, you want to be open about it. You want to attribute what you used the tool for, because your students are going to be looking at you and asking, “Well, how are you using this in your job? How am I going to be using this in my job when I graduate from here?” That’s the actual grounding framework we can build for our students and for ourselves. If we can think about that and do that, then we don’t have to rely on technology as the sole solution; we can start saying, “This is the ethical behavior I’m modeling for you, this is the ethical behavior I expect from you. Let’s work together and think about what that means.” Now, that’s not always going to resolve the situation. Some students are going to listen to that; other students are going to smile at you, go back, happily generate away, and try to get past it. But the fact is, we do have agency on our part, too. And that is something I think we should be leaning into right now, because the connections we’re developing with our students are, as of this time, still human-to-human based, for the most part. I want to value that and use it to try to persuade them onto an ethical pathway.

Rebecca: Modeling our use of technology leads to so many different interesting conversations with students. I know that when I’ve talked about using assistive technology in my classes, like having something read to you if you’re having trouble focusing, or using some of these tech tools to remove barriers you’re facing in getting your work done, sharing the ways that I use tools in the same way can be really helpful in leading to student success. So I can see how doing the same thing when it’s an AI product is relevant. I know that I used AI to generate a bunch of little case studies for one of my classes, and I just told the students that that’s what I did… I fed it a prompt, and I made some tweaks, but this is where it came from. And they found it really interesting, and we ended up having a really interesting conversation about when it might be most relevant to use particular tools and when maybe it’s not as wise to use a particular tool, because it isn’t actually helping you in any kind of way, or it’s defeating the learning, or it’s not really creating a good product in the end.

Marc: That’s a wonderful use case. I mean, sitting down there talking with them about how I use this and why I use this, getting into a discussion about it, maybe even a debate, is part of the learning process. And I’m glad you brought up assistive technologies. I want my students to use this technology if they need to; they don’t need to announce that they have a disability. We really need to be focusing on this, for education and beyond. At our university, students have to go through a very formalized process to be recognized by the Office of Student Disabilities. It’s very expensive, it’s time consuming, and that is out of reach for the vast majority of students, even if they felt comfortable going out there and advocating for themselves that way, or had parents or other resources to help them do it. I want to design my classes so that students are aware that these tools exist and that they can use them, and I want to trust them to use these tools in a way that is effective for their learning. That’s what I want. Now, whether that’s going to happen is another question, indeed. But that’s going to take time. The one thing I will say, and this popped up in a recent story that I read, is that professors are moving from a point of despair to anguish with this technology, and I want us to avoid that more than anything else, because that’s not the sort of stance we need to be taking when we deal with this technology with our students. We can navigate this; it’s just going to take a lot of time and a lot of energy. And I hope the administrations of various institutions are listening, too: they really need to focus on the training aspect of this technology, both for students and for teachers. This isn’t something where you just flip a switch, turn it on, and say, “You all now have AI, go learn how to use it…” That has been a recipe for disaster.

Rebecca: It’s definitely a complex topic, because there’s so much hope for equity in some of these tools, especially for students with disabilities, but there are also the really scary parts. [LAUGHTER] So finding that balance, and making sure that both sides enter the conversation when we talk about AI, is really important. And I appreciate that we’ve done that today: we’ve talked about some of the scary aspects, but also about the real benefits of having these tools available to our students, incorporating them, and having deep and meaningful conversations about them.

Marc: Absolutely. I think one of the most powerful things I’ve seen from the AI Institute is when you can get a skeptic and an early AI adopter at the same table, talking about these things back and forth. You really do see how people come out of their silos and their positions, and they can come together and say, “Yes, this is an actual use case or two. This is actually meaningful. This is good. How do I make sure that I can put some boundaries on this for my own students and their learning?”

John: So, we always end with a question which is so much on everyone’s mind concerning AI, and that is: “what’s next?”

Marc: Well, what is next indeed? I think we’re all holding our breath to see if OpenAI is going to fulfill its promise and turn on this new multimodal system that lets you talk with it and lets it see you, because they have not done so yet. So we have a little bit of time. But that is going to be on everyone’s mind this fall if they do, because having an AI that can listen to you, talk with you, and have a voice that you get to program is going to be a new set of challenges that we have not really contended with yet.

John: Well, thank you. This has been fascinating, and your series is wonderful. And I hope that all faculty think about these issues, because a lot of people are focusing on a very narrow range of issues and AI is going to affect many aspects of how we work in higher ed.

Marc: Thank you, John. Thank you, Rebecca. This has been great too. And hopefully I’ll be putting some more resources into that series [LAUGHTER] when I have a chance to do so here.

John: And we will include a link to your Substack in the show notes, because you’ve got a lot of good information coming out there regularly.

Marc: Thank you.

Rebecca: Well, thanks for joining us. We hope to talk to you again soon.

Marc: I appreciate it. Thank you guys.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

[MUSIC]

311. Upskilling in AI

With so many demands on faculty time, it can be difficult to prioritize professional development in the area of AI. In this episode, Marc Watkins joins us to discuss a program that incentivizes faculty development in the AI space. Marc is an Academic Innovation Fellow at the University of Mississippi, where he helped found and currently directs the AI Institute for Teachers.

Show Notes

Transcript

John: With so many demands on faculty time, it can be difficult to prioritize professional development in the area of AI. In this episode, we examine a program that incentivizes faculty development in the AI space.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

[MUSIC]

John: Our guest today is Marc Watkins. Marc is an Academic Innovation Fellow at the University of Mississippi, where he helped found and currently directs the AI Institute for Teachers. Welcome back, Marc.

Marc: Thank you, John. Thank you, Rebecca. It’s great to be back.

Rebecca: We’re glad to have you. Today’s teas are:… Marc, are you drinking tea?

Marc: I am. I have a Cold Brew Hibiscus, which is really great. It’s still very warm down here in Mississippi, so it’s nice to have something that’s a little bit cool and refreshing.

Rebecca: That sounds yummy. How about you, John?

John: I am drinking a peppermint spearmint tarragon blend today. And it’s not so warm here. In fact, my furnace came on for the first time yesterday.

Rebecca: Yeah, transitions. And, I have English tea time today.

Marc: Well, that’s great.

John: So we have invited you here to discuss your ongoing work related to ChatGPT and other AI tools. Could you first describe what the AI Institute for Teachers is and its origins?

Marc: Sure. I was last a guest here on your show in January of this year, and it seems like 1,000 years ago [LAUGHTER], but during that spring semester, I took a much deeper dive into a lot of the generative AI tools than in the original pilot in the fall. And we started noticing that the pace at which big tech was deploying these tools and integrating them with existing software from Microsoft and Google was only accelerating. So in about April or May, I went to my chair, Stephen Monroe, and said, “I think we need to start training some people to get them prepared for the fall,” because we kind of thought that fall was going to be what it is right now, which is just a chaotic mashup of everything you can imagine: some people dive in deeply, some people try to ban it, some people try critical approaches with it. So we worked with the Institute of Data Science here at the University of Mississippi, and we got some money, and we were able to pay 23 faculty members $1,000 apiece to train them for a day and a half on everything we knew about generative AI: AI literacy, ethics, which tools were working in the classroom and which weren’t. And their whole goal was to go back to their home departments over the summer, serve as ambassadors, and help prepare them for the fall semester. So we started that. We’ve had funding for one institute, and now we’re doing workshops and searching, as we all do, for more funding to do more.

Rebecca: How did faculty respond to (A) the incentive, but (B) also [LAUGHTER] the training that went with it?

Marc: Well, not surprisingly, they responded really well to the incentives. When you can pay people for their time, they generally do show up and engage. We had quite a few people wanting to take the training, both internally at the University of Mississippi and then from outside once people started finding out about it, because I was posting about it on Twitter and writing about it on my Substack. We had interest from graduate students in Rome, interest from other SEC schools wanting to attend, and even interest from a community college in Hawaii. So we’ve definitely seen a lot of interest within our community, both locally and, more broadly, nationally.

Rebecca: Did you find that faculty were already somewhat familiar with AI tools? I had an interesting conversation with some first-year students just the other day, and we were talking about AI and copyright. I asked, “Hey, how many of you have used AI?” And I and another faculty member indicated that we had used AI, to make it safe for them to indicate that they had. Many of them shook their heads like, “No, they hadn’t,” and they were unsure. Then I started pointing to places where we see snippets of it, in email and in texting and other places where there’s auto-finishing of sentences and that kind of thing. And then they were like, “Oh, yeah, I have seen that. I have engaged with that. I have used that.” What did you find faculty’s knowledge to be?

Marc: Extremely limited. They thought of AI as ChatGPT. One of the things we did in the session was frame it out as, “Look, this is not going to remain a single interface anymore.” One thing that happened during the institute that was completely wild to me came on the last day. I’d signed up through Google Labs, and you can do it as well, to turn on the features within the Google suite of tools, including in Search, Google Docs, Sheets, and everything else. And they gave me access that last day, right before we began. So I literally just plugged in my laptop and said, “This is what it’s going to look like when you have generative AI activated in Google Docs.” It pops up and immediately greets you with a wand and the phrase “Help me write.” What I tried to explain to them, and have explained to faculty ever since, is that it makes having a policy against AI very difficult when it shows up in an existing application with no indication whatsoever that this is in fact generative AI. It’s just another feature in an application that, from many of our students’ perspectives, they have grown up with their entire lives. So yeah, we need to really work on training faculty, not just in the actual systems themselves, but also in getting outside of the mindset that the AI we’re talking about is just ChatGPT. It’s a lot more than that.

John: Yeah, in general, when we’ve done workshops, we haven’t had a lot of faculty attendance, partly because we haven’t paid people to participate [LAUGHTER], but what’s been surprising to me is how few faculty have actually explored the use of AI. My experience with first-year students was a little different than Rebecca’s: about half of the students in my large intro class said that they had explored ChatGPT or some other AI tool, and they seemed pretty comfortable with it. But faculty, at least in our local experience, have generally been a bit avoidant of the whole issue. I think they’ve taken the approach that this is something we don’t want to know about, because it may disrupt how we teach in the future. How do you address that issue, and get faculty to recognize that this is going to be a disruptive technology in terms of how we assess student learning, how students are going to be demonstrating their learning, and how they’ll be using these tools for the rest of their lives in some way?

Marc: That’s a great question. We trained 23 people, and I’ve also been holding workshops for faculty, and the enthusiasm was a little bit different in those contexts. I agree that faculty feel overwhelmed, and maybe some of them want to ignore this and don’t want to deal with it, but it is here, and it is being integrated at phenomenal rates into everything around us. If faculty don’t come to terms with this, and start thinking about engagement with the technology, both for themselves and for their students, then it is going to create incredible disruption that’s going to be lasting; it’s not going to go away. We’re also not going to have AI detection come in and save the day for them the way plagiarism detection did. Those are all things we’ve been trying to very carefully explain to faculty to get them on board. Some of them just aren’t there yet. I understand that; I empathize, too. These things take a huge amount of time to think about and talk about. And we’re just coming out of the pandemic; people are exhausted, and they don’t want to deal with another, quote unquote, crisis, which is another thing that we’re seeing. So there are a lot of factors at play here that make faculty engagement less than what I’d like to see.

Rebecca: We had a chairs’ workshop over the summer, and given our experience in other interactions with faculty, I was somewhat surprised by how many chairs had used AI. It was actually a significant number, and most of them were familiar with it. That, to me, was encouraging [LAUGHTER]. It was like, “Okay, good, the leaders of the ship are aware. That’s good, that’s exciting.” But it’s also interesting to me that there are so many folks who are not that familiar, who haven’t experimented, but who seem to have really strong policies around AI use, or this idea of banning it or wanting to use detectors, without really being familiar with what those tools can and cannot do.

Marc: Yeah, that’s very much what we’re seeing across the board too. The first detector that I’m aware of that really came online for everyone was basically GPTZero. There were a few others that existed beforehand; the MIT-IBM Watson AI Lab had one called GLTR, the Giant Language Model Test Room. But those were based on GPT-2; you’re going back in time to 2019. I know how ridiculous it is to go back four years in technology terms and think about this… that was a long time ago. And education really seemed to adopt detection based on that panic. The problem with putting a system like that in place in education is that it’s not necessarily very reliable. Turnitin also released its own AI detector, and a lot of different universities began to explore and play around with it. I believe, and I don’t want to be misquoted here or misrepresent Turnitin, that when they initially came out with it, they were saying there was only a 1% false positive rate for detecting AI. They’ve since raised that to 5%. And that has some really deep implications for teaching and learning. Most recently, Vanderbilt’s Center for Teaching made the decision to not turn on the AI detection feature in Turnitin. Their reasoning was that in 2022 they had some 75,000 student papers submitted. If they had had the detector on then, it would have given them false positives on about 3,000 papers. And they just can’t deal with that sort of situation at a university level… no one can. You’d have to investigate each one. You would also have to give students a hearing, because that is part of due process. It’s just too much. And that’s one of my main concerns about these tools: they’re just not reliable in education.

John: And it’s not reliable in terms of both false positives and false negatives. So some of us are troubled that we have allowed the Turnitin tool to be active, and we have urged that our campus shut it down for those very reasons. Vanderbilt was one of the biggest campuses to do that, but I think quite a few campuses are moving in that direction.

Marc: Yes, the University of Pittsburgh also made the decision to turn it off. I think several others did as well, too.

Rebecca: It’s interesting: if we don’t have a tool to measure, a tool to catch, if you will, then you can’t really have a strong policy saying you can’t use it at all. [LAUGHTER] There’s no way to follow up or take action on that.

Marc: Where we’re at, I think, for education, is a sort of conundrum, and we’re trying to explain this to faculty. Much more broadly in society, though, if you don’t have a tool that works, when you’re talking about Twitter, I’m sorry, X now, for understanding whether material is actually real or fake, that becomes a societal problem too. That’s what they’re trying to work on with watermarking. I believe the big tech companies have agreed to watermark audio, video, and image outputs, but they’ve not agreed to do text outputs, because text is a little too fungible: you can go in, copy it, and change it around a little too much. So it’s definitely going to be a problem when state governments start to look at this, and they start wondering whether the police officer taking your police report is writing it in their own words, or whether the tax official is using this as well. It’s going to be a problem well outside of education.

Rebecca: And if we’re not really preparing our students for that world, in which they will likely be using AI in their professional fields, then we’re not doing our jobs in education and preparing our society for the future.

Marc: Yeah, I think training is the best way forward, going back to the idea of intentional engagement with the technology: giving students situations where they can use it, where you, as a faculty member, hopefully have the knowledge and the resources to begin to integrate these tools, talk about the ethical use cases, understand the limitations and the fact that it is going to hallucinate and make things up, and think about what sort of parameters you want to put on your own usage too.

John: One of the things that came out within the last week or so, I believe,… we’re recording this in late September… was the introduction of AI tools into Blackboard Ultra. Could you talk a little bit about that?

Marc: Oh boy, yes indeed, they announced last week that the tools were available in Blackboard Ultra. They turned it on for us here at the University of Mississippi, and I’ve been playing around with it, and it is a little bit problematic. For right now, with a single click, it will scan the existing materials in your Ultra course and create learning modules. It will create quiz questions based on that material, it will create rubrics, and it will also generate images. Now, compared to what we’ve been dealing with in ChatGPT and all these other capabilities, this is almost a little milquetoast by comparison. But it’s also an inflection event for us in education, because it’s now here, directly in our learning management system, and it’s going to be something we’re going to have to contend with every single time we open it up to create an assignment or do an assessment. I’ve played around with it. It’s an older version of GPT. The image generator, I think, is based on DALL-E, so you ask for a picture of college students and you get some people with 14 fingers and weird artifacts all over their faces, which may not be what would actually be helpful for your students. And the learning modules it creates are not my thinking, necessarily; they’re just what the algorithm predicts based on the content that exists in my course. We have that discussion with our faculty, and we have them cross that Rubicon of saying, “Okay, I’m worried about my students using this; what happens to me, my teaching, and my labor if I start adopting these tools?” There could definitely be some help: this could really streamline the process of course creation and actually make it align with the learning outcomes my department wants for a particular class. But it also gets us into a situation where automation is now part of our teaching. And we really haven’t thought about that. We haven’t really gotten to that conversation yet.

Rebecca: It certainly raises many ethical questions, particularly about disclosing to students what has been produced by us as instructors and what has been produced by AI, and about the authorship of what’s there. Especially if we’re expecting students to [LAUGHTER] do the same thing.

Marc: The cognitive dissonance is mind boggling: having a policy that says “No AI in my class,” and then all of a sudden it’s there in my Blackboard course and I can click on something. And, at least in this integration of Blackboard, and they may very well change this, once you do this, there’s no way to natively indicate that it was generated by AI. You have to manually go in there and note that it was AI-created. I value my relationship with my students; it’s based on mutual trust, and I think almost everyone in education does. If we want our students to act ethically and use this technology openly, we should expect ourselves to do the same. And if we get into a situation where I’m generating content for my students and then telling [LAUGHTER] them that they can’t do the same with their own essays, it is just going to be a big mess.

John: So given the existence of AI tools, what should we do in terms of assessing student learning? How can we assess the work reasonably given the tools that are available to them?

Rebecca: Do you mean we can just use that auto-generated rubric right, that we just learned about? [LAUGHTER]

Marc: You could; you can use that auto-generated rubric. Separately from Blackboard, one of the tools I’m piloting right now is a feedback assistant called MyEssayFeedback, developed by Eric Kean and Anna Mills; I consulted with them on it. Anna is very big in the AI space for composition. I’ve been piloting this with my students. They know it’s an AI; they understand this, and I did get IRB approval to do so. I’ve just gotten the second round of generated feedback, and it’s thorough, it’s quick, it’s to the point. It’s literally making me say, “How am I going to compete with that?” And maybe the answer is that I shouldn’t be competing with that; maybe I’m not going to be providing that feedback, and I should be providing my time in different ways, like meeting with students one on one to talk about their experiences. But you raise an interesting question. I don’t want to be alarmist, and I want to be as level-headed as I can, but from my perspective, all the pieces are now there to automate learning to some degree. They haven’t all been hooked up yet and put together into a cohesive package, but they’re all there in different areas. We need to be paying attention to this. Our hackles need to be raised just slightly at this point to see what this can do, because I think that is where we are headed with integrating these tools into our daily practice.

Rebecca: AI generally has raised questions about intellectual property rights. If our learning management systems are using our content in ways that we aren’t expecting, how does that violate our rights, or the rights that the institution has over the content that’s already there?

Marc: For a lot of the people I speak with, their course content and syllabi are, from their perspective, their own intellectual property in some ways. We get debates about that, about whether the university actually owns some of the material, and there have been instances in the past where lectures were copyrighted. If you’re allowing the system to scan your lecture, you are exposing it to generative AI. That gets at one aspect of this. The other aspect, which I think Rebecca is referring to, is that the material used to train these large language models may have been stolen or not properly sourced from the internet, and you’re using it while trying to teach your students [LAUGHTER] to cite material correctly. So it’s just a gigantic conundrum of legal and ethical challenges. The one silver lining in all this, and this has been across the board with everyone in my department, is that it has been wonderful material to talk about with your students. They are actively engaged with it; they want to know about this, they want to talk about it. They are shocked and surprised by the depths that have gone into the training of these models, and by the different ethical situations with data. So if you want to engage your students, talking to them about AI is a great first step in developing their AI literacy. And it doesn’t matter what you’re teaching: it could be a history course, it could be a course in biology; this tool will have an impact in some way, shape, or form on your students’ lives, and they want to talk about it. Something else worth talking about is that there are a lot of tools outside of ChatGPT, and a lot of different interfaces as well. I don’t know if I talked about this before in the spring, but one category that’s really been effective for a lot of students is the reading assistant tools; one that we’ve been employing is called Explainpaper. They upload a PDF to it, it calls on generative AI to scan the paper, and you can select whatever reading level you want and have it translate the paper into that reading level. The one problem is that students don’t realize that they might be giving up some close reading, critical reading skills to it, just as with any sort of relationship with generative AI; there is that handoff and offloading of thinking. But for the most part, they have loved it, and it’s helped them engage with some really critical texts that normally would not be at their reading level and that I would usually not assign to certain students. There are plenty of new tools coming out too. One of them is Claude, Claude 2 to be precise, by Anthropic. That just came out, I think, in July for public release. It is about as powerful as GPT-4, and it is free right now if you want to sign up for it. The reason I mention Claude is that the context window, what you can actually upload to it, is so much bigger than ChatGPT’s; I believe their context window is 75,000 words. So you can actually upload four or five documents at a time and synthesize those documents. One of my use cases was that I collected tons of reflections from my students this past year about the use of AI. It’s all in a messy Word document, 51 pages single spaced, all anonymized so there’s no data that identifies them. But it’s so much of a time suck to go through and code those reflections. So I’ve just been uploading it to Claude and having it do a sentiment analysis to point out which reflections from these students are positive, and in what way, and it does it within a few seconds. It’s amazing.
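
As a rough illustration of the coding pass Marc describes (he did it through Claude’s chat interface by uploading the document), here is a minimal Python sketch using Anthropic’s client library. The model name, file name, and prompt wording are assumptions for illustration, not details from the episode.

import anthropic

# Assumes ANTHROPIC_API_KEY is set in the environment.
client = anthropic.Anthropic()

# Hypothetical file of anonymized student reflections.
reflections = open("reflections.txt", encoding="utf-8").read()

response = client.messages.create(
    model="claude-2.1",  # assumed model name; any large-context Claude model works
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": (
            "Here are anonymized student reflections on using AI in class.\n\n"
            + reflections
            + "\n\nDo a sentiment analysis: list which reflections are "
              "positive, and briefly note in what way each is positive."
        ),
    }],
)

print(response.content[0].text)

The large context window is what makes feeding a 51-page document in a single call feasible; the same pattern works for any first-pass qualitative coding, with a human reviewing the output.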

John: One other nice thing about Claude is that it has a training database that ends in early 2023, so it has much more current information. That, in some ways, is a little concerning for those faculty who were asking questions about more recent events, particularly in online asynchronous courses, so that ChatGPT could not address them. With Claude’s expanded training database, that’s no longer quite the case.

Marc: That’s absolutely correct. And to add to our earlier discussion about AI detection, none of the AI detectors that I’m aware of has had time to actually train on Claude. So if you generate an essay with Claude… and you guys are free to do this on your own, your listeners are too… and you upload it to one of the AI detectors, very likely you’re going to get zero detection or a very low detection rate, because it’s a different system. It’s new; the existing AI detectors have not had time. The way to translate this is: don’t tell your students about it right now, or, in this case, be very careful about how you introduce this technology to your students, which we should do anyway. But this is one of those tools that is massively popular; a lot of people just haven’t known about it because, again, ChatGPT takes up all the oxygen in the room when we talk about generative AI.

John: What are some activities where we can have students productively use AI to assist their learning or as part of their educational process?

Marc: That’s a great question. We actually started developing very specific activities that look at different pain points in writing classes. One of them was getting students to integrate the technology directly: we built a very careful assignment that called for very specific moves, both in terms of their writing and their integration of the technology. We also built research question assignments that way. My Digital Media Studies students right now have assignments about how they can use it to create infographics. Using the paid version, ChatGPT Plus, they have access to plugins, and those plugins give them access to Canva and Wikipedia. So they can use Canva to create full-on presentations from their own natural language, and use actual real sources, by using those two plugins in conjunction with each other. I then make them go through it, edit it with their own words and their own language, and reflect on what this has done to their process. So there are lots of different examples; it really is limited only by your imagination at this point, which is exciting, but that’s also part of the problem we’re dealing with: there’s so much to think about.

Rebecca: From your experience in training faculty, what are some getting-started moves that faculty can take to get familiar enough to integrate AI by the spring?

Marc: Well, one thing they could do is take one of a few really fast courses. I think it’s Ethan Mollick from the Wharton School of Business who put out a very effective training course, all through YouTube; I think it’s four or five videos, very simple to take, for getting used to how ChatGPT works, how Microsoft’s Bing works, and what sorts of activities students and faculty can use them for. Microsoft has also put out a very fast course, I think it takes 53 minutes to complete, about using generative AI technologies in education. Those are all very fast ways of coming up to speed with the actual technology.

John: And Coursera has a MOOC through Vanderbilt University, on Prompt Engineering for ChatGPT, which can also help familiarize faculty with the capabilities of at least ChatGPT. We’ll include links to these in the show notes.

Marc: I really, really hope Microsoft, Google, and the rest of them calm down, because this has gotten a little bit out of control. These tools are often integrated without use cases; the companies are often waiting to see how we’re going to use them, and that is concerning. Google has announced that they are committed to releasing their own model that’s going to be in competition with GPT-4, I think it’s called Gemini, by late November. So it looks like they’re just going to keep heating up this arms race, and you get bigger, more capable models, and I think we do need to ask ourselves more broadly what our capacity is just to keep up with this. My capacity is about negative zero at this point… going down further.

John: Yeah, we’re seeing new AI tools coming out almost every week or so now in one form or another. And it is getting difficult to keep up with. I believe Apple is also planning to release an AI product.

Marc: They are. They also have a car they’re planning to release, which is the weirdest thing in the world to me: you could have your iPhone charging in your Apple Car.

John: GM has announced that they are not going to be supporting either Android Auto or Apple CarPlay in their electric vehicles. So perhaps this is Apple’s way of getting back at them for that. And we always end with the question: What [LAUGHTER] is next? …which is perhaps a little redundant, but we do always end with that.

Marc: Yeah, I think what’s next is trying to critically engage the technology and explore it not out of fear, but out of a sense of wonder. I hope we can continue to do that. I do think we are seeing a lot of people starting to dig in. And they’re digging in real deep. So I’m trying to be as empathetic as I can be for those that don’t want to deal with the technology. But it is here and you are going to have to sit down and spend some time with it for sure.

John: One thing I’ve noticed in working with faculty is that they’re very concerned about the impact of AI tools on their students and student work, but they’re really excited about all the possibilities it opens up for them in terms of simplifying their workflows. So that, I think, is a positive sign.

Rebecca: They could channel that to help understand how to work with students.

Marc: I hope they find that there’s a positive pathway forward with that too.

John: Well, thank you. It’s great talking to you and you’ve given us lots more to think about.

Marc: Thank you guys so much.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

Ganesh: Editing assistance by Ganesh.

[MUSIC]

305. 80 Ways to Use ChatGPT in the Classroom

Faculty discussions of ChatGPT and other AI tools often focus on how AI might interfere with learning and academic integrity. In this episode, Stan Skrabut joins us to discuss his book that explores how ChatGPT can support student learning.  Stan is the Director of Instructional Technology and Design at Dean College in Franklin, Massachusetts. He is also the author of several books related to teaching and learning. His most recent book is 80 Ways to Use ChatGPT in the Classroom.

Show Notes

Transcript

John: Faculty discussions of ChatGPT and other AI tools often focus on how AI might interfere with learning and academic integrity. In this episode, we discuss a resource that explores how ChatGPT can support student learning.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

[MUSIC]

John: Our guest today is Stan Skrabut. Stan is the Director of Instructional Technology and Design at Dean College in Franklin, Massachusetts. He is also the author of several books related to teaching and learning. His most recent book is 80 Ways to Use ChatGPT in the Classroom. Welcome, Stan.

Stan: Well, thank you ever so much for having me on. I have been listening to your podcast since the first episode, you guys are crushing it. I recommend it all the time to my faculty. I’m excited to be here.

John: Thank you. And we very much enjoyed your podcast while you were doing it. And I’m hoping that will resume at some point when things settle down.

Rebecca: Yeah, we’re glad to have you here.

Stan: Yeah, thanks.

John: Today’s teas are:… Stan, are you drinking any tea?

Stan: A little bit of a story. I went over to the bookstore with the intent of getting tea. They had no tea in stock. I went to the vending machine on the same floor. The vending machine was down. I went to another building. I put in money. It did not give me tea. I’m stuck with Mountain Dew. I’m sorry. [LAUGHTER]

Rebecca: Not for lack of trying. Clearly. [LAUGHTER]

Stan: I tried. I tried.

Rebecca: I have some blue sapphire tea.

John: And I have Lady Grey.

Rebecca: You haven’t drunk that in a while, John.

John: No. [LAUGHTER]

Rebecca: A little caffeine today, huh? [LAUGHTER]

John: Yeah, well, I am back in the office. I’ve returned from Duke, and I have more options for tea again.

Rebecca: That’s good. So Stan, we invited you here today to discuss 80 Ways to Use ChatGPT in the Classroom. What inspired you to write the book?

Stan: Well, I’m an instructional technologist, and my responsibility is to help faculty deliver the best courses possible. In November 2022, ChatGPT came onto the scene, and by December, faculty were up in arms: “Oh, my goodness, this is going to be a way for students to cheat, and they’ll never learn anything again.” As an instructional technologist, I see technology as a force multiplier, as a way to help us do better things quicker and easier. So I didn’t feel threatened by ChatGPT. I’ve been looking at the Horizon Reports for the last 20 years, and they kept saying, “AI is coming. It’s coming. It’s coming.” Well, it’s here. And so it was just a matter of sitting down in January, writing the book, publishing it, and providing a copy to all the faculty, and we just started having good conversations after that. But the point was that we should not ban it, which was the initial reaction, and that this is a tool like all the other tools that we bring into the classroom.

Rebecca: Stan, I love how you just sat down in January and just wrote a book as if it was easy peasy and no big deal. [LAUGHTER]

Stan: Well, I will have to be honest that I was using ChatGPT for part of the book. I asked ChatGPT to give me an outline: what would be important for faculty to know about this? And I got a very nice outline. Then it was a matter of creating prompts. I’d write a prompt and get the response back from ChatGPT. It was a lot of back and forth with ChatGPT, and I thought ChatGPT did a wonderful job in moving this forward.

John: Most of the discussion we’ve heard related to ChatGPT comes from people who are concerned about the ability to conduct online assessments in its presence. But one of the things I really liked about your book is that most of it focuses on productive uses by both faculty and students, and on classroom uses of ChatGPT, because we’re not always hearing that sort of balanced discussion. Could you talk a little bit about some of the ways in which faculty could use ChatGPT or other AI tools to support their instruction and to help develop new classes and new curricula?

Stan: Yeah, absolutely. First of all, I would like to say that this is not going anywhere; it is going to become more pervasive in our lives. Resume Builder did a survey of a couple thousand new job descriptions that employers were putting out, and 90% of them were asking for employees to have AI experience. In higher education, it’s on us to make sure that the students going out there to be employees know how to use this tool. That said, there has to be a balance. In order to use the tool properly, you have to have foundational knowledge of your discipline. You have to know what you’re talking about in order to create the proper prompt, but also to assess the response. ChatGPT sometimes doesn’t get it right… it’s just how ChatGPT is built: it’s built on probabilities that certain word combinations go together. It’s not pulling full articles that you can go back and verify. It’s kind of like how the human mind works; we have built up knowledge over all these years. My memory of what happened when I was three, four, or five years old is a little fuzzy. Who said what? I’m pretty confident about what was said, but it’s still a little fuzzy, and I would need to verify it. So I see ChatGPT as an intern; everybody gets an intern now. They do great work at all hours, but you, as the supervisor, still have to verify that the information is correct. Back to the classroom: students, or anyone using it, should not just hit return on a prompt and then rip that off and hand it in to their supervisor or instructor without verifying it, without making it better, without adding the human element to working with the machine. And that is, I think, where we can do lots of wonderful things in the classroom. From the instructor side: go ahead and use this for your first draft, now turn on the review tools that track changes, and show me how you made it better as you work toward your final product. Instructors can craft an essay of supposedly accurate information with ChatGPT, throw it into the hands of the students, and say, “Please assess this. Is this right? Where are the problems? Where are the biases? Tell me where the gaps are. How can we make this better?” Those are some initial ways to start using it in class with students. I don’t know if I’m tapping into all the things; there are just so many things that you could do with this.

John: And you address many of those things in the book. Among those things you address is having it generate assignments, or even, at a more basic level, having it develop syllabi, course outlines, learning objectives, and so forth when faculty are building courses.

Stan: Oh, absolutely. We have a new dean at our School of Business, and he came over and wanted to know a little bit more about ChatGPT and how we can use it. They’re looking at creating a new program for the college, and it’s like, “Well, let’s just start right there.” What are the courses that you would have for this new program? Provide titles and course descriptions. Here comes a list of 10 or 12 different courses for that particular program. Okay, let’s take this program: what are the learning outcomes for it? So we just copied and pasted, asked for learning outcomes, and here comes the list of outcomes. Now, for these different outcomes, provide learning objectives. And it starts creating learning objectives. You can just continue to drill down. This moves you past the blank page. Normally you’d bring in a group of faculty to work on that program, ask for their ideas, send everybody off, and they would pull ideas together and start crafting it. This was done in 30 seconds. Now, okay, here’s the starting point for your faculty: Where are the problems with this? How can we make it better? Now go. Instead of a blank page, you’re not starting with nothing. That was one example. But even for your own course: given a course description, you can ask it to provide a course plan for 16 weeks. What would I address? What would be the different activities? Describe those activities. If you want the activities to use transparent assignment design, it’ll craft them in that format; it knows what transparent assignment design is. And then, going back to content, it can give you a jumpstart on OER, open educational resources, content: filling the gaps that I want, or taking content that’s there and localizing it to your area. Here we are in New England, in Massachusetts specifically, and I need an example. Here’s the content that we’re working with; give me an example, a case study, and it will craft a case study for you. It allows you to go from your zone of drudgery to your zone of genius very rapidly. I’ve been working on a new book, and when I got down to the final edits, I realized I was missing conclusions to all the chapters. I just fed each chapter in and said, “Could you craft me a conclusion to this chapter?” And it just knocked it out. I mean, I could do it, but that’s my zone of drudgery, and I’d rather be doing other things.

Rebecca: It’s interesting that a lot of faculty, chairs, and administrators have been engaged in this conversation around ChatGPT quite a bit, but many of them haven’t actually tried ChatGPT. So if you were to sit down with a faculty member who’s never tried it before, what’s the first thing you’d have them do?

Stan: This is an excellent question, because I do it all the time. I have a number of faculty members I’ve sat down with; we looked at their courses, and I asked, “What is the problem that you’re working with? What do you want to do?” That’s where we start: what is the problem that you’re trying to fix? ChatGPT version three was given 45 terabytes of information. They say the human brain holds about 1.25 terabytes. So this is like asking thirty-some people to come sit with you to work on your problem. One class was a sports management class dealing with marketing. They were working with the Kraft organization that owns the Patriots, developing marketing plans and specific activities for the students. We just sat down with ChatGPT and started at a very basic level to see what we could get out of it. The things we weren’t happy with, we just rephrased and had it focus on those areas, and it kept improving what we were doing. One of the struggles I hear from faculty all the time, because it’s very time consuming, is creating assessments: multiple choice questions, true and false, fill in the blank, all these different things. ChatGPT will do this for you in seconds. You feed in all the content that you want and say, “Please craft 10 questions.” Give me 10 more, give me 10 more, give me 10 more. Then you go through, identify the ones you like, and put them into your test bank. It really comes down to the problem that you’re trying to solve.
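
As a loose sketch of the feed-content, ask-for-questions loop Stan describes, here is a minimal Python example using OpenAI’s client library. The model name, file name, and prompt wording are assumptions for illustration; the same loop works in the chat interface with no code at all.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_questions(content: str, n: int = 10) -> str:
    """Ask the model to draft n multiple-choice questions from course content."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[
            {"role": "system",
             "content": "You write clear multiple-choice questions for college courses."},
            {"role": "user",
             "content": f"Please craft {n} multiple-choice questions, each with "
                        f"four options and the correct answer marked, based only "
                        f"on this content:\n\n{content}"},
        ],
    )
    return response.choices[0].message.content

# "Give me 10 more" is just another call; a human still vets every question
# before it goes into the test bank.
print(draft_questions(open("unit3_notes.txt", encoding="utf-8").read()))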

John: And you also note that it can be used to assist with providing students feedback on their writing…

Stan: Absolutely

John: …that you can use it to help generate that feedback. Could you talk a little bit about that?

Stan: We’re working with the academic coaches on this right now; it’s one of the areas where we sit down together. I’m not only the Director of Instructional Technology and Design; I also have a dotted line as Director of the Library, so I’m trying to help students with their research, and writing and research go hand in hand. From the library side, we look at what the students are being assigned, then sit down with a couple of key terms or phrases that we want and have ChatGPT give us ideas on those terms. It’ll provide ten or twenty different exciting ideas to go research; once again, getting past the blank page. It’s like, “I’ve got to do an assignment and I don’t know what to do.” It could be in economics: I don’t know what to write about in economics. Well, pull these two terms together; what does it say about that? So we start at that point. Then, once you have a couple of ideas you want to work with: what are some keywords I could use to start researching the databases? And it will provide those. It’ll do other things: it’ll draft an outline, it’ll write the thing if you want it to, but we try to take baby steps, getting students to go in and do the research while getting pointed in the right direction. On the writing side, for example, I have a class I’m going to be teaching to grad students at the University of Wyoming, on program development and evaluation, and I’m going to introduce ChatGPT and let them use it. One of the things academic writers struggle with is the use of active voice. They’re great at passive; they’ve mastered that. Well, if you take what you’ve written and say, “Convert this to active voice,” it will rewrite it and work on those issues. I was working with one grad student, and after playing with ChatGPT a couple of times, she finally figured out what the difference really was and how to overcome the problem, and now she writes actively, more naturally. But she had struggled with it. With ChatGPT, you can take an essay, push it up into ChatGPT, and say, “How can I make this better?” and it will provide guidance. You could ask it specifically, “How can I improve the grammar and spelling without changing the wording?” and it will check that. So for our academic coaches, because there’s high volume, this is another tool they can use to say, “Here’s the checklist of things we’ve identified for you to go work on right away”: not necessarily giving solutions, but giving pointers and guidance on how to move forward. You can use it at different levels and from different perspectives, not where it does all the work for you, but incrementally: “Here, assess this, and do this.” And it will do that for you.

Rebecca: Your active and passive voice example reminds me of a conversation I had with one of our writing faculty, who talked about the labor previously involved in making example essays for students to edit when working on writing skills. She just had ChatGPT write things of different qualities for students to compare, and also to edit as a writing activity, in one of her intro classes.

Stan: Absolutely. What I recommend to anyone using ChatGPT is to start collecting your prompts: have a Google document or a Word document, and when you find a great prompt, squirrel it away. In some of the workshops I’ve been giving, I demonstrate high-level prompts that are probably two pages long, where you feed ChatGPT information about everything: what information you’re going to be collecting, how you want to collect it, how you want it outputted, and what items to output. You’re basically creating a tool that you can then call up. For example, for developing a course, it will write the course description and give you learning outcomes, recommended readings, activities, and an agenda for 16 weeks, all in one prompt. All you do is say, “This is the course I want,” and let it go. It’s amazing. We can build these tools just like we build spreadsheets, very complex spreadsheets, to do these tasks. We can do the same with ChatGPT; we just have to figure out what problems we’re trying to solve.
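
A minimal sketch of that prompt-library idea in Python, using only the standard library: saved prompts become templates with named blanks to fill in at call time. The prompt text and field names here are illustrative assumptions, not Stan’s actual prompts.

from string import Template

# A hypothetical saved prompt; $description is filled in when it is used.
PROMPTS = {
    "course_builder": Template(
        "You are a curriculum designer. Given this course description:\n"
        "$description\n\n"
        "Produce: (1) a polished course description, (2) learning outcomes, "
        "(3) recommended readings, (4) one activity per week written in "
        "transparent assignment design format, and (5) a 16-week agenda."
    ),
}

def render(name: str, **fields) -> str:
    """Fill in a saved prompt; raises KeyError if a field is missing."""
    return PROMPTS[name].substitute(**fields)

# Paste the rendered prompt into any chat interface, or send it via an API.
print(render("course_builder",
             description="An introductory microeconomics course for first-year students."))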

John: Our students come into our classes with very varied prior preparation. In your book, you talk about some ways in which students can use ChatGPT to help fill in gaps in their prior understanding and get up to speed more quickly. Could you talk about some ways in which students can use ChatGPT as a personalized tutor?

Stan: I’m going to take you through an example that I think can be applied by students. A student comes to your class and, ideally, they’re taking notes. One of the strategies I use is to open my notebook and turn on Otter.ai, which is a transcription program, and talk through my notes. I get a transcription of those notes, and I can then feed that transcription into ChatGPT and say, “Clean it up; make a good set of notes for me.” And it will do that. I can build that document, review what we did in class, and have a nice clean set of notes available to me. Over a series of sets of notes, I could do the same thing by reviewing a textbook: highlight and talk through the key points and transcribe them, or cut and paste. Then I can feed that information into ChatGPT and say, “Build me a study bank that I can turn into a Quizlet,” for example, or, “Create some flashcards: what are the key terms and definitions from this content?” Here you go: flashcards from that material. Or it could be that, no matter how great the instructor is, I still don’t get it. They introduced a term that is just mind boggling, and I still don’t get it. I can then ask ChatGPT to explain it at another level. They say that the best or most popular non-fiction books, the ones getting on the bestseller lists, are written at a certain grade level, and I know that I typically write above that grade level, so I can ask ChatGPT to rewrite something at a lower grade level. As a student, I could ask ChatGPT to give an explainer at a level that I do understand. So you can basically build your own study guides, with questions and examples, from all the materials: feed the material in, get something out, and enhance it. And I think for faculty, this is also an easy way to create good study guides. Starting from a blank page and trying to craft one by hand can be very difficult, but if you already have all your material, you feed it in and say, “Here, let’s build a study guide out of this, with some parameters.” That’s definitely much more useful.
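
A minimal sketch of the notes-to-flashcards step, again with OpenAI’s Python client. The file names, model name, and the tab-separated output format (which common flashcard apps can import) are assumptions for illustration.

from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical file: class notes already cleaned up from a transcription.
notes = Path("cleaned_notes.txt").read_text(encoding="utf-8")

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name
    messages=[{
        "role": "user",
        "content": "From these class notes, extract the key terms and "
                   "definitions as tab-separated lines (one term, a tab, then "
                   "its definition), suitable for import into a flashcard "
                   "app:\n\n" + notes,
    }],
)

# Save as a .tsv a flashcard tool can import; review the cards before studying.
Path("flashcards.tsv").write_text(response.choices[0].message.content,
                                  encoding="utf-8")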

Rebecca: We’ve talked a lot about how to use ChatGPT as an individual, either as an instructor or as a student. Can you talk a little bit about ways that instructors could use ChatGPT for in-class exercises or other activities?

Stan: Absolutely. And I’m sorry: some of these examples were actually contributed by other folks first. I saw them and thought they were just brilliant, but I don’t have their names right in front of me, so I apologize ahead of time. As an instructor, I would invite ChatGPT into the classroom as another student. We call it Chad, Chad GPT, and bring Chad into the classroom. So you could have an exercise in your classroom: ask the students to get into groups and talk about an issue, and then up on the whiteboard you start collecting and listing their input. Once you’re done, you can feed Chad GPT the same prompt, get its list, and compare it to what you’ve already collected from the students. From there you can do a comparison: “We talked about that, and that, and that… oh, this is a new one. What do you think about this?” So you can extend the conversation with what Chad GPT has provided. …and there I go, Chad; I’ll be hooked on that for a while. Or if students have questions that come up in class, you can field them to the rest of the class, get input, and then say, “Okay, let’s also ask Chad; see what Chad has to say about that topic.” For grouping exercises, we typically do the think-pair-share exercise; well, now each group gets to have Chad in it. They have to think about it first and write something down, pair up and discuss it, then add ChatGPT into the mix, talk about it a little more, and share with the rest of the class. There are lots of different ways you can bring this into the classroom, but I bring it right in as another student.

Rebecca: Think-pair-chat-share. [LAUGHTER]

Stan: Yep. And that’s not mine; somebody was clever enough to come up with that, and I just happened to glom on to it. But yeah, it’s definitely a great way of using it. It’s a new tool. We’re still figuring out our way, but it’s not going away.

Rebecca: Whenever we introduce new technology into our classes, people are often concerned about assessment of student work using said technologies. So what suggestions do you have to alleviate faculty worry about assessing student work in the age of ChatGPT?

Stan: Well, students have been cheating since the beginning of time; that’s just human nature. Go back to why they’re cheating in the first place: in most cases, they just have too much going on, and it becomes a time issue. They’re finding the quickest way to get things done. So ensuring that assignments are authentic, that they’re real and mean something to a student, is certainly very important. The more an assignment is personally tied to the student, the harder it is for ChatGPT to tap into it. ChatGPT is not connected to the internet yet, so having current information is always a consideration. But I would go back to transparent assignment design, and the part of transparent assignment design that is often overlooked is the why. Why are we doing this? If you use ChatGPT to do this, here is what you’re not going to get from the assignment. So when building those assignments, I recommend being very explicit: yes, you can use ChatGPT to work on this assignment, or no, you cannot, but here’s why, here’s what I’m hoping you get out of it, and here’s why this assignment is important. Because otherwise, it just doesn’t matter. And when I have an employee who simply hits the button and gives me something from ChatGPT, I’m going to ask, “Why do I need you as an employee? I could do that myself. Where’s the human element?” Bring that human element into it; explain why it’s important. You’re shortcutting your learning if you just rely on the tool and don’t grasp the essence of the particular assignment. But I think it goes back to writing better assignments… at least that’s my two cents on it.

Rebecca: Thankfully, we have ChatGPT for that.

John: For faculty who are concerned about these issues of academic integrity, creating authentic assignments and connecting to individual students and their goals and objectives could certainly be really effective. But it’s not clear that that will work as well when you’re dealing with, say, a large gen-ed class. Are there any other suggestions you might have for getting past this?

Rebecca: John? Are you asking for a friend? [LAUGHTER]

John: [LAUGHTER] Well, I’m gonna have about 250 students in a class where I had shifted all of the assessment outside of the classroom. I am going to bring some back into the classroom in the form of a midterm and a final, but those are only 10 and 15% of the grade, so much of the assessment is still going to be done online. And I am concerned about students bypassing learning by using this, because it can do pretty well on the types of questions that we often ask in introductory classes in many disciplines.

Stan: That’s a hard question, because there certainly are tools out there that can flag where they suspect something has been written by AI. ChatGPT produces original text, so you’re not dealing with plagiarism, necessarily; you’re dealing with the fact that it’s not yours, it’s not human written. There are tools out there, but they’re not 100% reliable. Originality.AI is a tool that I use, which is quite good, but it tends to skew toward flagging everything as AI-written. Turnitin has incorporated technology for identifying AI, but it’s not reliable either. This honestly comes down to an ethics issue: folks who do this feel comfortable bypassing the system for the end game, which is to get a diploma. But then they go to the job and they can’t do the job. A recent article I read in The Wall Street Journal reported a lot of concern about employees not having the skill sets they’re supposed to have. How do we convince students of this? Why are you here? What’s the whole purpose of doing this? I’m here to guide you, based on my life experience, on how to be successful in this particular discipline, and you don’t care about that? That’s a hard problem to fix, so I don’t have a good answer. I’m always on the fence, because students bypassing the work hurts the integrity of the institution, but it’s hard. Peer review is another tool: have students go assess it. They seem to be a lot harder [LAUGHTER] on each other. Yes, this is a tough one. I don’t have a good answer. Sorry.

John: I had to try again, [LAUGHTER] because I still don’t have very good answers either. But certainly, there are a lot of things you can do. I’m using clickers, and I’m having them do some small group work in class and submit responses. That’s still a little bit hard to use ChatGPT for, just because of the timing. But it was convenient to be able to let students work on things outside of class, although Chegg and other sites had made the solutions to most of those questions visible pretty much within hours after new sets of questions were released. So this perhaps just continues the trend of making online assessment tools in large classes more problematic.

Stan: Well, one of the strategies that I recommend is master quizzing. Master quizzing means building quizzes that draw randomly from banks that are thousands of questions large, and students get credit when they ace the quiz. The next week, they have another one, but it’s also cumulative, so they get previous questions too. And you have to ace it to get credit. Sorry, that’s how it is: cheat all you want, but it’ll get old after a while.

John: And that is how my course is set up. They are allowed multiple attempts at all those quizzes, and the questions are random drawings. There’s some spaced practice built in too, so it’s drawing on earlier questions randomly. But again, pretty much as soon as you create those problems, they show up in the online tools on Chegg and similar sites, and now they can be answered pretty well using ChatGPT and other similar tools. It’s an issue we’ll have to address. Some of it is an ethics issue, and some of it is, again, reminding students that they are here to develop skills, and if they don’t develop the skills, their degree is not going to be very valuable.
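
A minimal sketch of the random-draw, cumulative quiz bank Stan and John describe, in plain Python. The bank structure and draw sizes are illustrative assumptions; any LMS that supports random question pools implements the same idea.

import random

def draw_quiz(banks, current_week, n_current=8, n_review=2):
    """banks maps week number -> list of questions.

    Draws mostly from the current week's bank, plus a few randomly chosen
    review questions from earlier weeks (the spaced-practice part).
    """
    pool = banks[current_week]
    quiz = random.sample(pool, k=min(n_current, len(pool)))
    review_pool = [q for week, qs in banks.items()
                   if week < current_week for q in qs]
    if review_pool:
        quiz += random.sample(review_pool, k=min(n_review, len(review_pool)))
    random.shuffle(quiz)
    return quiz

# Every attempt is a fresh draw, so copying one attempt's answers does not
# help much with the next; credit is given only when the quiz is aced.
banks = {1: [f"wk1-q{i}" for i in range(500)],
         2: [f"wk2-q{i}" for i in range(500)]}
print(draw_quiz(banks, current_week=2))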

Rebecca: I wonder if putting some of those Honor Code ethics prompts at the beginning or end of bigger assessments would [LAUGHTER] prime the pump, or just cause more ChatGPT to be used. [LAUGHTER]

John: That’s been a bit of an issue, because the authors of those studies have been accused of faking the data, and those studies have not been replicated. In fact, someone at Harvard was suspended recently and is now engaged in a lawsuit about that very issue. So the original research about having people sign their names before beginning a test hasn’t held up very well, and at least some of the data seems to have been manipulated or fabricated. [LAUGHTER] So, right now ChatGPT allows you to do a lot of things, but they’ve been adding more and more features all the time. There are more integrations; it’s now integrated into Bing on any platform that will run Bing. It’s amazing how well it works, and the improvements are coming along really rapidly. Where do you see this going?

Stan: In November 2022, ChatGPT was built on GPT-3; we’re now into GPT-4, and that’s basically only half a year later. I mean, it’s everywhere. For example, in selling books, one of the things you want to do is figure out how to sell more. So I went to Amazon, pulled out all the reviews that I had, sent them into ChatGPT, and said, “Tell me what the top five issues are.” In seconds it assessed them and told me, where this would have taken me a large amount of time; it just did it nice and neatly. Everything is going to have AI in it. AI is being built into Grammarly. All the Microsoft products are going to have AI built in. We’re not getting away from it. We have to learn how to use this in our professions, in our disciplines. With GPT-4, it was said that somebody drew a wireframe of a website, buttons and masthead and text, took a picture of it, gave it to GPT-4, and it wrote the code for that website. It’s gonna be exciting. Buckle up: we had consternation back in January, and we’re gonna have a lot more coming up. It’s just part of what we do. We have to figure out how to stay relevant, because this is so disruptive. In the long line of technologies that have come out, this is really disruptive. We can’t fight against it; we have to figure out how to use this tool appropriately.

Rebecca: The idea of really having to learn the tool resonates with me, because this is something that we’ve talked about for a long time in my discipline, which is design. If you don’t really learn how to use the tools well and understand how they work, then the tools kind of control what you do, versus you controlling what you’re creating and developing. And this is really just another one of those kinds of tools.

Stan: Well, even in the design world, I’ve gone to Shutterstock, and there is a feature that allows you to create a design with AI. The benefit for a designer is that they have a certain language: tone, texture. Their vocabulary is vast, and the prompt they craft would look entirely different from mine… “a snowman, sticks for arms”… it’d be entirely different. Getting the aspect ratio of 16 x 9, everything that you craft into that prompt and feed in: somebody who does design and knows the language is going to get something very different from what a mere mortal like me would get putting that information in. So for somebody in economics, you have a whole language about economics. Somebody trying to craft a prompt related to that discipline has to know the foundations, the language of that discipline, to even get close to being correct in what they’re going to get back. And students have to understand this: they cannot bypass their learning, because they will not have the language to use the tool effectively.

John: And emphasizing to students the role that these tools will be playing in their future careers might remind them of the importance of mastering the craft in a way that allows them to do more than AI tools can. At some point, though, I do wonder [LAUGHTER] when AI tools will be able to replace a non-trivial share of our labor force.

Stan: It’ll affect the white-collar workforce a lot quicker. And I look at it… a nice analogy for AI is from Marvel: you have Iron Man, Tony Stark, and it is the mashup of the human and the machine. He’s using it to get further and faster in his design, and to do things that we hadn’t thought about before. And I see this tool being able to do that: we’re bringing so much information and data to it, it’s mind-boggling, and suddenly you see a spark of inspiration that you couldn’t get to by yourself without a lot of labor. Suddenly it’s there, and you can take that and run with it. For me, it’s tremendously exciting.

Rebecca: So we always wrap up by asking, what’s next?

Stan: Great question. Right now, I’m getting edits back from my editor for my next book, Strategies for Success: Scaling your Impact as Solo Instructional Technologists and Designers. I’ve been doing this for about a quarter century, mostly by myself, helping small colleges figure out how to do this: how do I keep my head above water and try to provide the best support possible? So I’m sharing what I think I know.

Rebecca: Sounds like another great resource.

John: Well, thank you, Stan. It’s always great talking to you, and it’s good seeing you again.

Stan: Yeah, absolutely. And also, a free book… I’m gonna give 100 to the first 100 listeners, but I can go more. There’s a link, it’s bit.ly/teaforteachinggpt, and it’s in the show notes to share, but the first 100 get a free copy of the book.

John: Thank you.

Rebecca: Thank you.

John: We’ll stop the recording, and we’ll put that in the show notes.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

Ganesh: Editing assistance by Ganesh.

[MUSIC]

296. ChatGPT Chat

Since its arrival in late November 2022, ChatGPT has been a popular topic of discussion in academic circles. In this episode, Betsy Barre joins us to discuss some of the ways in which generative AI tools such as ChatGPT can benefit faculty and students as well as some strategies that can be used to mitigate academic integrity concerns. Betsy is the Executive Director of the Center for Advancement of Teaching at Wake Forest University. In 2017 she won, with Justin Esarey, the Professional and Organizational Development Network in Higher Education’s Innovation Award for their Course Workload Estimator.

Show Notes

Transcript

John: Since its arrival in late November 2022, ChatGPT has been a popular topic of discussion in academic circles. In this episode, we discuss some of the ways in which generative AI tools such as ChatGPT can benefit faculty and students as well as some strategies that can be used to mitigate academic integrity concerns.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

[MUSIC]

Rebecca: Our guest today is Betsy Barre. Betsy is the Executive Director of the Center for Advancement of Teaching at Wake Forest University. In 2017 she won, with Justin Esarey, the Professional and Organizational Development Network in Higher Education’s Innovation Award for their Course Workload Estimator. Welcome back, Betsy.

Betsy: Thanks. It’s so good to be back.

John: We’re really happy to talk to you again. Today’s teas are… Betsy, are you drinking tea?

Betsy: Yeah, actually, I was really excited. I have a chai spice tea. I was really excited when y’all invited me back because I’ve actually made a decision to stop drinking coffee as much as I have in the past. So I thought I’d be into all these exotic teas by the time that we recorded this, but nope, just a boring chai tea for today. But maybe next time when I come back, I’ll have some interesting teas for you.

Rebecca: We’ll make sure we ask you to level up next time, Betsy.

Betsy: Great.

Rebecca: I have a cup of cacao tea with cinnamon.

Betsy: Nice.

John: And I have a pineapple ginger green tea today.

Betsy: You all are inspiring me. I love it.

Rebecca: Did you say pineapple, John?

John: Pineapple.

Rebecca: Is this a new one?

John: No, it’s been in the CELT office for a while. It’s a new can of it; it’s a Republic of Tea tea.

Rebecca: I feel like it’s not one of your usual choices.

John: You said that the last time I had this. [LAUGHTER]

Rebecca: Yes, I just don’t associate this tea with you.

Betsy: You have a block.

John: I think I’ve only had it on the podcast two or three times.

Rebecca: Just a couple. [LAUGHTER] I just don’t remember. Clearly. Okay. [LAUGHTER] We’ll move on. We’ve invited you back, Betsy, to talk about ChatGPT. We know you’ve been writing about it, you’ve been speaking about it, and everyone’s concerned about it. [LAUGHTER] But maybe we can start first by talking about ways that faculty might use tools such as this one to be productive in our work.

Betsy: The way that I discovered ChatGPT, back in December, was that I had a colleague who sent a screenshot of asking it to draft a syllabus. So my first encounter was actually with ChatGPT doing something that would help teachers. It’s also the case that I’m a teaching center director, so, of course, I’m thinking of these things, but it certainly shaped what was possible. And it blew my mind what it was capable of doing, in a great degree of detail, actually. Then about a month later, I was working on a curriculum project where I was having to draft learning outcomes. That’s a task that we do in the teaching center a lot, and it’s always hard to get it precisely right and to know the different ways we can phrase an outcome so that it’s actually measurable. So I just started playing around with what its capabilities were in terms of learning outcomes, and I saw that it was actually pretty impressive and generative there. Back then, when there was only GPT-3, I kept trying to see if it could do curriculum maps for us, and I really had to force it and think hard about my prompts to get it to actually map outcomes to courses and curriculum. But when GPT-4 came out, I tried it again. I thought I was going to have to do it step by step, but this time I tried with a philosophy curriculum, and I said: I want 15 courses, I want them to have three to five outcomes each, students need to take a certain number of courses, we want them to hit each outcome three to five times… and I just gave broad guidance. And it gave me a full curriculum as well as a map. It was actually a very good philosophy curriculum. It came up with the outcomes, it came up with the courses. I was floored, and it was my first request. So there are many other things I think we can use ChatGPT for in terms of our teaching, but the curriculum was really, I think, one of the most complex things that I’ve seen it do.
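
(Betsy’s one-shot curriculum request can be approximated with a prompt along these lines; the wording below is a reconstruction from her description, not her exact prompt, and the model name is an assumption.)

    # A reconstruction of the kind of single request described above.
    # The prompt text is hypothetical; she did not specify how many
    # courses students must take, so that is left generic here.
    from openai import OpenAI

    client = OpenAI()

    prompt = (
        "Design a philosophy major with 15 courses, each with 3 to 5 "
        "measurable learning outcomes. Students take a fixed subset of "
        "the courses, and each program outcome should be covered in 3 "
        "to 5 courses. Return the course list, the outcomes, and a "
        "curriculum map showing which courses address which outcomes."
    )

    response = client.chat.completions.create(
        model="gpt-4",  # assumed; she describes using GPT-4
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)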

John: I saw you do that. And so I experimented to have it develop a whole major program, with course descriptions and learning outcomes for the program, as well as for each individual course. And it did a remarkably good job of it.

Betsy: Yeah, I was amazed, because I didn’t really give it much of a prompt. And it had, within the philosophy major, things like comparative philosophy, issues of diversity, environmental philosophy. So it wasn’t just the typical things that you would expect in a philosophy major; it was actually quite innovative in some ways, and I appreciated that. From the perspective of a teaching center consulting with administrators and faculty on curriculum, one of the things we often see is that the little blurbs in our handbooks or bulletins, the descriptions of the courses that students see, are about 150 words, and often they’re very much teacher-centered. So here’s the topic of the course: in this course, you will study this, this, and this. And one of the biggest challenges is how we turn those into outcomes. So I actually tried to do that too: I went through our bulletin, just threw in those 150-word descriptions of the topics, and had it develop three to five outcomes that were measurable. And it did pretty remarkably well. So I think that could be a useful starting place. Again, with a lot of this stuff, you don’t want to just take it as is, but it’s a useful starting place to help our faculty and our curriculum committees brainstorm. And in about a week, we are going to do a course design institute at Wake Forest. We do it every summer, and I’m really eager to have my colleague Kristi Verbeke, and my other colleague, Anita McCauley, experiment with using ChatGPT as part of the process in the course design institute to see if it helps participants speed up or get more ideas as they’re generating various aspects of the design of their course, not just outcomes, but all the way down the line of the steps of course design.

Rebecca: Sometimes it can be really hard to get started, but as soon as you have a start, you know what you want.

Betsy: That’s right. And one response you might imagine to the fact that ChatGPT can draft learning outcomes is someone saying, “Well, that’s a clear sign that drafting learning outcomes is a pretty easy and meaningless task.” But what I have found, not just with my colleagues but for myself, is that when I have a really concrete learning outcome that’s measurable, it helps me design the course better; it’s just so much easier to think immediately of an assignment. But when it’s vague, and I don’t really have it clear in my mind, it’s so much harder to do all the other steps. So even if we think it’s a somewhat trivial task, having ChatGPT help our colleagues come up with really clear learning outcomes will help speed up everything else. At least that’s my hypothesis, and we’re gonna see how that goes this summer.

Rebecca: We’ve played around a little bit with taking those course descriptions that might appear in a catalog and turning them into marketing language, which is very different.

Betsy: Oh, that’s so interesting. And has it worked well?

Rebecca: Yeah, I think it’s definitely a starting place to move it to a different kind of language.

Betsy: So I’m teaching a first-year seminar in the spring, and I’m an ethicist, so I teach a course on sexual ethics. The last time I taught it, I had a pretty conservative title. And it was interesting: I only had women in the class, cis women, there were no men that had signed up, which had not been the case before when I’ve taught that class. So I actually used it and said, I want to attract a diverse group of 18- or 20-year-old first-year students to this class; what are some titles or some quick summaries that I might use? And it was really fun to see some of the ideas it gave me. I ended up mashing a bunch together, again, taking pieces of it as an expert and pulling it together. But it certainly got me thinking in a way that would have taken me much longer if I didn’t have that help.

John: I used ChatGPT to create an ad for the Tea for Teaching podcast just to see how it would work and I posted it on Facebook, and I got quite a few responses from people saying, “I use this all the time in my work.”

Betsy: Yeah.

John: This is a tool that’s out there and that came up really quickly, but we’re still at a really early stage of this. A lot of faculty are really concerned about issues of academic integrity and so forth, and we can talk a little bit about those. But we have to prepare students for the world in which they’ll be living, and that’s one where AI tools are going to be ubiquitous. You do a lot of work with ethics. How can we help students learn how to ethically use ChatGPT, in college and beyond?

Betsy: Yeah, I think it’s actually a fabulous question. A lot of folks come to me to talk about ChatGPT in terms of teaching and learning, and of course I have lots of thoughts about that. But I actually have been particularly consumed with reading about the much bigger questions about what AI means for humanity, to be quite frank. There are really dramatic and important questions that we need to think about. Sometimes people will think that that’s just hype: “Oh, that AI might take over the world, or that it might have these dramatic effects.” But if you actually talk to people who are experts in artificial intelligence, they’re really worried. And when the experts are really worried, it makes me very worried. So when we think about preparing our students, on the one hand, you can think about it as preparing them to use a tool that they need for their career, kind of like, “I need to teach them how to use Excel, or I need to teach them basic productivity tools.” And that’s really important, don’t get me wrong. In fact, a lot of students don’t learn how to use Excel, and they don’t learn how to use these productivity tools. I have colleagues that I’m teaching these things to where I’m like, “Oh, you didn’t realize you could use this; it makes your life a lot easier.” But I think the bigger issues are preparing them to think about the potential implications: to really understand what the tool is doing and what that means for how we understand human intelligence, how we think about consciousness, what it means for whether we want to have a world in which there are artificial intelligences that we might have moral obligations to. All sorts of huge, huge questions. Now, I don’t think all teachers need to address those issues, just like all teachers probably don’t need to teach the technical stuff. But I certainly think, when we are thinking about curriculum, it’s essential that our institutions think about helping our students think critically and philosophically about what artificial intelligence means. My guess is that some of our students, and some of our faculty, haven’t played around with it a lot and think, “it’s just another thing like Grammarly, it’s not that big of a deal.” But we have found at Wake Forest that when we invite experts in… linguists, or computer scientists, or machine learning folks, or ethicists… to come and talk about these tools and how they really work, folks have their eyes opened and realize, “Oh, this is a bigger deal than we thought it was, and we might need to think about regulation [LAUGHTER] and what comes next.” So policy issues, not just ethics issues. We don’t have an answer, except for the fact that we need to be talking about it. I have some ideas myself about what I think regulation should be, et cetera. But I do think our students shouldn’t just be seeing it as a tool to make their lives easier, although it is; it’s also important for them to think through the implications for society. And then, as another ethical piece, as we address the issue of academic honesty, we should be helping our students think about their reasons for choosing to take liberties they were not authorized to take, and to think about their own character.
And that’s going to have to be an approach that is somewhat different than just punishment, to help our students behave in ways that we wish them to.

Rebecca: I know that my colleagues and I have had some really interesting conversations around AI related to visual culture and creating visual items, because a lot of the libraries of images are copyright protected. What does it mean when you’re taking something that has these legal protections and mashing it up into something new? And then whose property is it? These lead to really interesting conversations: you start thinking about it as a maker, with your work being part of a library of material like that, and also about what it means when you’re using work that’s been created that way. So one of the things that we’ve been talking about is that there’s policy at all levels: what’s our departmental policy around these things? And what kind of syllabus statements might we use to be consistent across courses?

Betsy: Yeah, and I think one of the most important things, and it’s gonna take some time, is for all of us to get clear on what we think our policies or our positions are going to be about what is appropriate and what’s not appropriate. And then, once we do, to really communicate that to students, because I think they’re in a place right now where it’s all over the map, and many instructors aren’t actually sharing that with them. That gets us into a fuzzy situation where students assume, “Well, if this professor said this, then this professor would be okay with it,” and often it’s very different. So how do we at least have a conversation at the beginning of the semester with our students about what we think? And I actually think, as you point out, Rebecca, it’s a learning opportunity too for students to co-construct some of those positions. So let’s talk about the reasons why we might not want to just say it’s a free-for-all. We can talk about the value of art and the value of our work as artists, and what it means to just use somebody else’s work without acknowledging it. And maybe there are ways to acknowledge it. Unfortunately, one of the challenges of these image generators is that we don’t necessarily know what they’re drawing on. So that’s one interesting regulation question: could there be a way? I mean, I don’t know. It’s tough. One of the challenges with the science of this stuff is that often those who create it don’t know how it’s working. [LAUGHTER] And they will tell you that, that it’s a black box. So to be able to get in there and say, “Well, I will reveal it to you…” I think sometimes folks assume they’re not telling us because they want it to be proprietary, but often they’re not telling us because they don’t actually know how the algorithm was developed or is doing its work. So that’s a really tricky situation. We did a series of workshops for our faculty this semester, and in one of them we brought in some experts, some copyright experts and some lawyers, who came in and talked about this. There are really fascinating questions about copyright in our work that, again, are a great opportunity for students to engage with in a real, live way that they see happening.

John: Going back to the whole issue of copyright: in terms of human history, that whole concept is relatively new. When artists created new work, they started by copying the work of others, and they added their own twist. And in general, in pretty much all academic disciplines, the work that people are doing now is built on the work that others have done before. Is what ChatGPT is doing, in part, just the same type of thing that humans have been doing, except that instead of spending years learning how to do this, and building on it slowly over centuries, it’s doing it in a few milliseconds?

Betsy: Yeah, and I’m not an expert on the arts, so I’m sure there are lots of experts, and Rebecca, you can jump in here as well. But I would say that there are certainly questions about: is it harming anyone? That’s often the question we’re asking with ethics: is it harming anyone to engage in this practice? And even if we don’t know we’re using somebody else’s work, we often are; our ideas build on one another, etc. But of course, in a capitalist society where artists make money based on their work, there are new questions about how I preserve my livelihood in this particular context. Now, if there were a different context in which we supported our artists so that they didn’t need to make money off of their work, because we gave them a basic income, there might be a different question involved there. So actually, I think the economic questions… so I’m tying you both together here, economics and art, this is great… the economic questions are really interesting: what does this mean for the future of labor, and how do we think about work in the future? Granted, it seems like it’s not going to be immediate, but there might be long-term implications for all of us that we need to rethink as well. I don’t know, Rebecca, do you have thoughts about that?

Rebecca: Yeah. I mean, I think it’s interesting. John’s really pointing to the printing press, which is when copyright came about, when it became easier and less time consuming to make copies of things. And then in 1998, copyright law changed again, because digital files made copying so easy.

Betsy: Napster. [LAUGHTER]

Rebecca: Yeah, copyright law hasn’t kept up with technology over time. So there are constantly these conversations about technology and creative work and what it means. I come from computer art, so generative art is a thing that we do, and that’s algorithm-based; you could argue that the machine is collaborating, in some ways, since you write the algorithm. So I think there’s a trajectory here: this has been happening for a long time. But it does raise a lot of interesting questions. And I think it’s really important for our students to grapple with and really critically think about, and for us to critically think about together. In some ways, it’s nice because it gives us something to have a good constructive conversation around and really sort through together.

John: And on a perhaps less positive note, in terms of the economics behind this, there have been a lot of stories of people taking on two or three jobs and using ChatGPT to do two or three times as much work as they did before. One of the issues I’ve addressed with my students in my labor economics class is: if we have these tools that can do the work that college graduates used to do, will there still be a demand for college graduates to do those tasks? Most technological change in the past ended up replacing less skilled workers and provided a really nice return to those who had college degrees. But this type of innovation might very well hit a bit more heavily on college graduates than most previous innovations.

Betsy: Yeah, it’s hard to actually talk about this, because I feel like every week it gets better and better. So I could say, “Well, currently, here’s the set of skills; if we’re an expert, we can use it to level up a bit.” As I shared with the curriculum mapping, I’m able to ask it things, and because I’m an expert, I’m able to push it with appropriate prompts right in the direction I want it to go and then produce this wonderful outcome. Whereas sometimes, just to show my faculty, I tried to put in questions from, like, physics, and I couldn’t really assess whether it gave an appropriate answer or how I needed to push it. So there’s part of me that thinks that there will be roles for expertise. But then again, how good will it get? Who knows? Will it eventually outcompete us? …which is somewhat of a worry… but I do think that, at least currently, there’s still a role for expertise. But you’re right, it’s going to make us more efficient so that we can do more. And then the question is, if we’re doing more, will we need fewer workers? Or will we just be more productive? All sorts of interesting questions there. I will say just a funny little story about this point about computer art and economics: our office is called the Center for the Advancement of Teaching, and our acronym is CAT. [LAUGHTER] So we make lots of jokes about that. We have a serious logo, but all along we’ve been thinking we should have a fun, funny cat logo, like with an actual cat; we just haven’t had the money for it. We have this wonderful designer we work with, she’s amazing, we got a quote, and we’re going to do it if we have money left over at the end of the year. But I was just playing around with Midjourney, like, what can it do for me? And I mean, it’s not as good as hers will be, I don’t think, but it was pretty remarkable, especially since this is just a fun logo, not our serious logo, that I could just use it instead of paying somebody to do it. So this is the real challenge: maybe it’s just the most advanced things that we’re still going to rely on experts for, while some of the basic stuff that we would have paid for, we’re no longer going to, and what does that mean, economically?

Rebecca: There were some existing services prior to AI, like people who didn’t have degrees or didn’t have a background in design, who would whip up something for five bucks. [LAUGHTER] And sometimes it looks like it was whipped up for five bucks.

Betsy: [LAUGHTER] You probably would think that about my Midjourney examples, I’m sure, Rebecca, I’m sure. Yeah, it’s so funny. So this is the other thing too: students will eventually get better at this, like googling prompts. I went to a website that said, “Here are these professional designers who design logos; ask it to do it in that person’s style,” or “Here’s some language that you can use for the prompt, like: vector, flat,” that sort of thing, or a mascot logo, which I didn’t even know was a thing. But I guess if I want a cat logo, it’s a mascot logo. Learning those things, which I never would have prompted, actually helped me get something that was a little bit better. It is fascinating. And I think that’s true in my experience with the tool in general: the more you use it, the more you learn what it’s capable of. I do think that a lot of our faculty have not really spent a lot of time experimenting with it, for a variety of reasons… they’re busy, et cetera… but I often encourage them to spend as much time as possible with it to really understand what it’s capable of doing. I was sharing with John, before we started this podcast, that plugins are now possible with ChatGPT, and the plugins just take it to a whole next level, beyond even GPT-4. I’m still starting to play around with that. And I think it’s just something that, again, faculty need to be prepared for, because right now they’re saying, “Oh, it can’t cite things, or it can’t search the web.” Well, now it can. What do we do about that? How do we keep up with it if we aren’t paying attention to it?

Rebecca: I think one of the things you said earlier, and alluded to when you were talking about the logo, is needing expert language and expert concepts to be able to craft the prompts. So if students want to use tools like this in a productive way, they also have to have a certain level of expertise, presumably, to do a good job. If we want to encourage students to use a tool like this productively, what can we do to coach them? And do we want to coach students in this way?

Betsy: The question about whether we want to coach students is a really interesting one. There are folks, I think, who are anxious that if you teach them how to use it, they’ll use it in inappropriate ways. My response to that is that we’re gonna have to address their desire to violate norms in a different way. That’s a different issue; it’s an issue of character, an issue of ethics, because I think they are likely to do it anyway. Now, it’s true that if they don’t really know how to use it, we might find it easier to detect, but I’m guessing that in a year it’ll be harder and harder to detect, even if they don’t know what they’re doing. As of right now, I would say I think it is useful to teach them. I wasn’t teaching in the spring, but I am teaching in the fall, and I’m really excited to think about it. I’m not going to totally redesign my course… some people have done that… I’m not going to make radical changes, but I will engage in the conversation with them about the ways that it can be used. I think some of the most important, honestly, are using it to explain material that they didn’t understand, or using it to interpret my prompts, if they’re weird, or my expectations… helping the students use it to help them with their learning, getting feedback on their work… and I have a whole list of things that I would recommend, which is somewhat different than what we immediately think about when we think about students using ChatGPT. We think about them using it to write their papers, or to start, brainstorm, or give an outline. And all of those things might be great. But as a person who’s interested in pedagogy, and particularly in student learning: there are only so many hours in the day, I have so many students, I can’t be with them one on one for 15 hours a week. But if there’s a way in which they can have something like a tutor who’s there with them to say, “Here’s what I think might be the explanation of that thing you didn’t understand,” or “Let me help you interpret this paragraph and put it in words for a sixth grader,” or “I’ll give you an analogy related to sports, if that’s what you know…” [LAUGHTER] All of those things are amazing opportunities for our students to accelerate their learning, and that’s what we want. So it is true that these tools can be a threat to learning if students are just using them to write their papers in a literal copy-and-paste kind of way. But I also think there’s real opportunity to help them accelerate their learning. And again, you have to be careful, because it’s not perfect, and that’s your point about expertise. But frankly, sometimes the advice they get from their friends, or if they Google it or go to YouTube, isn’t great either. So it doesn’t have to be perfect to be better than what they’re currently doing, I guess is what I would say. And then we could talk, if you wanted to, about how they might use it as a writing tool. I think it’s trickier there, and I’m sure you’ve heard this before in the previous ChatGPT podcasts you did: it ultimately depends on your learning goals, what your goals for your course are, and how you want to use it. But I do think there are certainly legitimate ways in which we can help students use it to help them learn more.

John: Just following up a little bit on that: I’ve heard of a number of faculty who are encouraging students to use it to create tutorials on specific topics where they may have a weaker background, and that’s certainly a very good potential use of this.

Betsy: And I have even experimented with things like, “Okay, so give me feedback on this, and then give me a learning plan,” like, “Give me an improvement plan. What should I do? What steps should I take to get better at this skill?” …and it’ll actually give you pretty good plans. It can also help them with time management, you know, “I have this many things I need to do; help me prioritize what I should work on next.” That’s good for us, [LAUGHTER] but it’s also really good for our students who really struggle with time management, I think. So I really do think there are a number of things that students can use it for that I would feel comfortable with. But I also think it’s a really useful exercise for everyone listening, for any instructor, to think through the possibilities. You may decide these things are okay and these things aren’t okay, and it may differ for each class. But that’s really important to do before you can actually communicate to your students what is and is not okay. And if you want them to actually do what you ask them to do, you have to have good reasons, I think. You can’t just give them a rule; you should justify that rule. It’s a little bit like with kids: you’ve got to say why you think it’s the case, to hopefully bring them along on why they wouldn’t want to just use the tool to shortcut their learning.

John: One thing I’ve used with some students, especially when I’ve talked to them about some of their uses of ChatGPT, is this: if all they’re learning in the course is how to type a prompt into ChatGPT and copy and paste the response, what skills are they acquiring that are going to be useful when they leave? They could be replaced by anyone typing in those prompts.

Betsy: Right? What makes them unique? So one frame for this is that the things we’re teaching students in school are useful for them; we want them to learn so that they can be productive in the market. That’s one way we often frame the work that we’re doing. But I think this gives us an opportunity to open it up a little wider and think about the purposes of education beyond just what is going to be useful in the market. I sometimes use the example of pottery… we’re coming back to art… I took a pottery class after COVID; it was like the first thing I wanted to do after we were back in person, so I took a wheel-throwing class, and I was absolutely terrible. But the idea that I would just cheat by bringing in something from Target that was made by a machine? No, there’s a reason I’m doing it. I want to actually learn the craft, and the craft has meaning in and of itself, apart from the fact that, yes, I could get a much better bowl [LAUGHTER] from Target than anything I will be able to create. But I’m really glad I’m doing it. And I think that’s really what’s gonna start happening: we’re going to start to see that there’s actually intrinsic value to some of these tasks, apart from their value for the, you know, “Am I gonna make more money later?”, etc… that we actually think that learning and thinking and the creativity that comes with producing are values in themselves. And that’s going to take a while, to turn our students in that direction again, because they’re so market driven right now. But if things start changing in the market, and there are fewer and fewer jobs, they may be open to that conversation.

John: And maybe with the growth of alternative grading systems that try to shift the focus away from extrinsic rewards to intrinsic rewards, this could be quite complementary.

Betsy: Yeah, and I know that we don’t want to talk all about academic honesty, but it is a real question, and I don’t want to dismiss the faculty who are anxious about this. I was sharing before the podcast was recorded that the news about Chegg losing so much money in the past few months was a real indicator to me that perhaps my optimism [LAUGHTER] about students not using it was misplaced; that’s pretty good indirect evidence that a lot of students are now using ChatGPT to do what Chegg used to do for them. And that’s not good for us, and it’s not good for the students, so we do need to think about it. But I do think there are two broad approaches. One is the punishment and enforcement approach, and the other is prevention. And I think focusing on prevention is really where we need to go: focusing on the intrinsic value of the work, maybe pulling away from those high-stakes graded assessments, as a way to think about motivational changes for how we prevent students from engaging in this. I sometimes use the example, again, of the pottery class. The idea that I would be motivated to cheat in a pottery class is absurd. Why would I cheat in that class, when I’m just doing it for my own sake? Now, if I were doing it so that I could get more money, or so I could get a grade so that I could get into something else I wanted, then I might be tempted to cheat in the pottery class. We know that students cheat because of the grade; they don’t cheat because they think that’s the fastest way to learn. [LAUGHTER] They know they’re not learning, but they’re like, “I need this grade, because I need this degree, so that I can get this job.” So really bringing them back, decreasing that external stuff, and taking them back to the value of learning may be the only way we’re really going to tackle this. Now, it’s easier said than done. We’re all in a system where grades matter and students need to get degrees, so it’s a longer conversation. But I do think revisiting some of the literature on cheating, even from before ChatGPT existed, is going to be really valuable for all of us.

Rebecca: So you’ve talked about moving toward more low-stakes opportunities, and we’ve hinted at alternative grading. What are some strategies that faculty can use to assess student learning? The conversations we’re having center on the concern that we’re not able to see whether or not a student is learning if they’re using tools like these.

Betsy: Yeah, so there are things you can do to hopefully try to prevent it, but those may not always work. I have a lot of ideas for how to prevent it. You can give them extrinsic reasons for doing the work themselves. For example, and this is just a simple one: let’s say you’re teaching a math class, and you have an in-person final, and you tell them, “You’re going to be preparing yourself for the in-person final by doing the homework yourself.” The extrinsic reward of the in-person final will hopefully motivate the students to do the practice problems themselves, because they need to actually learn the thing that will get them that reward. So again, there are lots of motivational things to talk about. But if they do use it, first of all, how do we know? That’s a really interesting concern. One interesting point that I’ve raised in some conversations is that when we talk about needing accurate assessments of student learning, the first assumption is that we need grades that are just, so that when we pass them on to jobs, or to future courses, we have just grades. But I actually think there’s a real learning reason why we want accurate assessments: if I can’t accurately assess your skills, you’re not going to learn. I want to know where you are really struggling, so that I can adapt my teaching to better help you learn. And if it looks like you’re doing great, I’m moving on; I’m not going to actually help you learn that thing. So it’s really important for learning as well that we have really accurate assessments of their skills. And if they are using it, how do we detect it? Tough one, but I think that’s where multiple measures come in. You might imagine you have some in-class things happening, and you’re not just lecturing… this is a good reason for active learning as well… because you’re engaging your students in class, you actually hear them speak and explain things to you in class. And if they’re struggling there, and then all of a sudden they have this beautifully written paper, I think that’s a useful comparison. It’s no guarantee, because sometimes students need time to reflect… particularly English-as-a-second-language learners need time to build their arguments, rather than being put on the spot in class… but it is interesting evidence. And there are people talking about oral exams and other possibilities, or at least having conferences with the students about their work. So it’s not an exam, just, “Let’s meet to chat about this.” Now, of course, if you’re teaching a huge class, that’s not possible or available to you, but for those teaching smaller classes, it might be. So I think we’re gonna have to be creative. I have not found a silver bullet here. I have heard lots of great ideas of things that could be possible, but all of them have trade-offs, all of them come with downsides. And this is kind of my mantra whenever I think about pedagogy issues: we should not get too absolutist about this. All of us are going to make different choices, and they’re all gonna have different downsides, and they’re all pretty reasonable, because right now there is no obvious solution that we all should be adopting.
I think some may choose to do oral exams, some may choose to do in-person exams, and others may choose to say, “I’m not gonna pay as much attention as some others are.” All of those things, I think, are reasonable; they’re just different approaches. We should keep paying attention and be open to changing our minds if it seems like it’s not working. But I don’t feel like it helps us to be in one strong camp or the other when we think about the issues of academic honesty and ChatGPT. So again, I don’t have an answer, just lots of questions for you. But did you find anything that was useful over the past semester for you?

Rebecca: We teach really different things, so our approaches are going to be very different. In my classes, we’re doing creative work, and so historically, and we continue to do this, documenting your process is part of the project. We see a project evolve over time. That may involve the use of AI as an input along the way, but the documentation is what shows how it happened. We do critiques, we show things in progress, and we talk about it, and there’s feedback that’s recorded at those moments. And if we’re not responding to feedback, then we’re not growing. So we have some systematic ways of demonstrating the creative process, and students have to discuss and defend their design or creative decisions, like, “Why did you make that decision?” And if it’s just a random choice, then let’s be intentional about it, and now you need to rethink that choice and make it more intentional. So those kinds of authentic learning opportunities really do push things in a direction where it’s a lot more difficult to use AI for the entire thing. [LAUGHTER] It might be a part of the process, but it wouldn’t be the final output.

Betsy: John, I want to let you respond too, but one thing about what you’ve done, Rebecca, because that is certainly authentic learning and process-based work: as you put it, it’s more difficult. But that doesn’t make it foolproof, and this is important; some people will point out that ChatGPT can describe a process too, or you can ask it to generate process write-ups, etc. So one of the things I appreciate about your example is that there’s a lot going on in class. It’s harder for folks who are doing asynchronous online courses. But if there are ways in which we actually see the process, that’s the authentic part too: we’re not just assessing a product, we’re literally live with them, watching the process. I think we might be more likely to get something accurate that way than if we just said, “Okay, we want you to write about your process.” Actually seeing the process is what’s most important. So, John, what about you?

John: The classes I’m most worried about are my large class, which has up to 400 students in it, and an online class on the same topic with generally 40 to 50 students in it. There are some challenges there. In the large class, one of the things I’ve done since the start of the pandemic is to shift all the assessment to online activities. I used to have a midterm and a final that were cumulative; they weren’t a tremendously large portion of the grade, and there were lots of low-stakes tests that students could do over and over again. But the validity of those, I suspect, is going to be a bit different now, because ChatGPT can do quite well with multiple-choice questions, short-answer questions, and even algorithmic questions. So I’m probably going to bring back at least a midterm and a final in person in my large class, just for the reason you described, the motivational thing: you can practice these things as much as you want in order to learn them, but you’re going to be tested on them, and the greater your ability to recall and apply these concepts, the better you’ll do. I wish I didn’t have to do that, because there’s so much advantage in letting students do things over and over again until they master them. But I’ve looked at the completion times on some of the quizzes I used this time, and students were turning them in [LAUGHTER] much more quickly than would have been possible had they not been relying on some sort of assistance.

Rebecca: Well, John, they’re just learning it so much better.

Betsy: Yes, that’s right. That’s right. That’s right.

John: And a nice side effect is you no longer get any spelling or grammatical errors.

Betsy: Yeah, you can read it faster as well.

John: Yeah, it makes it easier. [LAUGHTER]

Betsy: Yeah, no, and as much as I believe we should trust our students, and I don’t want to be overly alarmist, there’s a lot of evidence that our students are doing it, and even students who would prefer not to are doing it because they perceive that all the other students are. This was the same problem during the pandemic with academic honesty: you have some students who will never cheat, for whatever reason, [LAUGHTER] a small number… you have some students who will always cheat; they’ll find ways, they’ll pay somebody, whatever… and then there’s a whole bunch of students in the middle for whom the context really matters, and if they assume that all the other students are doing it, it puts them at a disadvantage not to do it. We shouldn’t put our heads in the sand or assume that none of our students are doing it. We also shouldn’t assume that our students are horrible people because they’re doing it; we need to recognize that they’re doing it and ask how we can help create the conditions where they would be motivated not to do something that gets them in trouble. And I do think your point, John, about the 400 students and about teaching an async online course, and even, Rebecca, some of your description of what you’re doing in class, push against our traditional model of how higher education happens. I don’t have any illusions that this is going to change, but we assumed for the longest time that learning started with a lecture. That’s why 400 students didn’t matter versus 20. We also assumed that most of the learning would take place outside of class, because you would just come to a lecture, and then you would go read the book and basically teach yourself. It’s this old-school model where the professor is just there to give you information, and you’re going to teach yourself before the exams. I can imagine a world in which, if we really want to see process, we need to be with our students more than three hours a week, and we need fewer students in the course. But that would be such a radical change to the economic model of higher education; I can’t imagine how expensive that would be. It is more similar to K through 12, and in some ways, I think K through 12 folks have an advantage, because they’re with the students so much more that they can actually watch them, and homework is less important. One of the most important things I’ve told my students for years is that most of their learning will take place outside of class, and I emphasize that to them. Maybe now that creates a challenge, because we’re not with them, so we can’t see whether they’re doing what we want them to do. So we really have to lean into the intrinsic motivation pieces: what is it that motivates them to want to do well? But with 400 students, they don’t know you really well, John, so they don’t feel the guilt that comes with having a relationship with their professor. It is tough. And I guess I would say, on this point about academic honesty, and maybe we don’t have to keep talking about academic honesty, but I’ve seen a lot of faculty feel really guilty about their approach to this on both sides: either they’ve been too harsh, or they’ve ignored it too much, and they’re super anxious about whether they’ve taken the right approach.
And I think the most important thing I would say to instructors is: this is really hard. Don’t beat yourself up about it; you’re trying your best, and none of us have a perfect system. If we did, we’d be able to sell it, and it would be great. [LAUGHTER] We don’t have a perfect system. Some of us are leaning in one direction, and others are leaning in the other. And it’s really demoralizing when our students cheat, and that makes us depressed as well. But also know that you’re not the only one; all of us have students who cheat, and that’s, unfortunately, part of the educational process. So do your best. [LAUGHTER] Pay attention. But don’t worry if it’s not a perfect outcome.

John: One of the things I was struggling with just recently, as I was grading exams, is how to evaluate work that is clearly the student’s own versus work that probably wasn’t. I don’t want to penalize students for actually trying.

Betsy: I think some people say, “Ah, let’s just ignore it. It’s not my job to be a cop.” But I think the reason we want to pay attention is an ethical one: I don’t want the students who actually put forth the effort to be disadvantaged. So I think that’s the right impulse, John, for sure.

John: One thing I hope doesn’t happen, though, is that we move to proctored exams online, or to more use of high-stakes in-person exams and so forth, because that would go against so many other things that we’ve been arguing for in terms of equity and inclusion.

Betsy: Yeah, and also these detectors. So TurnItIn, which most folks are using now, because many schools have TurnItIn attached to their LMS, turned its AI detector on by default, even before schools had an opportunity to make a choice. So your institution has to choose collectively to turn off the AI detector in TurnItIn. And I think that’s important to think about: are we just going to move to these detectors as a way of punishing students? And are they reliable enough? We don’t know. So there are all sorts of good equity questions. Actually, there’s a preprint paper I read about how the detectors seem to flag international students more than those who speak English as a native language, in part because their grammar is better [LAUGHTER] and so more formulaic, because we’re teaching them the formula of how to write English. And we need to be mindful of how we balance these things, our equity concerns… and really they both are equity concerns, as you point out, John… so there are equity concerns about more high-stakes testing and in-person testing, etc. But if we just ignore it, there are also equity concerns for the students who do the work, versus probably the privileged kids who are just going to say, “Whatever, I’m going to pay my $20 a month for GPT-4” and be able to get the better answer. So how do we come up with some sort of solution that balances those? And we probably won’t be able to have one… at least I don’t think there’s one… where there aren’t some harms. So it’s really about which harms we are willing to tolerate while we work toward a better solution. And that’s the hard part of ethical reasoning: in these dilemmas, there’s usually not a solution where no one is harmed.

Rebecca: One interesting thing you said is about spending more time with your students, which I have the luxury of doing in a studio art space; we spend twice as much time with students for the same credit hours…

Betsy: Interesting.

Rebecca: …which is valuable. We see process, we get to know our students really well; it’s a relational space [LAUGHTER] for forming relationships. And that really does change the dynamic. But that’s a really big time investment, not only in the cost of faculty in those spaces, but also for students from certain backgrounds. If they have to work, it becomes much more difficult for them to take those kinds of classes, because they’re offered at particular times, they’re longer, and they’re harder to schedule a job around. So there are equity issues in that space, too, as you’ve alluded to, about being in person.

Betsy: Yes, being in person, and then also the point about extra time. I was on before, talking about workload. One interesting thing related to workload is that we know, from the research on student learning, that time on task increases learning. So sometimes, I think, when we talk about making things accessible to students who are working 40 or 50 hours a week, what we’re really doing is reducing the work that’s required of them. Which is fine if it’s just about getting the degree, and I think you can make interesting ethical and policy arguments that that’s really important, because economically it allows them to advance, etc. But if it’s about learning, we actually shouldn’t be reducing the amount of time they’re spending, because they’re going to learn less. And then there’s that tricky question: if we need students to spend 40 hours a week on school, what do we do? We’d have to compensate them so that they’re not having to work; there are much larger policy issues at stake here beyond just expecting them to buckle up and work 80 hours a week instead. These are all tough things. And in the context we’re in, where we don’t have those amazing policy-based governmental solutions in the United States, we have to make compromises. We may say, “Well, maybe a little less work for the students who are working is the compromise we’re going to make for the greater good in this situation,” while recognizing that they’re probably learning less if they’re not putting in the 40 hours. But maybe with ChatGPT we can speed it up, I don’t know. [LAUGHTER] Interesting questions about efficiency.

Rebecca: So, we always wrap up by asking “What’s next?”

Betsy: So I’ll speak a little bit to what’s related to artificial intelligence. In July, I am going to the Council of Graduate Schools’ summer workshop and New Deans Institute. They’ve invited me out to share a little bit about ChatGPT, and I’m really excited to be thinking about what’s distinct about graduate education with respect to these tools; it merges my interest in faculty use with thinking about student use for learning. In addition to that, we’ve been talking a lot about how to prepare for the fall, when the faculty come back. Just like with COVID, we were sort of flying by the seat of our pants in the spring: “here are some things we’re gonna roll out for you.” We’d like to be a little bit more intentional in the fall. And as I’ve alluded to in this session, I really do think focusing on motivation for students is going to be really important, instead of detection. So we’re gonna do a faculty reading group on Jim Lang’s Cheating Lessons, which still holds up pretty well, actually. And then we’re also going to read the Grading for Growth book that’s coming out in July; we’re super excited about alternative grading. I’m teaching in the fall, as I said, so I’m excited to actually try some of these things out and see if my ideas are practical [LAUGHTER] or not. And I guess I’d also say: what’s next? I hope there’s some regulation. We didn’t get into a lot of details about this, because we were focusing on teaching and learning, but I know Sam Altman and Gary Marcus were before Congress, and I do hope that, unlike with social media, we actually see some movement toward regulation of the development of these tools. What we have now… fine, let’s figure out how to use it. But it’s really anxiety inducing to me that these tools develop skills emergently that nobody planned; it’s just, “oh, now it has this new skill.” The more we build these tools out, the less we actually know what we’re going to create, and I think [LAUGHTER] that is a little worrisome to me. So I hope that what is next is more regulation of these tools.

John: We should note that we are recording this several weeks before it’s actually released. And we hope that at the time when this is released, [LAUGHTER] we haven’t reached that AI apocalypse that so many people have been worried about.

Betsy: That’s right. That’s good, John, thank you.

Rebecca: Well, thank you so much for joining us, Betsy. We always enjoy talking to you.

Betsy: Thanks for having me.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

[MUSIC]

274. ChatGPT

Since its release in November 2022, ChatGPT has been the focus of a great deal of discussion and concern in higher ed. In this episode, Robert Cummings and Marc Watkins join us to discuss how to prepare students for a future in which AI tools will become increasingly prevalent in their lives.

Robert is the Executive Director of Academic Innovation, an Associate Professor of Writing and Rhetoric, and the Director of the Interdisciplinary Minor in Digital Media Studies at the University of Mississippi. He is the author of Lazy Virtues: Teaching Writing in the Age of Wikipedia and is the co-editor of Wiki Writing: Collaborative Learning in the College Classroom. Marc Watkins is a Lecturer in the Department of Writing and Rhetoric at the University of Mississippi. He co-chairs an AI working group within his department and is a WOW Fellow, where he leads a faculty learning community about AI’s impact on education. He’s been awarded a Pushcart Prize for his writing and a Blackboard Catalyst Award for teaching and learning.

Show Notes

Transcript

John: Since its release in November 2022, ChatGPT has been the focus of a great deal of discussion and concern in higher ed. In this episode we discuss how to prepare students for a future in which AI tools will become increasingly prevalent in their lives.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

[MUSIC]

John: Our guests today are Robert Cummings and Marc Watkins. Robert is the Executive Director of Academic Innovation, an Associate Professor of Writing and Rhetoric, and the Director of the Interdisciplinary Minor in Digital Media Studies at the University of Mississippi. He is the author of Lazy Virtues: Teaching Writing in the Age of Wikipedia and is the co-editor of Wiki Writing: Collaborative Learning in the College Classroom. Marc Watkins is a Lecturer in the Department of Writing and Rhetoric at the University of Mississippi. He co-chairs an AI working group within his department and is a WOW Fellow, where he leads a faculty learning community about AI’s impact on education. He’s been awarded a Pushcart Prize for his writing and a Blackboard Catalyst Award for teaching and learning. Welcome, Robert and Marc.

Robert: Thank you.

Marc: Thank you.

Rebecca: Today’s teas are:… Marc, are you drinking tea?

Marc: My hands are shaking from the caffeine, there's so much caffeine inside of me. I started off today with some… I think it's Twinings Christmas spice, which is really popular around this house since I got it in my stocking. My wife is upset because I'm a two-bag-per-cup person. She keeps saying you've got to stop that, so she cuts me off around noon [LAUGHTER] just to let me sort of dry out, for lack of a better word, from caffeine withdrawal.

Rebecca: Well, it’s a great flavored tea. I like that one too.

John: It is.

Rebecca: I could see why you would double bag it.

Marc: I do love it.

Rebecca: How about you, Robert?

Robert: I’m drinking an English black tea. A replacement. Normally my tea is Barry’s tea, which is an Irish tea….

Rebecca: Yeah.

Marc: …but I’m out, so I had to go with the Tetley’s English black tea.

Rebecca: Oh, it’s never fun when you get to go to your second string. [LAUGHTER]

John: And I am drinking a ginger peach black tea from the Republic of Tea.

Rebecca: Oh, an old favorite, John.

John: It is.

Rebecca: I’m back to one of my new favorites, the Hunan Jig, which I can’t say with a straight face. [LAUGHTER]

John: We've invited you here today to discuss ChatGPT. We've seen lots of tweets, blog posts, and podcasts in which you both discuss this artificial intelligence writing application. Could you tell us a little bit about this tool, where it came from, and what it does?

Marc: I guess I'll go ahead and start. I am not a computer science person, I'm just a writing faculty member. But we did kind of get a little bit of a heads up about this in May, when GPT3, which is the precursor to ChatGPT, was made publicly available. It was in a private beta for about a year and a half while it was being developed, and then went public in May. And I kind of logged in, through some friends of mine on social media, to start checking out and seeing what was going on with it. Bob was really deep into AI with the SouthEast conference. You were at several AI conferences during the summer as well, Bob. It is a text synthesizer. It's based off of so much text just scraped from the internet, and the model has 175 billion parameters. It's just sort of shocking to think about the fact that this can now be accessed through your cell phone, if you want to do it on your actual smartphone, or a computer browser. But it is something that's here. It's something that functions fairly well, though it makes things up sometimes. Sometimes it can be really very thoughtful, though, in its actual output. It's very important to keep in mind, though, that AI is more like a marketing term in this case. There's no thinking, there's no reasoning behind it. It can't explain any of its choices. We use the term writing when we talk about it, but really what it is doing is just text generating. When you think about writing, that's the whole thinking process: going through and being able to explain your choices and that sort of thing. So it's a very, very big math engine, with a lot of processing power behind it.
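
For readers curious what the API access Marc describes looked like in practice, here is a minimal sketch of a GPT-3 completion call. It assumes the legacy openai Python SDK of that era and an API key; the model name, prompt, and settings are illustrative placeholders, not anything the guests used.

```python
# A minimal sketch of calling the GPT-3 completions API of that era.
# Assumes the legacy openai Python SDK (pre-1.0) and an API key; the
# model name "text-davinci-003" is illustrative of the GPT-3 family.
import openai

openai.api_key = "sk-..."  # your OpenAI API key

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Summarize the main argument of this paragraph: ...",
    max_tokens=150,
    temperature=0.7,  # higher values produce more varied text
)

# The API returns one or more completions; print the first.
print(response["choices"][0]["text"])
```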

Robert: I completely agree with everything Marc's saying. The way I think about it, and I believe it's true, Marc, as far as we know, is that openAI is still using GPT3 here, so it's really the same tool as Playground. I think it's really interesting that when openAI shifted from their earlier iteration of this technology, which was Playground, and there were some other spinoffs from that as well, but that was basically a search format where you would enter a piece of text and then get a response, that when they shifted it to chat, it seemed to really take it to the next level in terms of the attention it was gathering. And I think it's rhetorically significant to think about that, because of the personalization, perhaps, the idea that you had an individual conversation partner. The way that they have the text scroll in ChatGPT, so as to make it look like the AI is "thinking" rather than pushing the output out when it's immediately available, I think is exceptionally cute. All of that reminds me a little bit of Eliza, one of the first sort of AI games you could play, where you try to guess whether or not there was another person on the other side of the chat box. It reminds me a bit of that. But I can certainly see why placing this technology inside of a chat window makes it so much more accessible and perhaps even more engaging than what we previously had. But the underlying technology, as far as I can see, is still GPT3, and it hasn't changed significantly, except for this mode of access.

Rebecca: How long has this tool been learning how to write or gathering content?

Marc: Well, that's a great question. It is really just a descendant of GPT3. And again, we don't really know this, because openAI isn't exactly open, despite their name. The training data for ChatGPT cuts off about two years ago. And of course, ChatGPT was launched last year at the end of November. So it's very recent, pretty up to date with some of that information too. You can always kind of check the language model and see how much it actually, as we say, knows about the world by what recent events it can accurately describe. It's really interesting how quickly people have freaked out about this. And building off of that, I think Bob's very right that this slight rhetorical change in the user interface to a chat, that suddenly people are able to actually interact with, set off this moral panic in education. You guys know this through the state of New York: New York City schools have now tried to ban it in the actual classroom, which I think is not going to work out very well. But it is certainly the theme we're seeing, not just in K through 12, but also in higher ed… seeing people talk about going back to blue books, going to AI proctoring services, which are just some of the most regressive things you could possibly imagine. And I don't want to knock people for doing this, because I know that they're frightened, and they probably have good reason to be frightened, because it's disrupting their practice. It's also, hopefully, the tail end of COVID, which has left us all completely without the capacity to deal with this. But I do want everyone to keep in mind, and Bob's really a great resource on this from his work with Wikipedia: if you're a young person using this and you have someone in authority telling you what a tool is, if you tell them that that tool is there to cheat, or is there to destroy their writing process or learning process, that first impression is going to be cemented in them for a very long time, and it's gonna be very hard to dissuade people of that. So really, what I've tried to do is caution people that we need to be not so panicked about this. That's much easier said than done.

Robert: Marc and I started giving a talk on our campus through our Center for Teaching and Learning and our academic innovations group in August, and we've just sort of updated it as we're invited to continue to give the talk. In it, we offer a couple of different ways for faculty to think about how this is going to impact their teaching. And one of the things that I offered back in August, and I think it still holds true, is to think about writing to learn and/or writing to report learning. Writing to learn is now going to mean writing alongside AI tools. And writing to report learning is going to be a lot trickier, depending on what types of questions you ask. So I think it's going to be a situation where, and I've already seen some of this work in the POD community, writing to report learning has to maybe change gears a bit and think about different types of questions to ask. And the types of questions will be those that are not easily replicated or answered in a general-knowledge sort of way, but that lean on specific things that you, as instructor, think are going to be valuable in demonstrating learning, but are also not necessarily part of a general knowledge base. So, for instance, if you're a student in my class, and we've had lots of discussions about… I don't know… quantum computing, and in a certain discussion session Marc threw out an idea about quantum computing that was specific, what I might do on my test is cite that as a specific example, remind students that we discussed it in class, and then ask them to write in response to parts of that class discussion. That way, I could be touching base with something that's not generally replicable and easily accessible to AI, but I can also ask a question that asks my students to demonstrate knowledge about general concepts. And so, if both elements are there, then I probably know that my short-answer question was authentically answered by my students. If some are not, then I might have questions. So I think it's gonna be about tweaking what we're doing and not abandoning what we're doing. But it's really a tough moment right now, because as soon as we say one thing about these technologies, well, then they iterate and they evolve. It's just a really competitive landscape for these tool developers, and they're all trying to figure out a way to develop competitive advantage. So they have to distinguish themselves from their competitors, and we can't predict what ways they will do that. So it's going to be a while before, I think, this calms down for writing faculty specifically and for higher education faculty generally, because, of course, writing is central to every discipline and what we do, or at least that's my bias.

Rebecca: So I’m not a writing faculty member. I’m a designer and a new media artist. And to me, it seems like something that could be fun to play with, which is maybe a counter to how some folks might respond to something like this. Are there ways that you can see a tool like this being useful in helping or advancing learning?

Robert: So, we've talked about this a bit. I really think that the general shape of the response, in writing classes specifically, is about identifying specific tools for specific writing purposes in specific stages. So if we're in the invention stage, and we're engaging a topic and you're trying to decide what to write about, maybe dialoguing with openAI with some general questions is going to trigger some things that you're going to think about and follow up on. It could be great. You know, Marc was one of the first people to point out, I think it was Marc who said this, that for folks who have writer's block, this is a real godsend, or could be. It really helps get the wheels turning. So we could use it in invention, we can use it in revision, we can use it to find sources once we already have our ideas. So identify specific AI iterations for specific purposes inside of a larger project. I think that's a method that's going to work, and it's going to be something that gets toward that goal we like to state in our AI Task Force on campus here, which is helping students learn to work alongside AI.

Marc: Yeah, that's definitely how I feel about it too. And to kind of echo what Bob's saying, there's a lot more that you can do with a tool like this than just generate text. I think that kind of gets lost in the hype that you see with ChatGPT and everything else. I mentioned before that Whisper was another neural network that they launched quietly back at the end of September, start of October of last year, that works by transcribing uploaded speech. It's multilingual, so you can actually use it almost like a universal translator in some ways. But the thing that's outstanding about it is when you use it with the old GPT3 Playground… I say the old GPT Playground like it's not something that's still useful right now… it uploads the entire transcript of a recording into the actual Playground. So you can actually input it into the AI. Think about this from a teaching perspective, especially for students who have to deal with lecture and want a way to organize their notes in some way, shape, or form: they're able to do that by simply issuing a command to summarize the notes, to organize them. You can synthesize them with your past notes, even come up with test questions for an essay you need to write or an exam you're going to have. Now from a teaching perspective, as someone who tries to be as student-centric as possible, that's great, that's wonderful. I also realize those people who are still wedded to lecture are probably going to look at this like another moral panic: I don't want my students to have access to this, because it's not going to help them with their note-taking skills, I don't want them to be falling asleep in my class, as if they were staying awake to begin with, so I'm going to ban this technology. So we're going to see lots of little areas of this pop up throughout education. It's not just going to be within writing, it's going to be in all different forms, the different ways… and I'm right there with you on using this tool to really help you begin to think and design your own thought process as you're going through a writing project; some people are using it for art, some people use it for coding, it's really up to your imagination how you'd like to do it. The actual area that we're looking at has a name; I didn't even know it had a name until the developers we're working with, the guys at Fermat, told us. There's an article from a German university about this: "beyond generation" is what they call that form of use. So using your own text as the input to an AI and then getting brainstorming ideas, automatic summaries, using it to get counterarguments to your own notes. They use it also for images and all different other types of generations too. So it's really out there, and I think ChatGPT is just kind of sucking all the air out of the room, and rightly so, it's the new thing. It's what everyone is talking about, but so much has gone on, it really has, in these past few months. The entire fall semester I was emailing Bob like two or three times a week, and poor Bob was just like, "Just stop emailing me. Okay, we understand. I can't look at this either. We don't have time." But it really was just crazy. It really is.
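
As a concrete illustration of the lecture-notes workflow Marc describes, transcribing a recording with Whisper and then asking a GPT-3 model to organize the result, here is a minimal sketch. It assumes the open-source openai-whisper package and the legacy openai Python SDK; the file name, model choices, and prompt are illustrative placeholders.

```python
# A sketch of the workflow Marc describes: transcribe a recorded
# lecture with OpenAI's open-source Whisper model, then feed the
# transcript to a GPT-3 model to organize the notes. The file name,
# model names, and prompt are illustrative placeholders.
import whisper
import openai

openai.api_key = "sk-..."

# Step 1: transcribe the lecture recording locally with Whisper.
speech_model = whisper.load_model("base")
result = speech_model.transcribe("lecture.mp3")

# Step 2: ask GPT-3 to organize the transcript into study notes.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=(
        "Organize this lecture transcript into outline-style notes, "
        "then suggest three practice exam questions:\n\n"
        + result["text"]
    ),
    max_tokens=500,
)
print(response["choices"][0]["text"])
```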

John: What are some other ways that this could be used in helping students become more productive in their writing or in their learning?

Marc: It really is going to be up to whatever the individual instructor, and also the student, comes up with. If your process is already set in stone, and my process is set in stone as a writer, as I think most of ours are as we've matured, it's very difficult to integrate AI into that process. But if you're young, and you're just starting out, still maturing, that is a very different story. So we're going to start seeing ways our students are going to be using this within their own writing process, their own creative process, that we haven't really imagined. And I know that's one of the reasons why this is so anxiety producing: we say that there is a process, and we don't want to talk about the fact that this new technology can also disrupt that a little bit. I'll segue to Bob, because I think he's talked a little bit about this as well.

Robert: Yeah, one of the things we've agreed on in the group that Marc's co-leading is that we want to encourage our students to use the tools, full stop. Now, we want to help them interpret the usage of those tools. So really being above board and transparent about engaging the tools, using our systems of citation, struggling to cope as they are, but just saying at the beginning: use AI generators in my class; I need to know what writing is yours and what writing is not. But then designing assignments so you encourage limited engagements, which are quickly followed with reflection. So, oh gosh, who was it, Marc, the colleague, I think at NC State, who in his class last spring had students quote, unquote, cheat with AI?

Marc: Paul Fyfe, yes.

Robert: Yes, thank you. And so, in so many words, he basically designed the assignment so that students would have AI write their paper, and almost uniformly they said, "Please, let me just write my paper, because it'd be a lot simpler, and I would like the writing a lot more." So that type of engagement is really helpful, I think, because they were able to fully utilize the AI that they could access, try a bunch of different purposes with it, a bunch of different applications, and then form an opinion about what its strengths and weaknesses were. And they pretty quickly saw its limitations. So, to specifically answer your question, John, I do think it can be helpful with a wide range of tasks. Again, in the invention stage, if I just have an idea, I can pop the idea in there and ask for more information, and I'll get more information. Hopefully it will be reliable. But sometimes I'll get a good deal of information and it'll encourage me to keep writing. There are AI tools that are good at finding sources, and there are AI tools that will help you shift voice. We've seen a lot of people do some fun things with shifting voice. I can think of a lot of different types of writing assignments where I might try to insert voice, and people would be invited to think about the impact of voice on the message and on the purpose. And let's not forget, one of the things that irks Marc and myself is that a lot of our friends in the computer science world think of writing as a problem to solve. We don't think of writing that way. But, as I said to Marc the other day when we were talking about this, if I'm trying to write an email to my boss in a second language, writing is a problem for me to solve. And so Grammarly has proven to us that there are a large number of people in our world who need different levels of literacy in different applications with different purposes, and they're willing to pay for some additional expertise. So I had tried to design a course to teach in the fall where we were to engage AI tools, specifically in a composition class, and I had to pull the plug on my own proposal because the tools were evolving too quickly. Marc and Marc's team solved the riddle, because they decided that they could identify the tools on an assignment basis, so it would be a unit within the course. And when they shrank that timeline, they had a better chance that the tools they identified at the beginning of the unit would still be relatively the same by the time they got to the end of the unit. So: get a menu or a suite of different AI tools that you want to explore, explore them with your students, give them spaces to reflect, always make sure that you're validating whatever is being said if you're going to use it, and then always cite it. Those are the ground rules that we're thinking about when we're engaging the different tools. And then, I don't know, it can be fun.

Marc: You mean writing can be fun? I’ve never heard such things.

Rebecca: It would be incredible. One of the things that I hear you underscoring, related to citations, was making me think about the ways that I have students already using third-party materials in a design class, or the way we use third-party materials when we're writing a research paper with citations. We have methods for documenting these things and making it clear to an audience what's ours and what's not. So it's not like it's some brand new kind of thing that we're trying to do in terms of documenting that or communicating that to someone else. It's just adapting it a bit, because it's a slightly different thing that we're using, a different third-party tool or third-party material. I have my students write copyright documentation for things that they're doing: what's the license for the images that they're using that don't require attribution? I go through the whole list, the fonts that they're using and the licenses for those. So for me, this seems like an obvious next step, a way that that same process of providing attribution or documentation would work well in this atmosphere.

Robert: I think the challenge, and Marc and I have talked about this before, is when you shift from a writing support tool to a writing generation tool. Most of us aren't thinking about documenting the spell checker in Microsoft Word, because we don't see that as content that is original in some way, right? But it definitely affects our writing. Nor do we cite smart compose, Google's sentence-completion tool. But how do you know when you've gone from smart compose providing just a correct way to finish your own thought, to smart compose giving you a new thought? That's an interesting dilemma. If we can just take a wee nip of schadenfreude, it was interesting to see that a machine learning conference recently had to amend its own paper submission guidelines, Marc was pointing this out to me, to say: "if you use AI tools, you can't submit." And then they had to try to distinguish between writing generators and writing assistants. That's just not an easy thing to do. But it's going to involve trust between writers and audiences.

Marc: Yeah, I don't envy the task of any of our disciplinary conventions trying to do this. We could invest some time in doing this with ChatGPT, or thinking about this, but then it's not even clear if ChatGPT is going to be the end of the road here. We're talking about this as just another version of AI and how we would do that. I've seen some people arguing on social media that a student, or anyone who is using an AI, should then track down the idea that the AI is spitting out. And I think that's incredibly futile, because it's trained on the internet; you don't know how the idea came about. And that's one of the really big challenges with this type of technology: it breaks the chain of citations that was used to, for lack of a better word, generate the text. I was gonna say to show knowledge, but it can't really show knowledge; it's just basically generated an idea, or mimicked an idea. So that really is going to be a huge challenge that we're going to have to face and think about. It's going to be something that will require a lot of dialogue between ourselves and our students, and also thinking about where we want them to use this technology. I think for right now, if you want to use a language model with your students, or invite them to use it, tell them to reflect on that process, as Bob mentioned earlier. There are some tools out there, LEX is one of them, where you can actually track what was built in your document with the AI, which will sort of glow and be highlighted. So there are going to be some tools on the market that will do this. It is going to be a challenge, though, especially when people start going wild with it, because when you're working with AI, it just takes a few seconds to generate a thing, and keeping track of that is going to require a great deal of trust with our students. You really are going to have to sit down and tell them, "Look, you're gonna have to slow down a little bit, and not let the text generations take over your thinking process and your actual writing process."

Robert: Speaking of process, right now I'm working on a project with a colleague in computer science. We're looking at that ancient technology, Google smart compose. And much to my surprise, I couldn't find literature where anyone had really spent time looking at the impact of the suggestions on open-ended writing. I did find some research that had been done on smaller writing tasks. For instance, there was a project that asked writers to compose captions for images, but I didn't see anything longer than that. So that's what we did in the fall: we got 119 participants, and we asked them to write an open-ended response, a short essay essentially, in response to a common prompt. Half of the writers had Google smart compose enabled, and half didn't. And we're going through the data now to see how the suggestions actually affect writers' process and product. We're looking at the product right now. One of our hypotheses is that the Google smart compose participants will have writing that is more similar to each other's, because essentially they were given similar suggestions about how to complete their sentences. And we expect that in the non-smart-compose-enabled population we'll find more lexical and syntactical diversity in the writing products. On the writing process side, we're creating, as far as I know, new measures to determine whether writers accept suggestions, edit suggestions, or reject suggestions, and the time spent on each; we all usually do some of all three. So we're trying to see if there are correlations between the amount of time spent and then, again, the length of text and the complexity of text, because if you're editing something else, you're probably not thinking about your own ideas and how to bring those forward. But overall, and because we're not able to really see what's happening inside smart compose, we're having to treat it as a black box, what we're hoping to suggest is that our colleagues in software development start inviting writers into the process of articulating their writing profiles. So let's say, for instance, you might see an iteration in the future of Google smart compose that says, "Hey, I noticed that you're rejecting absolutely everything we're sending to you. Do you want to turn this off?" [LAUGHTER]
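
One simple way to operationalize the lexical-diversity comparison Robert describes is a type-token ratio over each essay. The sketch below is purely illustrative of that idea, under the assumption of plain word tokenization; it is not the study's actual analysis, and the essay lists are hypothetical placeholders.

```python
# An illustrative way to compare lexical diversity between the
# smart-compose and control groups: mean type-token ratio per group.
# This is a sketch of the idea, not the study's analysis pipeline.
import re
from statistics import mean

def type_token_ratio(text: str) -> float:
    """Unique words divided by total words (a crude diversity measure)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# Hypothetical essay lists for each condition.
smart_compose_essays = ["...essay text one...", "...essay text two..."]
control_essays = ["...essay text three...", "...essay text four..."]

print("smart compose:", mean(type_token_ratio(e) for e in smart_compose_essays))
print("control:      ", mean(type_token_ratio(e) for e in control_essays))
```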

Rebecca: Yes. [LAUGHTER]

Robert: Or, "I noticed that you're accepting things very quickly. Would you like us to increase the amplitude and give you more, more quickly?" Understanding those types of interactions and preferences can help them build profiles, and the profiles can then hopefully make the tools more useful. I know that they, of course, do customize suggestions over time, so I know that the tool does grow. As was asked earlier, how long has it been learning to write? Well, these tools learn to write with us. In fact, those are features that Grammarly competes with its competitors on: our tool will train up more quickly. At any rate, what does it mean to help students learn to work alongside AI? When it comes to writing, I believe part of what it's going to mean is helping them understand more quickly what the tool is giving them, what they want, and how they can harness the tool to their purposes. And until the tools are somewhat stable, and until writers are invited into the process of understanding the affordances of the tool and the feature sets, that's just not possible.

John: Where do you see this moral panic as going? Is this something that’s likely to fade in the near future? And we’ve seen similar things in the past. I’ve been around for a while. I remember reactions to calculators and whether they should be used to allow people to take square roots instead of going through that elaborate tedious process. I remember using card catalogs and using printed indexes for journals to try to find things. And the tools that we have available have allowed us to be a lot more productive. Is it likely that we’ll move to a position where people will accept these tools as being useful productivity tools soon? Or is this something different than those past cases?

Marc: Well, I think the panic has definitely set in right now. And I think we're going to be in for waves of hype and panic. We've already seen it from last year. I think everyone got a huge dose of it with ChatGPT, but we were in panic and hype mode when we first came across this in May, wondering what this technology was, how it would actually impact our teaching, how it would impact our students. There's a lot of talk right now about trying to do AI detection. Most of the software out there is trying to use some form of AI to detect AI. They're trying to use an older version of GPT called GPT2, which was open sourced and openly released before openAI decided to lock everything down. Sometimes it will pick up AI-generated text, sometimes it will mislabel it. I obviously don't want to see a faculty member bring a student up on academic dishonesty charges based on a tool that may or may not be correct, based off of that sort of framework. TurnItIn is working on a process where they're going to try to capture more data from students than they already have. If they can capture big enough writing samples, they can then use those to compare your work to an AI's, or to someone who's bought a paper from a paper mill or contract cheating, because, of course, a student's writing never changes over the course of their academic career, and our writing never changes either. It's completely silly. We've been conditioned, though, when we see new technologies come along, to expect some equivalent that mitigates their impact on our lives. We have this new thing, it's disruptive. Alright, well, give me the other thing that gets rid of it so I don't have to deal with it. I don't think we're going to have that with this. I'm empathetic to people; I know that's a really hard thing for them to hear. Again, I made the joke about the New York City school district banning this, but from their perspective, those people are terrified, and I don't blame them. When we deal with higher education, for the most part, students have the skill sets that they're going to be using for the rest of their lives; we're just expanding them and preparing students to go into professional fields. If this is a situation where you're talking K through 12, where a student doesn't have all the reading or grammatical knowledge they need to be successful and they start using AI, that could be a problem. So I think talking to our students is the best way to establish healthy boundaries and get them to understand how they want to use this tool for themselves. Students, as Bob mentioned, and as Paul Fyfe found in his research, are setting their own boundaries with this; they're figuring out that this is not working the way the marketing hype is telling them it is. So we just have to be conscious of that and keep these conversations going.
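
The GPT-2-based detectors Marc mentions typically rest on perplexity: text that a language model finds highly predictable is flagged as more likely machine-generated. Here is a minimal sketch of that idea using the Hugging Face transformers library; it is illustrative only, and, as Marc notes, scores like this routinely mislabel human writing.

```python
# A minimal sketch of the perplexity idea behind GPT-2-based detectors:
# low perplexity (the model finds the text very predictable) is treated
# as a signal of machine generation. Illustrative only; such scores
# often mislabel human writing, which is Marc's point above.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

print(perplexity("The quick brown fox jumps over the lazy dog."))
```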

Robert: Writing with Wikipedia was my panic moment, or my cultural panic moment. And my response then was much the same as it is now: cool, let's check it out. And Yochai Benkler has a quote, and I don't have it exactly right in front of me, but he says something like, all other things being equal, things that are easier to do are more likely to get done. And the second part, he says, is that all other things are never equal. So that was the point of Wikipedia, right? People really worried about commons-based peer production and collaborative knowledge building, and about inaccuracies and biases, which are still there, creeping their way in and displacing Encyclopedia Britannica and peer-reviewed resources. And they were right to be worried, because Benkler is right: it's a lot easier to get your information from Wikipedia, and if it's easier, that's the way it's going to come. You can't do a Google search without pulling up a tile that's been accessed through Wikipedia. But the good news is that Wikipedia is now known as the good grown-up of the internet. The funny thing is that the community seemed so fractious and sharp-elbowed at first about who was right in producing a Wikipedia page about Battlestar Galactica. Well, it grew over time, and more and more folks in higher education and more and more experts got involved, and the system improved. It's uneven, but it is still the world's largest free resource of knowledge. And because it's free, because it's open and very accessible, it enters into our universe of what we know. I think the same thing holds here, right? It's as easy to use as it is now, and the developers are working on ways to make it easier still. So we're not going to stop this; we've just got to think about ways that we can better understand it and indicate, where we need to, that we're using it, how we're using it, and for what ends and purposes. And your question, John, I think was around, or at least you used, productivity. I don't agree with his essay, and I certainly don't agree with a lot that he's done, but Sam Altman, one of the OpenAI co-founders, has this essay whose basic argument is that in the long run, what AI is doing is reducing the cost of labor, and that will affect every aspect of life; it's just a matter of time before AI is applied to every aspect of life. And so then we're dropping costs for everyone, and his argument is that we are therefore improving the lives and living standards of everyone. I'm not there. But I think it's a really interesting argument if you take that long a view. Now, you mentioned earlier technologies… the calculator moment for folks in mathematics. My personal preference would be to have someone else's ox get gored before mine is, but we're up, so we have to deal with it. And our friends in art are dealing with it as well. It's just a matter of time before our friends in music, and obviously our friends in motion capture, are dealing with it; I think you're handling it in design as well. So it's just a matter of time before we all figure it out, and we have to learn from each other in terms of what our responses were. And I think there'll be some general trends: we might as well explore these tools, because this is the world where our students will be graduating.
And so we should help them understand the implications, the ethical usage, the citation systems and purposes. It'd be great if we had partners on the other side who would telegraph to us a little bit more about what the scope and the purpose and the origins of these tools are. But we don't have that just yet.

Marc: I agree completely with what Bob said, too.

Rebecca: One of the things that's been interesting in the arts is the conversation around copyright and what's being input into the data sets initially, which is often copyright-protected material, and then, therefore, what's getting spit out is derivative of that. And so there become some interesting conversations around whether or not that's fair use, whether or not that's copyright violation, whether or not that's plagiarism. So I'm curious to hear your thoughts on whether similar concerns are being raised over ChatGPT or other systems that you've been interacting with.

Marc: Writing's a little bit different. I think there are some pretty intense anti-AI people out there who basically say that this is just a plagiarism generator. I see what they're saying, but the terminology of plagiarism doesn't really make sense, because it's not stealing from one source; it's using vast, massive chunks of data from the internet, and some of that data doesn't even have a clear source. So it's not even really clear how the output traces back. But that is definitely part of the debate. Thank God I'm not a graphic artist, because I've talked to a few friends of mine who are in the graphic arts, and they're not dealing with this as well as we are, to say the least. And you can kind of follow along with some of the discourse on social media; it's been getting intense. But I do think that we will see some movement within all these fields about how they're going to treat generative text, generative images, generative code, and all of that. In fact, openAI is being sued now in the coding business, because their Copilot product was supposedly capable of reproducing an entire string of code, not just generating it, but reproducing it from what it was trained on. So I think it is an evolving field, and we're gonna see where our feet land. But for right now, the technology is definitely moving underneath us as we're talking about all this, in terms of both plagiarism and copyright and all the things. And I'm with Bob, I want to be able to cite these tools and be able to understand them. I'm also aware that if we start bringing really hardcore citation into this, we don't want to treat the technology as a person, right? You don't necessarily want to treat the ideas as coming from the machine; we want to treat this as "I used this tool to help me with this process." And that becomes complicated, because then you have to understand the nuance of how it was used and in what sort of context. So yeah, it's going to be the wild west for a while.

Robert: I wanted to turn it back on our hosts for a second, if I can, and ask Rebecca and John a question. I've now remembered the title of Sam Altman's essay: it's "Moore's Law for Everything." That really, I think, encapsulates his point. What do y'all think, as people in higher education? Do you think this is unleashing a technology that's going to make our graduates more productive in meaningful ways? Or is it unleashing a technology that questions what productivity means?

Rebecca: I think it depends on who is using it.

John: …and how it’s being used.

Rebecca: Yeah, the intent behind it… I think it can be used in both ways. It can be a really great tool to support the work and things that we're exploring and doing, and it also presents challenges. And people are definitely going to be tempted to use it to shortcut things in ways that maybe don't make sense to shortcut, or that undermine their learning or undermine contributions to our knowledge.

John: And I'd agree with pretty much all of that: it has the potential for making people more productive in their writing by helping them get past writer's block and other issues. And it gives people a variety of ways of phrasing something that they can then mix together in a way that better reflects what they're trying to say. I think it's a logical extension of many of those other tools we have, but it is also going to be very disruptive for those people who have very formulaic, very open-ended types of assignments; those are not going to be very meaningful in a world in which we have such tools. But on the other hand, we're living in a world in which we have such tools, and those tools are not going to go away, and they're not going to become less powerful over time. And I think we'll have to see. Whenever there's a new technology, we have some people who really praise it because it's opening up wonderful possibilities, the way television was going to make education universal in all sorts of wonderful ways, and the internet was going to do the same thing. Both have provided some really big benefits. But there are often costs that are unanticipated, and often benefits that are unanticipated, and we have to try to use them most effectively.

Robert: So one of the things I've appreciated about this conversation is that you guys have made me think even more, so I want to follow up on what you're saying and maybe articulate my anxiety a little better. Emad Mostaque, I think is his name, the CEO of Stability AI, was on Hard Fork. And I listened to the interview, and he basically said, "Creativity is too hard and we're going to make it easy. We're going to make people poop rainbows." He did use the phrase poop rainbows, [LAUGHTER] but I don't remember if that was exactly the setup. And I'm not an art teacher, but I'm screaming at the podcast: no, it's not just about who can draw the most accurate version of a banana in a bowl, it's the process of learning to engage the world around you through visual representation, and I'm not even an art teacher. So that's my fear for writing. I guess my question for everybody here is: do you think these tools will serve as a barrier, because they'll provide a fake substitute for the real thing that we then have to help people get past? Or will that engagement with the fake thing get their wheels turning and serve as a stepping stone to the deeper engagement with literacy or visual representation?

Rebecca: I think we already have examples of this, where the scope of what someone might do appears, looks, and feels really similar to something someone already created. Templates do that. Any sort of common code set that people might use to build a website, for example: those sites all then have similar layouts and designs. These things already exist, and that may work in a particular area. But there are also examples in that same space where people are doing really innovative things. So there is still creativity. In fact, maybe it motivates people to be more creative, because they're sick of thinking the same thing over and over again. [LAUGHTER]

John: And going back to issues of copyright, that’s a recent historical phenomenon. There was a time when people recognized that all the work that was being done built on earlier work, that artists explicitly copied other artists to become better and to develop their own creativity. And I think this is just a more rapid way of doing much of the same thing, that it’s building on past work. And while we cite people in our studies, those people cited other people who cited other people who learned from lots of people who were never cited, and this is already taking place, it’s just going to be a little bit harder to track the origin of some of the materials.

Marc: Yeah, I completely agree. I also think that, caught up in our own sort of disciplinary world of higher education, we forget that this tool may not be as disruptive to us, or as beneficial to us, as it would be somewhere else, in some other sort of context. Think about the global South, which is lacking resources: a tool like this, that is multilingual, can actually help under-resourced districts, or under-resourced entire countries in some cases. That could have an immense impact on equity, in ways that we haven't seen. That said, there are also going to be bad actors using the technology to do lots of weird, crazy things. And you can follow along with this live on Twitter, which is what I've been doing; every day there's another thing. In fact, one guy today offered anyone who's going to argue a case before the Supreme Court a million dollars if they put in their Apple AirPods and let the AI argue the case for them. And my response is, if you ever want the federal government to ban a technology at lightning speed, that is the methodology for doing so. But there are going to be stunts; there are already stunts. Annette Vee was writing about GPT4chan, where a developer used an old version of GPT on 4chan, the horribly toxic message board, and deployed that bot for about three days, during which it posted 30,000 times. In 2016, we had the election issues with the Russians coming through; now you're going to have people doing this with chatbots. So it can help with education, definitely, but I think that we're kind of small potatoes compared to the way the rest of the world is going to be looking at this technology. I hope it's not in that way, necessarily; I hope that they can get some safety guardrails put in place. But it's definitely gonna be a wild ride, for sure.

John: Being an economist, one of the things I have to mention in response to that is that there are a lot of studies finding that a major determinant of the level of economic growth and development in many countries is the degree of ethno-linguistic fractionalization: the more languages there are, and the more separate cultures you have within a society, the harder it is to expand. So tools like this can help break those barriers down and can unleash a lot of potential growth and improvement in countries where there are significant barriers to that.

Marc: Absolutely. I just really want to re-emphasize the point that I brought up at the beginning, especially now in the wake of what Bob said. I was not introduced to Wikipedia in a way that made it interesting or anything else. I was introduced to it as a college student by a professor saying to me, "This is a bad thing. This is not going to be helpful to you. Do not use this." Keep in mind the power that you have as an educator when you're talking about this with your students: you are informing their decisions about the world, about what this tool actually is, when you're introducing and talking about it with them, and when you're actually putting a policy in place yourself saying, "This is banned." And I just want to make sure that everyone is really thinking about that now with this, because we actually do have a lot of power here. I know we feel completely powerless in some ways; it's a little odd how the discussions about this have gone. But we actually have a lot of power in how we shape the discussion, especially with our students.

Robert: Yeah, that's a great point, and I'm glad you raised it. My question is, I wonder, John, as an economist, and also what you think, Rebecca: do you guys buy the Moore's Law for Everything argument? So 20, 30 years from now, does generative AI increase the standard of living for people globally?

John: Well, I think it goes back to your point that if we make things easier to do, it frees up time to allow us to do other things and to be more creative. So I think there is something to that.

Rebecca: Yeah. And sometimes creativity is the long game. It’s something that you want to do over a period of time and you have to have the time to put into it. I think it’s an interesting argument.

John: I have been waiting for those flying cars for a long time, but at least now we’re getting closer to self-driving cars.

Robert: I was about to say they gave you a driverless car instead. [LAUGHTER]

John: But, you know, a driverless car frees up time where you could do other things during that time, which could be having conversations or could be reading, it could be many things that might be more enjoyable than driving, especially if there’s a lot of traffic congestion.

Rebecca: …or you could take a train, in which case you're also not driving, John.

John: …and you’re probably not in the US, [LAUGHTER] or at least not in most parts of the US, unfortunately.

Rebecca: Well, we always wrap up by asking what’s next?

Marc: What's next? Oh, goodness. Well, again, like I said, there are going to be waves of hype and panic; we're in the "my students are going to cheat" phase. The next wave is when educators actually realize they can use this to grade essays, grade writing, and grade tests. That's going to be the next "oh, wait" moment we're going to see, and it will bring both hype and panic too. And to me, that's the next conversation we need to have, because we're gonna have to establish these boundaries, kind of in real time, about what we actually want to do with this. They are talking about GPT4, the next version of this. It's supposedly going to be bigger than ChatGPT and more capable; you know all the hype that gets repeated about this sort of thing. But 2023 is probably going to be a pretty wild year. I don't know what's gonna go beyond that, but I just know that we're going to be talking about this for at least the next 12 months, for sure.

Robert: I agree with Marc. I think, in our discipline at least, the next panic, or, I don't know, jubilee, will be around automated writing evaluators, which exist and are commercially available. But the big problem is the research area known as explainable AI, which is to me tremendously fascinating: you can build neural nets that will find answers to how to play Go, that after I don't know how many hundreds or even thousands of years that humans have played Go, find winning strategies that no one has ever found before, but then not be able to tell you how they were found. That's the central paradox. I would like to say I hope explainable AI is next. But I think, before we get explainable AI, we're gonna have a lot more disruptions, a lot more ripples, when unexplainable AI is deployed without a lot of context.

John: One of the things I've seen popping up on Twitter, with those AI detectors, is that apparently ChatGPT, if you ask it to rewrite a document so it cannot be detected by the detectors, will rewrite it in a way where it comes back with a really low score. So it could very well be an issue where we're gonna see some escalation. But that may not be the most productive channel for this type of research or progress.

Rebecca: Sounds like many more conversations of ethics to come. Thank you so much for your time and joining us.

Marc: Well, thank you both.

John: Well, thank you. Everyone has been talking about this and I’m really glad we were able to meet with you and talk about this a bit.

Robert: Yes. Thank you for the invitation. It’s been fun to talk. If there’s any way that we can add to the conversation as you go forward, we’d be happy to be in touch again. So thank you.

John: I’m sure we’ll be in touch.

Marc: For the next panic, we're always available. [LAUGHTER]

John: The day’s not over yet. [LAUGHTER]

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

[MUSIC]