Faculty concerns over student use of AI tools often focus on issues of academic integrity. In this episode, Marc Watkins joins us to discuss the impact that the use of AI tools may have on student skill development. Marc is the Assistant Director for Academic Innovation at the University of Mississippi, where he helped found and currently directs the AI Institute for Teachers.
Show Notes
- Marc Watkins’ substack: Rhetorica
- Marc Watkins’ Beyond ChatGPT series:
- No one is Talking About AI’s Impact on Reading – May 3, 2024
- AI’s promise to Pay Attention For You – May 10, 2024
- What Does Automating Feedback Mean for Learning? – May 17, 2024
- Why Are We In a Rush To Replace Teachers With AI? – May 24, 2024
- What’s At Stake When We Automate Research Skills – May 31, 2024
- The Price of Automating Ethics – June 07, 2024
- AI Instructional Design Must Be More Than A Time Saver – June 14, 2024
- Explainpaper
- Google’s NotebookLM
- Perusall
- Hypothesis
- Turbolearn.ai
- MyEssayFeedback
- Project Astra
- OpenAI
- Khan, Salman (2024). Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing). Penguin.
- Elicit
- Consensus AI
- ProDream AI
- SynthID
Transcript
John: Faculty concerns over student use of AI tools often focus on issues of academic integrity. In this episode, we explore other impacts that the use of AI tools may have on student skill development.
[MUSIC]
John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.
Rebecca: This podcast series is hosted by John Kane, an economist…
John: …and Rebecca Mushtare, a graphic designer…
Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.
[MUSIC]
John: Our guest today is Marc Watkins. Marc is the Assistant Director for Academic Innovation at the University of Mississippi, where he helped found and currently directs the AI Institute for Teachers. Welcome back, Marc.
Marc: Thank you guys. I really appreciate it. I think this is my third time joining you all on the pod. This is great.
Rebecca: Today’s teas are:… Marc, are you drinking any tea?
Marc: I am. I’ve gotten really into some cold brew tea and this is cold brew Paris by Harney and Sons. So very good on a hot day.
John: We have some of the non-cold brewed version of that in our office because the Associate Director of the teaching center enjoys that Paris tea so we keep it stocked pretty regularly. It’s a good tea.
Rebecca: Yeah.
John: My tea today is a peppermint spearmint blend.
Rebecca: Sounds nice and refreshing. I have a Brodies Scottish afternoon tea, and it’s hot. And it’s like 95 here. And I’m not really sure why I’m drinking hot tea in this weather. But I am. [LAUGHTER]
John: Well, I am here in North Carolina, and it’s 90 degrees. So it’s much cooler down here in the south, which is kind of nice. [LAUGHTER] And actually, it’s 71 degrees in this room because the air conditioning is functioning nicely.
Rebecca: Yeah, my studio at home… the one room where the air doesn’t work. So hopefully I don’t melt in the next hour.
John: So we’ve invited you here today to discuss your recent Beyond ChatGPT substack series on the impact of generative AI on student learning. Many faculty have expressed concerns about academic integrity issues, but the focus of your posts has been on how student use of AI tools might impact skill development. And your first post in this series discusses the impact of AI on student reading skills. You note that AI tools can quickly summarize readings, and that might cause students to not read as closely as they might otherwise. What are some of the benefits and also the potential harms that may result from student use of this capability?
Marc: When I first really got into exploring generative AI, really before ChatGPT was launched, there were a lot of developers working in this space, and everyone was playing around with OpenAI’s API access. And so they’re like, ”Hey, what would you like to build?” And people would go on to Twitter, which is now X, and Discord and basically say, “I would like this tool and this tool.” And one of the things that came about from that was a reading assistant tool called Explainpaper. I think I first played around with this in the fall of 2022, and then deployed it with students in the spring of 2023. The whole idea I had with this, and with that design, was to help students really plow through vast amounts of papers and texts. So students that have hidden disabilities, or announced disabilities with reading and comprehension, and also students working on language acquisition, if you’re working in a second or third language, this type of tool can be really helpful. So I was really excited, and I deployed this with my students in my class thinking that this is going to help so many students that have disabilities get through a very challenging text, which is why I set it up that way, and it did. The students initially reported to me that this was great. And I met with a lot of my students, and one of them said that she’d had dyslexia her whole life and never wanted to talk about it, because it was so hard, and this tool for her was a lifesaver. And so that was great. But then the other part of the class basically said, “Hey, I don’t have to read anything at all ever.” And they didn’t have any issues, they were just going to offload the close reading skills. And so I had to take a step back and say, “But wait, that’s not what we want to actually happen. We want you to use this if you hit a pain point in your reading process, and not completely offload that.” So this really became kind of a discovery on my part that AI can actually do that, it can generate summaries from vast amounts of texts. There are some really interesting tools out there right now: Google’s NotebookLM, you can actually upload, I think, 4 million words of your own text to it in 10 different documents, and it will summarize and synthesize that material for you. And like the other tools we played around with, like Explainpaper, it can change the summary it’s generating for the actual document to your own reading level. So you could be reading a graduate-level research paper, and if you’d like it to be read at an eighth-grade reading level, it will change the words and the language of that. So yeah, that could have helpful impacts on learning. It could also lead to a lot of de-skilling of those close reading skills we value so much. So that’s really how this started, kind of coming up here too, and thinking about: “Man, this was such a wonderful tool. But oh my gosh, how is this actually being used? And how is this being marketed to students through social media?”
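For readers who want a concrete sense of how a reading tool can retarget a summary to a chosen reading level, here is a minimal sketch. It is not Explainpaper’s or NotebookLM’s actual code; the model name and prompt wording are illustrative assumptions, and only the OpenAI Python SDK call pattern is taken as given.

```python
# A minimal sketch of reading-level-targeted summarization.
# NOTE: illustrative only, not the implementation of any tool named in the
# episode; the model name and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def summarize_at_level(text: str, reading_level: str = "8th grade") -> str:
    """Summarize `text`, rewritten for the requested reading level."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice; any chat model would work
        messages=[
            {
                "role": "system",
                "content": (
                    f"Summarize the user's text at a {reading_level} reading level. "
                    "Keep the key claims and drop technical jargon."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


# Example use:
# print(summarize_at_level(open("research_paper.txt").read(), "8th grade"))
```

The same pattern underlies the de-skilling worry Marc raises: the only thing standing between “a summary at my level” and “I never read the original” is how the student chooses to use the output.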
Rebecca: How do you balance some of these benefits and harms?
Marc: By banging my head against the wall and screaming silently into a jar of screams?
Rebecca: I knew it.
Marc: Yeah, the problem with the jar of screams is every time I open it, some of the screams I put in there before escape before the new ones can come in. That’s a great question. So every single one of these use cases we’re gonna talk about today has benefits, but also carries this vast sort of terror of offloading the skills we would associate with them that are crucial for learning. The most important thing to do at this stage is just to make sure faculty are aware that this can happen and that this is a use case; that’s the first step. Then the next step is building some friction into the learning process that’s already there. So for reading, as an example, something we usually do is assign close reading through annotation, whether that’s with a physical pen and paper, or you could use digital annotation tools like Perusall or Hypothesis to help you go through that. That slows down the process if you’re using AI, and really focuses on learning. So when I say friction, it’s not a bad thing; in point of fact, friction is actually sort of crucial for learning. The one challenge we’re faced with is that most of these tools are providing, or advertising, a friction-free experience for students. And we want to say to them, “Well, you may not want to offload these skills entirely, you want to make sure that you do this carefully.” The main thing I would think about with this is that I could never ban this tool even if I wanted to, because you don’t have any control over what students use to read outside of the three hours or so a week that you have in class with them. And it would be very beneficial for those students we discussed earlier, the ones facing all those issues, to use it. It’s just basically persuading them to use this in a way that’s helpful to them.
John: It reminds me a little bit of some of the discussions years back on the use of things like Cliff’s Notes for books and so forth, except now it’s sort of like a Cliff’s Notes for anything.
Marc: Indeed, Cliff’s Notes on demand for anything you want, wherever you want it, however you want it, too. And so how we could do that… what I said to my students at the time, to kind of shock them with this, is: “You know, what would your reaction be if I used this to read your essays, instead of going through and reading all of it, and just gave you a nice little generative summary?” And one of my students said, “Well, you can’t do that. That’s cheating, you’d be fired.” And I had to explain to them, no one even really knows that this exists yet. There are no rules. There’s no ethical framework. That’s something we’re going to have to come up with together, both faculty and students talking with each other about this.
Rebecca: It seems like the conversations you were having with students about how to maybe strategically use a tool like this, in this particular way, was an important part of harnessing the learning out of the tool, rather than the quote- unquote cheating aspect of the tool.
Marc: Oh, absolutely. Yeah, I mean, the thing we’ve been seeing with every single generative tool that’s been released, whether it’s for text generation, or for augmenting reading, or for some of the other use cases we’ll talk about here today, is that it does take a lot of time and effort on the part of the instructor to basically say, “Look, this is how this tool should be used to help you in this context in our classroom. How you use this outside of the classroom, that’s gonna be on you. But for our intents and purposes here, I would like to advocate that you use this tool this way. And here are the reasons why.” Now, asking every educator to do that is just too much of a lift, right? Because most of our folks are just so burnt out with everything else that they have to do. They’re focused on their discipline-specific concerns. It’s not really even on their radar that this technology exists, let alone how to actually deal with it. So part of what I’m trying to do with the series is obviously advocate for people to be aware of it. But the next step is going to be building some resources to show how they can use things like annotation, why that matters, and a very quick way for teachers, regardless of discipline, to start using it in their classes.
Rebecca: Your second post in this series examines the effect of AI tools on student notetaking skills. Can you talk a little bit about what might be lost when students rely on AI tools for notetaking and how it might be beneficial for some students as well?
Marc: Yeah, so a lot of these tools are using speech-to-text software to actually record a lecture, like we might be using right now on this podcast and a lot of other people are too, and how they’re being marketed to students is just to sort of lean back, take a nap, and have the AI listen to the lecture for you. And some of the tools out there, I think one of them is called Turbolearn.ai, will also synthesize the material, create flashcards for you, create quizzes for you too. So you don’t have to do that processing part within your mind, which is the key thing. So, notetaking matters. In fact, it can be an art form. I’m not saying that our students treat notetaking like an art form either, but there are examples where this is somewhat of an artistic talent, because you as the listener are not just taking down verbatim what’s being said, you’re making these critical choices, these judgments, to record what matters and put it in context of what you think you need to know. And that’s an important part of learning something. One thing that I did as a student, when I was at a community college in Missouri as a freshman, is I volunteered as a notetaker, and back then we did not have assistive technology. I had a pad of paper for myself for my notes, and I had a pad of paper with larger areas to write in for a student who was functionally blind. So I would take two sets of notes at the same time: one in a font that was my size, one in a larger font that he could read with an assistive magnifying glass with the one good eye that he had. It was shocking to me that this is what they did: the first part of the class was, “Do we have anyone who could help take notes?” I was like, “Okay, sure, I can.” And that’s how that student got his notes. Obviously, having a system like this in place helps those students so much more than having a volunteer notetaker rushing between one set of notes and another. And using it in an effective way that’s critical, that is thoughtful about how you’re going to engage with it, is meaningful for their learning, versus just hanging back, sitting down, and letting the AI listen to the lecture for you.
John: And another mixed aspect of it is the fact that it does create those flashcards and other things that could be used for some retrieval practice. That aspect, I think, could benefit a lot of students. And not all students maintain a very high level of focus, and they sometimes miss things. So I think there could be some benefits for everyone, as long as they don’t completely lose this skill. And I think maybe by reminding them of that, that could be useful in the same sort of way you talked about reading. But it’s a lot of things to remind students of. [LAUGHTER]
Marc: That’s a lot of things to remind them of, too. And keep in mind, it’s a lot of temptation to offload the skills of learning to something that supposedly promises to do that skill, that time-intensive skill, for you. I would love to have this employed in a giant conference somewhere. In fact, I’d love to go into the hallway of a conference and see all these transcripts come together at once on the overhead displays, almost like you’re waiting for a flight at the airport, and you’re just seeing the actual material go through there. That would be exciting for me, to see what other people are talking about… maybe I want to pop into this session and see that as well. So I think there are tons of legitimate use cases for this. It’s just, where are the sort of boundaries we can put in place with this? And that’s true for almost all of this. I was talking to my wife last night, and I said, “When I was growing up, we had a go kart that a few kids in our neighborhood shared, and it had a governor on the engine that made sure the go kart wouldn’t go past 25 miles per hour, because then you’d basically die, because it’s a go kart, it’s not really safe.” None of these tools or these technologies have a governor reducing their ability to impact our lives. And that’s really what we need. The thing that’s shocking about all this is that these tools are being released to the public as a grand experiment. And there are no real use cases or best practices about how you’re supposed to use this for yourself in your day-to-day life, let alone in education, in your teaching and learning.
Rebecca: I mean, anytime it feels like you can take a shortcut, it’s really tempting, the idea of turbo learning sounds amazing. I would love to learn really quickly. [LAUGHTER] But the reality is that learning doesn’t always happen quickly. [LAUGHTER] Learning happens from mistakes and learning from those mistakes.
Marc: Absolutely. It happens through learning from errors, and many times it happens through friction. We don’t want to remove that friction completely from the learning process.
John: In your third post in the series, you talk about automated feedback and how that may affect both students and faculty. How does the feedback generated by AI differ from human feedback, and what might be some of the consequences of relying on AI feedback?
Marc: Well, so automated feedback is something that generative AI models, especially large language models, are very good at. They take an input based on the student’s writing or assessment, and then the instructor can use a prompt that they craft to kind of guide the actual output. So the system I used in, I think, the spring of 2023, maybe it was the fall of 2023, was MyEssayFeedback, designed by Eric Kean. And he’s worked with Anna Mills in the past to try to make this as teacher friendly, as teacher centric, as possible, because I would get to design the prompts, and my students would then be able to get feedback from it. And I used this in conjunction with asynchronous peer review, because it’s an online class. So they got some human feedback, and they got some AI feedback. The thing that was kind of shocking to me was that the students really trusted the AI feedback because it’s very authoritative. It was very quick, and they liked that a lot. And so I did kind of get into the situation where I wanted to talk with them a little bit more critically about that, because some of the things I was seeing behind the scenes is that a lot of the students kept on cueing the system over and over again. They’d get one round of feedback from the tool, and they would try to go back and, I’m using air quotes here, “fix” their essay. And my whole point is their writing is not broken. It doesn’t need to be fixed. And generative AI is always going to come up with something for you to work on in your essay. And one student, I think, went back seven or eight times saying “Is it right now? Is it perfect?” And the AI would always say something new. And she got very frustrated. [LAUGHTER] And I said, “I know you’re frustrated, because that’s how the AI is. It’s not smart, even though it sounds authoritative, even though it’s giving you some advice that is useful to you. It doesn’t know you as a writer, it doesn’t understand what you’re actually doing with this piece.” So that crucial piece of AI literacy, knowing what the limitations are, is a big one. I think also, when you start thinking about how these systems are being sold in terms of agentic AI, we’re not there yet. None of these systems are fully agentic. That involves both strategic reasoning and long-term planning. When you can see that being put in place with students and their feedback, that can become very, very scary in terms of our labor as faculty, because there are some examples of some quirky schools, I think the Health Academy in Austin is one of them, that have adopted AI to both teach and provide feedback for students. And I know there are some other examples too, that talk about the AI feedback being better than human feedback in terms of accuracy. And that is something that we are going to have to contend with. But when I provide feedback for my students, I’m not doing it from an aggregate point of view, I’m not doing it to try to get to the baseline; I want to see my student as a human being and understand who that writer is, and what that means to them. That’s not saying that you can’t have a space for generative feedback, you just want to make sure you do so carefully and engage with it in a way that’s helpful for the students.
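As a rough illustration of the instructor-configured feedback Marc describes, here is a minimal sketch in the spirit of tools like MyEssayFeedback, not that tool’s actual implementation. The rubric wording and model name are assumptions; the point is that the instructor’s prompt, rather than the model’s defaults, shapes what feedback students receive.

```python
# A minimal sketch of instructor-prompted automated feedback.
# NOTE: illustrative only, not MyEssayFeedback's code; the prompt text and
# model name are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

INSTRUCTOR_PROMPT = (
    "You are giving formative feedback on a first-year essay draft. "
    "Comment only on thesis clarity and use of evidence. Ask two questions "
    "that push the writer to revise. Do not rewrite sentences or assign a grade."
)


def give_feedback(student_draft: str) -> str:
    """Return feedback shaped by the instructor's prompt rather than model defaults."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system", "content": INSTRUCTOR_PROMPT},
            {"role": "user", "content": student_draft},
        ],
    )
    return response.choices[0].message.content
```

Because the prompt forbids rewriting and grading, the tool stays in a formative role; loosening that prompt is exactly what turns “feedback” into the endless “fix it” loop Marc saw with his students.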
John: And might that interfere with students’ development of their own voice in their discipline?
Marc: I think so. And I think the question we don’t have an answer to yet is what happens when our students stop writing for each other or for us and start writing for a bot? What happens when they start writing for a robot? That’s probably going to change their voice and also maybe even some of their ideas and their outlook on the world too, in ways that I’m not all that comfortable with.
Rebecca: It does seem like there are real benefits to having that kind of feedback, especially for more functional things like grammar and spelling and consistency and that kind of thing. But when you lose your voice, or you lose the fresh ways of saying things or seeing things in the world, [LAUGHTER] you lose the humanity of the world, [LAUGHTER] it just starts to dissipate. And to me, that’s terrifying.
Marc: It’s terrifying to me too, to say the least. And I think that’s where we go back into trying to find, where’s the line here? Where do we want to draw it? And no one’s doing it for us. We’re having to come up with this largely on our own in real time.
Rebecca: So, speaking of terrifying [LAUGHTER] and lines, you note how large language models are developing into large multimodal models that simulate voice, vision, expression, and emotion. Yikes. How might these changes affect learning? We’ve already started digging into that a bit.
Marc: Yeah, so this is really about both Google’s demo, which is I think called Project Astra, and also OpenAI’s demo, which is GPT-4o (omni). Half of the GPT-4o model is now live for users, you can use the old version of the large language model for resources, but the other half is live streaming audio and video. And the demo used a voice called Sky that a few people, including Scarlett Johansson, said “that sounds an awful lot like me.” And even the creator of OpenAI, Sam Altman, basically said that they were trying to go for that 2013 film Her, where she starred as the chatbot opposite Joaquin Phoenix. And basically, this is just the craziest thing I can ever think of. If OpenAI goes through with the promise of this, it will be freely available and rate limited for all users. And you can program the voice to be anything you want, whenever you want. So yes, it’s gonna be gross and creepy, there are probably going to be people that want to date Sky or whoever it is. But even worse than that, there will probably be people who want to program this to be a political bot. They only want to learn from a liberal or conservative voice, they only want a voice that reflects their values and their understanding of the world. If they don’t like having a female teacher, maybe they only want a male voice talking to them. Those are some really, really negative downstream effects of this that go back into how siloed we are right now with technology anyway, that you can now basically create your own learning experience, or your own experience, and filter the entire world through it. We have no idea what that’s going to do to student learning. Sal Khan thinks that this is going to be a revolution, he wrote about this in Brave New Words. I think that this is going to be the opposite of that. I think it’s going to be more chaotic. I think it’s also going to become, for us as teachers, very difficult to try to police in our classes, because to my understanding, this is a gigantic privacy issue. If your students come up and you’re having a small group discussion, or anything else is going on, and one of them activates this new multimodal feature in GPT-4o and there’s audio streaming, they’re talking to the chatbot and everything else, anything that goes into that is probably going to be part of its training data in some way, shape, or form. Even in Google’s demo of this using Project Astra, part of the demo was actually having someone walk around a room in London; they stopped on a computer screen that was not the actual person’s computer screen, and it had some code running for encryption, and it read the encryption out loud. It said what it was. So there are some big time issues that are coming up here too. And it’s all happening in real time. We don’t even have a chance to basically say, “Hey, I don’t really want this,” versus “Oh, this has now been updated. I now have to contend with this live in my own life and in my classes.”
John: Going back to that issue of friction that you mentioned before, Robert Bjork and others have done a lot of work on the issue of desirable difficulties. And it seems like many of these new AI tools that are being marketed to students are designed to eliminate those desirable difficulties. What sort of impacts might that have in terms of student learning and long-term recall of concepts?
Marc: I love desirable difficulties too, and I think that’s a wonderful framing mechanism, outside of AI, to talk about this and why learning really matters. I think about the downstream consequences if this is widely adopted by students, which I think a lot of tech developers want to happen, rather than the sort of sporadic usage we’re seeing right now… to be clear to your audience, not every student is adopting this, not everyone’s using this, most of them are really not aware of it. But if we do see widespread adoption of this, it is going to have a dramatic impact on the skills we associate with reading, the skills we associate with creating model citizens who are critical thinkers, ready to go out into their roles and actually participate in them. If we really do get to the situation where they use these tools to offload learning, we’re kind of setting up our students to be uncritical thinkers. And I don’t think that’s a good idea.
Rebecca: Blah. [LAUGHTER] Can you transcribe that, John? [LAUGHTER]
John: I will. I had to do a couple of those. [LAUGHTER]
Marc: Well, blah is always a great version of that. Yeah. [LAUGHTER]
Rebecca: I only have sound effects.
John: One of the transcripts mentioned “horrified sound” as the transcript. [LAUGHTER]
Rebecca: I think that’s basically my entire life. These are the seeds of nightmares, all of them… seeds of giant nightmares.
Marc: Well, I think the thing that’s so weird about this is that, yes, and this is kind of getting into the dystopian version of it, but there are clearly good use cases for these tools if you can put some limitations on them. And if the developers would just sort of pause and think, not just as someone wanting to make money, but as someone who would use this tool to actually learn or to be useful in their lives: what areas do they want to design to actually preserve that sort of human judgment, that human friction in learning, that is going to be meaningful going forward?
Rebecca: Yeah, guardrails and ethics would be great.
Marc: Absolutely.
Rebecca: So a number of these tools are also designed to facilitate research. What’s the harm? What harm might there be when we rely on AI research tools more extensively, and get rid of that human judgment piece?
Marc: Yeah, I think one of the tools I used initially was Elicit, and Elicit’s probably the most impressive research tool that’s currently available. It is expensive to use, so it’s hard to sort of practice using it now; it was free initially. Consensus AI, I think, is the best ChatGPT plugin that you can use through the custom GPT store. But what Elicit does is go through hundreds, if not thousands, of research papers, and it automates the process of reading those papers for you, synthesizing that material, and giving you a sort of aggregate understanding of the state of knowledge, not just on your research question, but perhaps even across the field of research you’re trying to get into. So you’re basically offloading the process of research, which, for a researcher, takes hundreds upon hundreds of hours of dedicated work, and you’re trusting an algorithm that you can’t audit; you can’t really ask how it came up with its response. So yes, it’s a wonderful tool when it works and when it gives you an accurate response. Sometimes the responses are not accurate in the least. And if you haven’t read the material, it’s very difficult to pick up on where the machine is making an error. So yeah, there are a lot of issues if we just uncritically adopt this tool, versus putting some ground rules and ethics in place about how to use it, to support your research and to support your learning as well. And I think that’s what we want to strive for with all of these. And research is just one level of that.
Rebecca: We all have our own individual assumptions that we make when we do things, many of which we’re not aware of. But when we’re relying on tools like this, there’s many more layers of assumptions that we might not be aware of that are built into the software or into the tools or in the ways that it’s doing its analysis or synthesis that I think seems particularly concerning to me.
Marc: Yes, the bias, the sort of hidden biases that we’re not even aware of, and that I don’t think the developers are aware of either, is another layer that we can go into and think about. I say layer, because this really is like an onion: you peel back a layer, there’s another layer there, another layer, another layer, and you’re just trying to get to the point where it’s not so rotten anymore. And it’s very difficult to do, because the way this has been shaped is to accelerate those human tasks as quickly as you can, to reduce as much friction as possible, so that you can just sit back and get a response as quickly as you can. And in a lot of ways, the marketing of this basically describes it as almost like magic. Well, it’s not magic, it’s just prediction, using massive amounts of compute to get you to that point. But there are some serious consequences, I think, for our learning if we just uncritically adopt that.
John: Going back a bit, though, to early in my career, I remember the days of card catalogs and indexes where you had to read through a lot of material to find references. And then finding more recent work was almost impossible unless you happened to know of colleagues doing this work at some other institution, or you had access to the working papers of other institutions because of connections. The fact that we have electronic access to these files, that you don’t have to wait a few weeks for one to be mailed to you or go through interlibrary loan, and that we can do searches and get indexes or abstracts, at least for these articles, takes us a long way forward. One other thing is that I do subscribe to Google Alerts for some of my popular papers. And occasionally, maybe once every month or so when I see some new ones, I’ll just look at the article, and about half the time the person who cites the article gets it wrong; they actually refer to it in a context that’s not entirely relevant. I think in some ways, relying on an AI tool that generates summaries of the articles before people add them to their bibliography or footnotes might actually, in some cases, improve the work. Going back again to the early days, one of the things I enjoyed most when I was up there in the periodical section of the library was the articles around the ones I was looking for; they’d often lead to some interesting ideas. And that doesn’t come up as much now when you’re using an online search tool. But as you’ve noted all along, we have both benefits and costs to all this. And on this issue, I’m kinda thinking some of the benefits might be worth some of the costs, as long as people follow through and actually read the articles that seem relevant.
Marc: I think that’s the key point too. So long as this leads you to where you want to go. That’s just like what Wikipedia basically is: a great starting point for your research that leads you back to the primary sources so you can actually go in there and read them. The challenge that I think we see, and this goes back to that onion sort of analogy, is that a lot of the tools that are out there now… I think one of them is called ProDream AI or something like this… will not only find the sources for you, but then draft the lit review for you as well, so you don’t have to go through that process of actually reading it. And obviously, that’s where we want to pause and say this isn’t a good idea. But I agree with you completely, John, we are in a digital age, and we have been for over 25 years now. And in fact, what I often hear from students is: “This was a terrible experience because I can’t navigate this thing. This is just so horrible for me to do.” And yet every time I’ve done this AI research with my students, the interface design makes it much easier for them to actually find and look at sources, go through them, and think about them, and part of that is because the algorithm is using some of those techniques to narrow down their sources and help them identify them as well. So yeah, there are definitely benefits to it. It’s not all black and white, for sure.
Rebecca: There’s a lot of gray. [LAUGHTER] I think one of the things that you’re hinting at too is this difference between experts using a tool and novices, or someone who’s learning a set of skills. And the way that these tools are designed, an expert is going to be able to use a tool and make a judgment call about whether or not what’s provided is accurate, helpful, relevant, etc. Whereas a novice doesn’t know what they don’t know. And so it becomes really challenging for them to have the information literacy skills that may be necessary to negotiate whether or not this is a path to follow. For me, that’s one of the biggest differences: using these tools in a learning context versus using these tools in a professional context, where they’re ways to save time, to get to the point or get to an end result more swiftly.
Marc: Oh, absolutely. I think that thinking about the audience who’s using it matters: a first-year, true freshman student using a tool like this versus a third-year PhD student working on their thesis is a totally different audience, a totally different use case. For the most part, the PhD student hopefully has the literacy needed to effectively use these tools already. They might still need some guidance, might need some guardrails and some ethical framing for this, but it’s a very different situation from that freshman student. I think that’s why most faculty aren’t thinking about how they’re using these tools, because they already have many of those skills solidified. They don’t necessarily need a refresher course on research, because they’ve done this now for a large part of their career. From their perspective, adopting these tools is not going to necessarily de-skill them; it might just be a timesaver in this case.
Rebecca: And it matters what skills we’re offloading to a tool. Some things are just repetitive tasks that take a long time and that a tool is great at solving, kind of a waste of time otherwise, versus really critical thinking or the creative aspects of some of the work we do.
Marc: The tool I want, and I think this exists, I just haven’t found it yet, is one that, when I’m trying to write a post, instead of me searching for the URL to link into the actual title, automatically just finds the URL for me to click on. I’d review it for a second. It takes me so much time finding the URL for a page when I’m doing either a newsletter or trying to update a website; that would be amazing. Those are some of the things we could use really easily to cut down on those repetitive tasks, for sure.
John: In your sixth post in this series, you talk a little bit about issues of ethics. And one thing that I think many students have noted is that many faculty have extremely different policies in terms of when AI is allowed, if it’s allowed, and under what conditions it’s allowed, which creates a lot of uncertainty, and faculty aren’t always very good at conveying that information to students. What should we be doing to help create perhaps a more transparent environment for our students?
Marc: Well, I think transparency is the key word there. If we’re using these tools for instructional design, we want to be transparent about what we’re using them for, just to model that behavior for our students. So if I develop a lesson plan or use a slide deck that has generated images, I want to clearly identify what part AI played in that creation and talk about why that matters in these situations. What concerns me is that these tools are being turned on left and right for faculty without any sort of guides or best practices. I actually asked for Blackboard to have a feature built into its new AI assistant, so it could identify what was AI generated with a click of a button. There’s no reason why you can’t build something that tracks what was generated by AI within the learning management system. And the response that I’ve gotten is: “Who basically cares about that?” Well, I kind of care about that, and I care about this for the effects we’re trying to have on our students as well. But yeah, I think adopting a stance of transparency as a clear expectation, both for our own behavior and our students’ behavior, is going to be more meaningful than turning to a sort of opaque AI detector that’s only going to give you a percentage of whether this is AI-generated content or human content, or completely misses the entire situation and misidentifies a human being as AI or vice versa. And that’s something I think we want to focus on, being that human in the loop here, and really not offloading ethics in this case, and just trying to teach it. It is hard to do that when the technology is changing rapidly before your very eyes, though. And that’s what this has felt like now for the last two years, I think.
Rebecca: You’re really concerned when faculty lean on an AI detection tool as the only way of identifying something that might be AI generated or an academic integrity violation of some sort. Can you talk a little bit about the effectiveness of these tools, and when they might be useful and when they might not be useful?
Marc: Yeah, to me, they’re not very reliable in an academic context; there are far too many false positives. And more importantly, the faculty that employ them, for the most part, aren’t really trained to actually use them. So some universities have invested in academic misconduct officers, academic honesty officers, or whatever you call them, for offices of academic misconduct, where they actually have people who are trained to both use these tools and provide this to faculty. I might be a so-called expert at AI, and again, I’m gonna use air quotes here, because I’m self taught like everyone else is, but I don’t think I would be comfortable in an academic conduct investigation trying to use these tools, which I barely understand how they work, to build a case against a student. The few places that I’ve looked at that have engaged AI detection do so as part of a process. And the AI detector is just one part of that process; they have independent advocates usually coming in, talking with the students and talking with the faculty member. They don’t go to bringing students up on charges as the first step; they often try to look at a restorative process to see if that’s possible. So in the first instance of a student using this technology, they would sit down, and there would be a third party between the instructor and the student, and they would talk about whether something could be repaired within the relationship, and whether the student would acknowledge that an ethical breach actually happened here, not rule breaking, but an ethical breach that has damaged this relationship. And can that relationship basically be restored in some way? So to me, that’s the gold standard of trying to do this. That takes a whole bunch of resources to set up, lots of training, lots of time, versus “let’s buy an AI detector for our entire university, turn it on, and here’s a little one-page guide about how to use it.” And that, to me, is a recipe for just chaos. And it doesn’t matter what detector you’re using. They all have their own issues. And none of them are ever going to give you a complete picture of what’s going on with that student. And I think the big challenge we’re seeing too is that we’re moving well beyond AI detection into some pretty intense surveillance. We’ve got some companies going to stylometry and going through keystroke logging, tracking what was copied and pasted into a document, and when it was copied and pasted. And these are all interesting, novel techniques to try to figure out what was written and who wrote it, but they also have some downstream consequences, especially if they don’t involve training. I can imagine certain faculty using that time-stamping technique to penalize students for not spending enough time on their writing, whether there is AI in it or not. They’re looking at: “You only spent two hours on this essay that was assigned over two weeks. That’s not showing me all you’ve learned. Other students spent 5, 6, 7, 12, 14 hours on this.” So I think we have to be really careful about what comes online these next few years, and really approach it critically, just like we are asking our students to, so that we don’t look for a solution to this problem that’s based solely on technology.
John: One of the things you discuss in this essay, though, is the use of digital watermarking, such as the work that Google has been doing with SynthID. Could you talk a little bit about how that works, and what your thoughts are about it?
Marc: So watermarking has been sort of on the perpetual horizon in AI for a long time. I think Scott Aaronson, he teaches at the University of Texas, has been working with OpenAI for the last two or three years, and he has been very vocal about his own research into watermarking. And supposedly, he has a watermarking system at OpenAI working in the background; they just have not deployed it in public. Google’s SynthID is not just for text, it’s for images, it’s for audio, it’s for video. And it’s really designed for what our world is going to very soon look like, when you can have an AI that makes the President say anything, do anything, and we have to deal with these vast amounts of misinformation and disinformation. And so SynthID is their actual watermarking technique, and watermarking starts at the source of the generation. So their model was Gemini. And when watermarking comes online, it uses cryptography to put a code into the actual generation, whether that’s a picture, a video, music, or text, that can only be deciphered with a key that they actually have. And so watermarking is this really interesting technique that can be used to try to identify what was made by a machine versus a human being. The challenge is, the last time I checked, there are almost 70 different models on the market now that use multimodal AI or large language models. And those are only the ones I’ve been tracking; I’m sure there are probably hundreds of others that are small that people have been developing. Google’s SynthID model is specific to Google’s products, and all the other watermarking schemes will be absolutely specific to OpenAI or Microsoft or Anthropic or any other company. So it’s going to be a situation where you’re going to use a tool, and then you have to rely on that tool to give you a classification of whether this is accurate or not. And from what I’ve also read, it’s pretty easy to break, because you can feed it into an opposing system’s AI or an open model, and it will simply rewrite it, removing the actual code in that process. So I don’t think watermarking is going to be a long-term solution. I do think it’s a good first step towards something that we can actually do. But it’s just a little bit too chaotic right now in the space. And we would need some massive sort of multinational treaties, with different countries who don’t like to talk with us, to try to get a universal watermarking scheme in place that everyone will agree upon. And then we’d all have to cross our fingers that that key would never be released to the public. Because if that ever happened, that’s when the whole sort of house of cards falls apart.
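To make the watermarking idea concrete, here is a toy sketch of one published approach to text watermarking (a keyed “green list” bias, in the spirit of Kirchenbauer et al.’s 2023 work). It is not SynthID’s algorithm, which Google has not fully disclosed; it just shows the general principle Marc describes: the generator secretly biases its choices using a key, and a detector holding the same key can count how often those biased choices appear.

```python
# Toy illustration of keyed "green list" text watermarking.
# NOTE: not SynthID; a simplified sketch of the general principle only.
import hashlib
import random

SECRET_KEY = "shared-secret"  # held by the model provider
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug"]


def green_list(prev_word: str) -> set:
    """Derive a keyed, pseudo-random half of the vocabulary from the previous word."""
    seed = hashlib.sha256((SECRET_KEY + prev_word).encode()).hexdigest()
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: len(VOCAB) // 2])


def generate_watermarked(length: int = 30) -> list:
    """Generate toy 'text' that always picks green-list words.

    A real language model would instead nudge its probabilities toward the
    green list rather than choosing from it exclusively.
    """
    words = ["the"]
    rng = random.Random(0)
    for _ in range(length):
        words.append(rng.choice(sorted(green_list(words[-1]))))
    return words


def detector_score(words: list) -> float:
    """Fraction of words falling in the green list keyed on their predecessor.

    Unwatermarked text should score near 0.5; watermarked text near 1.0.
    """
    hits = sum(1 for prev, word in zip(words, words[1:]) if word in green_list(prev))
    return hits / (len(words) - 1)


if __name__ == "__main__":
    watermarked = generate_watermarked()
    rng = random.Random(1)
    unwatermarked = ["the"] + [rng.choice(VOCAB) for _ in range(30)]
    print("watermarked score:  ", detector_score(watermarked))    # close to 1.0
    print("unwatermarked score:", detector_score(unwatermarked))  # around 0.5
```

The sketch also illustrates the fragility Marc mentions: paraphrasing the text with another model changes the word choices, which washes out the statistical signal the detector relies on.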
Rebecca: So that’s kind of a fantasy.
Marc: …kind of a fantasy, but part of this stuff, I think, is marketing based. So, like, Google wants their products to be both safe and secure. You can’t have that safety and security unless you have some sort of system in between, and that’s what SynthID is. I think that it can possibly work for audio, for video, and even for images. I think text is a lot more fungible than anything else, because it’s very easy to start copying and pasting things out there. It’s also easy to write yourself, as a human being, into a document. And then it becomes very difficult to gauge what was human versus AI using a watermarking type of program like this.
Rebecca: The final post in your series addresses the use of generative AI tools to design instructional content and activities. Instructors often find the use of AI tools to be very useful for these purposes, even if they ban it for their students. What concerns do you have about relying on AI tools in this context?
Marc: My concern there: “AI for me, not for you. It makes perfect sense to me going forward.” Yeah, obviously we go back into this phase of trying to model ethical behavior using the tools and understanding why this matters. If you’re going to use a tool to grade or design rubrics, you want to be open about it. You want to attribute what you used the tool for, because your students are going to be looking at you and asking, “Well, how are you using this in your job? How am I going to be using this in my job when I graduate from here?” That’s the actual grounding framework we can build for this, for our students and for ourselves. If we can think about that and do that, then we don’t have to rely on technology as being the sole solution for this; we can start talking about “this is the ethical behavior I’m modeling for you, this is the ethical behavior I expect from you, let’s work together and think about what that means.” Now, that’s not always going to be the solution for this situation; some students are going to listen to that, other students are going to smile at you and go back and happily generate away and try to get past it. But the fact is, we do have that agency on our part. And that is something I think we should be leaning into right now, because the connections we’re developing with our students are, as of this time, still human-to-human based, for the most part. I want to value that and use that to try to persuade them onto an ethical pathway.
Rebecca: Modeling our use of technology leads to so many different interesting conversations with students. I know that when I’ve talked about using assistive technology in my classes, having something read to you if you’re having trouble focusing, or using some of these tech tools to remove barriers that you’re facing in getting your work done, and then sharing the ways that you use tools to do the same, it can be really helpful in leading to student success. So I can see how doing the same thing when it’s an AI product is relevant. I know that I used AI to generate a bunch of little case studies for one of my classes, and I just told the students that that’s what I did… I fed it a prompt, and I made some tweaks to it, but this is where it came from. And they found it really interesting, and we ended up having a really interesting conversation about when it might be most relevant to use particular tools and when maybe it’s not as wise to use a particular tool, because it isn’t actually helping you in any kind of way, or it’s defeating the learning, or it’s not really creating a good product in the end.
Marc: That’s a wonderful use case. I mean, sitting down there talking with them and saying how I use this, why I use this, let’s get into a discussion about this, maybe even a debate about it, is part of the learning process. And I’m glad you focused on the assistive technologies. I want my students to use this technology if they need to; they don’t need to announce that they have a disability. We need to really be focusing on this fact, for education and beyond. At our university, they have to go through a very formalized process to be recognized by the Office of Student Disabilities. It’s very expensive, it’s time consuming, and that is out of reach for the vast majority of students, even if they felt comfortable going out there and advocating for themselves that way, or if they had parents or other resources to do that. I want to design my classes so that students are aware that these tools exist, that they can use them, and that I can trust them to hopefully use them in a way that is effective for their learning. That’s what I want. Now, whether that’s going to happen is another question, indeed. But that’s going to take time. The one thing I will say, and this is something that popped up in a recent story that I read, is that professors were moving from a point of despair to anguish with this technology, and I want us to avoid that more than anything else. Because that’s not the sort of stance we need to be taking for ourselves when we deal with this technology with our students. We can navigate this, it’s just going to take a lot of time and a lot of energy. And I hope administrations at various institutions are listening to that too, that they really need to focus on the training aspect of this technology, both for students and for actual teachers. This isn’t just something you flip a switch and turn on and say: “You guys now have AI, go learn how to use it…” That has been a recipe for disaster.
Rebecca: It’s definitely a complex topic, because there’s so much hope for equity in some of these tools, especially for students with disabilities. But then there’s also the really scary parts too. [LAUGHTER] So finding that balance, and making sure that both enter conversations when we’re having conversations about AI, I think, is really important. And I appreciate that today we’ve done that, that we’ve talked about some of the scary aspects, but also there’s some real benefits to having these tools available to our students and incorporating them and really having deep and meaningful conversations about them.
Marc: Absolutely. I think that one of the most powerful things I’ve done from the AI Institute is when you can get a skeptic and an early AI adopter at the same table together talking about these things back and forth. You really do see how people come out of their sort of silos and their positions and they can kind of come together and say “Yes, this is an actual use case or two. This is actually meaningful. This is good. How do I make sure that I can put some boundaries on this for my own students and their learning?”
John: So, we always end with a question which is so much on everyone’s mind concerning AI, and that is: “what’s next?”
Marc: Well, what is next indeed? So I think we’re all holding our breath to see if OpenAI is going to fulfill its promise and turn on this new multimodal system that lets you talk with it and lets it see you, because they have not done so yet. So we have a little bit of time. But that is going to be on everyone’s mind this fall if they do. Because having an AI that can listen to you, talk with you, and have a voice that you get to program is going to bring a new set of challenges that we have not really dealt with yet.
John: Well, thank you. This has been fascinating, and your series is wonderful. And I hope that all faculty think about these issues, because a lot of people are focusing on a very narrow range of issues and AI is going to affect many aspects of how we work in higher ed.
Marc: Thank you, John. Thank you, Rebecca. This has been great too. And hopefully I’ll be putting some more resources into that series [LAUGHTER] when I have a chance to do so here.
John: And we will include a link to your substack in the show notes because you’ve got a lot of good information coming out there regularly.
Marc: Thank you.
Rebecca: Well, thanks for joining us. We hope to talk to you again soon.
Marc: I appreciate it. Thank you guys.
[MUSIC]
John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.
Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.
[MUSIC]