311. Upskilling in AI

With so many demands on faculty time, it can be difficult to prioritize professional development in the area of AI. In this episode, Marc Watkins joins us to discuss a program that incentivizes faculty development in the AI space. Marc is an Academic Innovation Fellow at the University of Mississippi, where he helped found and currently directs the AI Institute for Teachers.

Show Notes

Transcript

John: With so many demands on faculty time, it can be difficult to prioritize professional development in the area of AI. In this episode, we examine a program that incentivizes faculty development in the AI space.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

[MUSIC]

John: Our guest today is Marc Watkins. Marc is an Academic Innovation Fellow at the University of Mississippi, where he helped found and currently directs the AI Institute for Teachers. Welcome back, Marc.

Marc: Thank you, John. Thank you, Rebecca. It’s great to be back.

Rebecca: We’re glad to have you. Today’s teas are:… Marc, are you drinking tea?

Marc: I am. I have a Cold Brew Hibiscus, which is really great. It’s still very warm down here in Mississippi, so it’s nice to have something that’s a little bit cool and refreshing.

Rebecca: That sounds yummy. How about you, John?

John: I am drinking a peppermint spearmint tarragon blend today. And it’s not so warm here. In fact, my furnace came on for the first time yesterday.

Rebecca: Yeah, transitions. And, I have English tea time today.

Marc: Well, that’s great.

John: So we have invited you here to discuss your ongoing work related to ChatGPT and other AI tools. Could you first describe what the AI Institute for Teachers is and its origins?

Marc: Sure. I was last a guest on your show in January of this year, and it seems like a thousand years ago. [LAUGHTER] During that spring semester, I took a much deeper dive into a lot of the generative AI tools than we had in the original fall pilot. And we started noticing that the pace at which big tech was deploying these tools and integrating them with existing software from Microsoft and Google was only accelerating. So in about April or May, I went to my chair, Stephen Monroe, and said, “I think we need to start training some people to get them prepared for the fall,” because we thought that fall was going to be what it is right now, which is a chaotic mashup of everything you can imagine: some people diving in deeply, some trying to ban it, some trying critical approaches with it too. So we worked with the Institute of Data Science here at the University of Mississippi, and we got some money, and we were able to pay 23 faculty members $1,000 apiece to train them for a day and a half on everything we knew about generative AI: AI literacy, ethics, which tools were working in the classroom and which weren’t. Their whole goal was to go back to their home departments over the summer and serve as ambassadors to help prepare them for the fall semester. We’ve had funding for one institute so far, and now we’re doing workshops and searching, as we all are, for more funding to do more.

Rebecca: How did faculty respond to (A) the incentive, but (B) also [LAUGHTER] the training that went with it?

Marc: Well, not surprisingly, they responded really well to the incentives. When you can pay people for their time, they generally do show up. We had quite a few people wanting to take the training, both internally from the University of Mississippi and then, as people started finding out about it, because I was posting it on Twitter and writing about it on my Substack, externally as well. We had interest from graduate students in Rome, interest from other SEC schools wanting to attend, and even interest from a community college in Hawaii. We’ve definitely seen a lot of interest within our community, both locally and more broadly, nationally.

Rebecca: Did you find that faculty were already somewhat familiar with AI tools? I had an interesting conversation with some first-year students just the other day, and we were talking about AI and copyright. I asked, “Hey, how many of you have used AI?” Another faculty member and I indicated that we had used AI, to make it safe for the students to indicate that they had. Many of them shook their heads like, no, they hadn’t, and they were unsure. Then I started pointing to places where we see snippets of it, in email and in texting and other places where there’s auto-finishing of sentences and that kind of thing. And then they were like, “Oh, yeah, I have seen that. I have engaged with that. I have used that.” What did you find about faculty’s knowledge?

Marc: Extremely limited. They thought of AI as ChatGPT. And one of the things we did in the session was basically frame it out as, “Look, this is not going to remain a single interface anymore.” One of the things that happened during the institute that was completely wild to me came on the last day. I woke up that morning, and I’d signed up through Google Labs, as you can as well, to turn on the generative features within the Google suite of tools, including search, Google Docs, Sheets, and everything else. They gave me access that last day, right before we began. So I literally just plugged in my laptop and said, “This is what it’s going to look like when you have generative AI activated in Google Docs.” It pops up and immediately greets you with a wand and the phrase “Help me write.” And what I tried to explain to them, and have explained to faculty ever since, is that it makes having a policy against AI very difficult when it shows up in an existing application with no indication whatsoever that this is in fact generative AI. It’s just another feature in an application that, from many of our students’ perspectives, they have grown up with their entire lives. So yeah, we need to really work on training faculty, not just on the actual systems themselves, but also on getting outside of the mindset that the AI we’re talking about is just ChatGPT. It’s a lot more than that.

John: Yeah, in general, when we’ve done workshops, we haven’t had a lot of faculty attendance, partly because we haven’t paid people to participate. [LAUGHTER] But what’s been surprising to me is how few faculty have actually explored the use of AI. My experience with first-year students was a little different than Rebecca’s: about half of the students in my large intro class said that they had explored ChatGPT or some other AI tool, and they seemed pretty comfortable with it. But faculty, at least in our local experience, have generally been a bit avoidant of the whole issue. I think they’ve taken the approach that this is something we don’t want to know about, because it may disrupt how we teach in the future. How do you address that issue, and get faculty to recognize that this is going to be a disruptive technology in terms of how we assess student learning, how students are going to be demonstrating their learning, and how they’re going to be using these tools for the rest of their lives in some way?

Marc: That’s a great question. We trained 23 people, and I’ve also been holding workshops for faculty, and the enthusiasm was a little bit different in those contexts. I agree that faculty feel overwhelmed, and maybe some of them want to ignore this and don’t want to deal with it, but it is here, and it is being integrated at phenomenal rates into everything around us. If faculty don’t come to terms with this, and start thinking about engagement with the technology, both for themselves and for their students, then it is going to create incredible disruption that’s going to be lasting; it’s not going to go away. We’re also not going to have AI detection come in and save the day for them the way plagiarism detection did. Those are all things we’ve been trying to very carefully explain to faculty to get them on board. Some of them just aren’t there yet. I understand that; I empathize, too. These things take a huge amount of time to think about and talk about. And we’re just coming out of the pandemic; people are exhausted, and they don’t want to deal with another, quote unquote, crisis, which is another thing that we’re seeing. So there are a lot of factors at play here that make faculty engagement less than what I’d like to see.

Rebecca: We had a chairs’ workshop over the summer, and I was somewhat surprised, based on our other interactions with faculty, by how many chairs had used AI. The number was actually significant, and most of them were familiar with it. That, to me, was encouraging. [LAUGHTER] It was like, “Okay, good, the leaders of the ship are aware. That’s good, that’s exciting.” But it’s also interesting to me that there are so many folks who are not that familiar, who haven’t experimented, but who seem to have really strong policies around AI use, or this idea of banning it or wanting to use detectors, without really being familiar with what those tools can and cannot do.

Marc: Yeah, that’s very much what we’re seeing across the board too. The first detector that I’m aware of that really came online for everyone was basically GPTZero. There were a few others that existed beforehand; IBM was involved in one called the Giant Language Model Test Room, or GLTR. But those were based on GPT-2, so you’re going back in time to 2019. I know how ridiculous it is to go back four years in technology terms and think about this… that was a long time ago. And education really started adopting, or seemed to adopt, detection based off of that panic. The problem with putting a system like that in place in education is that it’s not necessarily very reliable. Turnitin also adopted their own AI detector. A lot of different universities began to explore and play around with it. I believe, and I don’t want to be misquoted here or misrepresent Turnitin, that when they initially came out with it, they were saying there was only a 1% false positive rate for detecting AI. They’ve since raised that to 5%. And that has some really deep implications for teaching and learning. Most recently, Vanderbilt’s Center for Teaching made the decision to not turn on the AI detection feature in Turnitin. Their reasoning was that in 2022 they had some 75,000 student papers submitted. If they’d had the detector on then, it would have falsely flagged about 3,000 papers. And they just can’t deal with that sort of situation at a university level. No one can. You’d have to investigate each one, and you’d also have to give students a hearing, because that is part of due process. It’s just too much. And that’s one of my main concerns about these tools: they’re just not reliable in education.
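
To put numbers on that conundrum, here is a quick back-of-the-envelope sketch in Python. The paper count is the Vanderbilt figure Marc cites; the two false positive rates are the ones mentioned in the episode.

    # Expected false positives from an AI detector, using the figures
    # cited in the episode (75,000 papers submitted in 2022).
    papers_submitted = 75_000

    for false_positive_rate in (0.01, 0.05):  # initial and revised rates
        falsely_flagged = papers_submitted * false_positive_rate
        print(f"At {false_positive_rate:.0%}, roughly {falsely_flagged:,.0f} "
              f"papers would be falsely flagged.")

Even the 1% rate implies 750 papers to investigate and adjudicate; at 5% it is 3,750, in the neighborhood of the roughly 3,000 Marc mentions.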

John: And it’s unreliable in terms of both false positives and false negatives. So some of us are troubled that we have allowed the Turnitin tool to be active, and we have urged that our campus shut it down for those very reasons. Vanderbilt was one of the biggest campuses to do that, but I think quite a few campuses are moving in that direction.

Marc: Yes, the University of Pittsburgh also made the decision to turn it off, and I think several others did as well.

Rebecca: It’s interesting: if we don’t have a tool to measure, a tool to catch, if you will, then you can’t really have a strong policy saying you can’t use it at all. [LAUGHTER] There’s no way to follow up on that or take action on it.

Marc: That’s the conundrum for education, I think, and we’re trying to explain it to faculty. Much more broadly in society, though, if you don’t have a tool that works when you’re talking about Twitter, I’m sorry, X now, and trying to understand whether material is actually real or fake, that becomes a societal problem too, and that’s what they’re trying to work on with watermarking. I believe the big tech companies have agreed to watermark audio, video, and image outputs, but they’ve not agreed to do text outputs, because text is a little too fungible: you can go in and copy it, you can change it around a little too much. So it’s definitely going to be a problem when state governments start to look at this, and they start wondering whether the police officer taking your police report is writing it in their own words, or whether the tax official is using this as well. It’s going to be a problem well outside of education.

Rebecca: And if we’re not preparing our students for that world, in which they will likely be using AI in their professional fields, then we’re not necessarily doing our jobs in education and preparing our society for the future.

Marc: Yeah, I think training is the best way forward, and again, it goes back to the idea of intentional engagement with the technology: giving students situations where they can use it, where you, as a faculty member, hopefully have the knowledge and the resources to begin to integrate these tools, talk about the ethical use cases, understand the limitations, including the fact that it is going to hallucinate and make things up, and think about what sort of parameters you want to put on your own usage too.

John: One of the things that came out within the last week or so, I believe,… we’re recording this in late September… was the introduction of AI tools into Blackboard Ultra. Could you talk a little bit about that?

Marc: Oh boy, yes indeed. They announced last week that the tools were available in Blackboard Ultra. They turned it on for us here at the University of Mississippi, and I’ve been playing around with it, and it is a little bit problematic. Right now, with a single click, it will scan the existing materials in your Ultra course and create learning modules. It will create quiz questions based off that material, it will create rubrics, and it will also generate images. Now, compared to what we’ve been dealing with in ChatGPT and all these other capabilities, this is almost a little milquetoast by comparison. But it’s also an inflection event for us in education, because it’s now here, directly in our learning management system, and it’s going to be something we’re going to have to contend with every single time we open it up to create an assignment or an assessment. I’ve played around with it. It’s an older version of GPT. The image generator, I think, is based on DALL-E, so you ask for a picture of college students and you get some people with 14 fingers and weird artifacts all over their faces, which may not be all that helpful for your students. And the learning modules it builds are not my thinking, necessarily; they’re just what the algorithm predicts based off the content that exists in my course. We have to have that discussion with our faculty, have them cross that Rubicon and say, “Okay, I’m worried about my students using this; what happens to me and my teaching, my labor, if I start adopting these tools? There could be some help, definitely; this could really streamline the process of course creation and actually align it with the learning outcomes my department wants for this particular class.” But it also puts us in a situation where automation is now part of our teaching. And we really haven’t thought about that. We haven’t really gotten to that conversation yet.

Rebecca: It certainly raises many ethical questions, including questions about disclosing to students what has been produced by us as instructors and what has been produced by AI, and about the authorship of what’s there. Especially if we’re expecting students to [LAUGHTER] do the same thing.

Marc: The cognitive dissonance is mind-boggling: having a policy that says “no AI in my class,” and then all of a sudden it’s there in my Blackboard course and I could click on something. And, at least in this integration of Blackboard, and they may very well change this, once you do, there’s no way to natively indicate that the content was generated by AI. You have to manually go in there and say how it was created. I value my relationship with my students; it’s based on mutual trust. I think almost everyone in education does. If we want our students to act ethically and use this technology openly, we should expect the same of ourselves. If we get into a situation where I’m generating content for my students and then telling [LAUGHTER] them that they can’t do the same with their own essays, it’s just going to be a big mess.

John: So given the existence of AI tools, what should we do in terms of assessing student learning? How can we assess the work reasonably given the tools that are available to them?

Rebecca: Do you mean we can just use that auto-generated rubric we just learned about, right? [LAUGHTER]

Marc: You could. You can also use an auto-generated rubric separately from Blackboard. One of the tools I’m piloting right now is a feedback assistant called MyEssayFeedback, developed by Eric Kean and Anna Mills. I consulted with them on it; Anna is very big in the AI space for composition. I’ve been piloting this with my students. They know it’s an AI; they understand this. I did get IRB approval to do so. I’ve just gotten the second round of generated feedback, and it’s thorough, it’s quick, it’s to the point. And it’s literally making me ask, “How am I going to compete with that?” And maybe I shouldn’t be competing with it. Maybe I’m not going to be providing that feedback, and I should be providing my time in different ways: maybe I should be meeting with students one on one to talk about their experiences. But I think you raise an interesting question. I don’t want to be alarmist; I want to be as level-headed as I can. But from my perspective, all the pieces are now there to automate learning to some degree. They haven’t all been hooked up yet and put together into a cohesive package, but they’re all there in different areas, and we need to be paying attention to this. Our hackles need to be raised just slightly at this point to see what this can do, because I think that is where we are headed with integrating these tools into our daily practice.
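
MyEssayFeedback’s internals aren’t public, so what follows is only a generic sketch of the pattern such feedback assistants tend to follow: send a student draft plus the instructor’s rubric to a language model and get rubric-keyed comments back. It assumes the OpenAI Python SDK; the model name, prompts, and helper name are placeholders, not details from the actual tool.

    # Generic sketch of an AI feedback assistant (NOT MyEssayFeedback itself).
    # Assumes the OpenAI Python SDK; model name and prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def essay_feedback(draft: str, rubric: str) -> str:
        """Return rubric-aligned feedback on a student draft."""
        response = client.chat.completions.create(
            model="gpt-4",  # placeholder; any capable chat model
            messages=[
                {"role": "system",
                 "content": ("You are a writing tutor. Give specific, "
                             "constructive feedback keyed to the rubric. "
                             "Do not rewrite the essay for the student.")},
                {"role": "user",
                 "content": f"Rubric:\n{rubric}\n\nStudent draft:\n{draft}"},
            ],
        )
        return response.choices[0].message.content

The system prompt is where the pedagogy lives: constraining the model to comment rather than rewrite is what keeps the student doing the writing.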

Rebecca: AI generally has raised questions about intellectual property rights. If our learning management systems are using our content in ways that we aren’t expecting, how does that violate our rights, or the rights that the institution has over the content that’s already there?

Marc: For a lot of the people I speak with, their course content, their syllabi, are, from their perspective, their own intellectual property in some ways. We get debates about that, about whether the university actually owns some of the material, and there have been instances in the past where lectures were copyrighted. If you’re allowing the system to scan your lecture, you are exposing it to generative AI. That gets at one aspect of this. The other aspect, which I think Rebecca is referring to, is that the material used to train these large language models may itself have been stolen or not properly sourced from the internet, and you’re using it while trying to teach your students [LAUGHTER] to cite material correctly, so it’s just a gigantic conundrum of legal and ethical challenges. The one silver lining in all this, and this has been across the board with everyone in my department, is that it’s wonderful material to talk about with your students. They are actively engaged with it; they want to know about this; they want to talk about it. They are shocked and surprised by everything that has gone into the training of these models and the different ethical situations with data. So if you want to engage your students, talking to them about AI is a great first step in developing their AI literacy. And it doesn’t matter what you’re teaching; it could be a history course, it could be a course in biology, this technology will have an impact in some way, shape, or form on your students’ lives, and they want to talk about it. Something else worth talking about is that there are a lot of tools outside of ChatGPT, and a lot of different interfaces as well. I don’t know if I mentioned this before in the spring, but the tools that have really been effective for a lot of students are the reading assistants. One that we’ve been employing is called ExplainPaper: you upload a PDF to it, it calls upon generative AI to scan the paper, and you can select whatever reading level you want and have it translate the text to that level. The one problem is that students don’t realize they might be giving up some close reading and critical reading skills to it, just as we do in any sort of relationship with generative AI; there is that handoff and offloading of thinking. But for the most part, they have loved it, and it’s helped them engage with some really critical texts that normally would not be at their reading level and that I would usually not assign to certain students. There are plenty of new tools coming out too. One of them is Claude 2, to be precise, by Anthropic. It just came out, I think, in July for public release; it is as powerful as GPT-4, and it is free right now if you want to sign up for it. The reason I mention Claude is that its context window, what you can actually upload to it, is so much bigger than ChatGPT’s. I believe it’s about 75,000 words, so you can upload four or five documents at a time and synthesize them. One use case I’ve had for it: I collected tons of reflections from my students this past year about the use of AI. It’s all in a messy Word document, 51 pages single spaced, all anonymized, so there’s no data that identifies them. It’s such a time suck to go through and code those reflections by hand, so I’ve just been uploading them to Claude and having it run a sentiment analysis to point out which reflections are positive, and in what way, and it does it within a few seconds. It’s amazing.
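
As a concrete illustration, here is a minimal sketch of that reflection-coding workflow, assuming the Anthropic Python SDK. The model name, file name, and prompt wording are placeholder assumptions, not details from the episode.

    # Minimal sketch of the reflection-coding workflow (assumptions:
    # Anthropic Python SDK; placeholder model name and file name).
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Stand-in for the anonymized 51-page Word document, exported as plain text.
    with open("reflections.txt", encoding="utf-8") as f:
        reflections = f.read()

    message = client.messages.create(
        model="claude-2.1",  # placeholder; any long-context Claude model
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": (
                "Here are anonymized student reflections on using AI in class:\n\n"
                f"{reflections}\n\n"
                "Run a sentiment analysis: list which reflections are positive "
                "and which are negative, and briefly explain why for each."
            ),
        }],
    )
    print(message.content[0].text)

The long context window is what makes this workable: the whole document fits into a single prompt instead of being chunked across many requests.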

John: One other nice thing about Claude is that its training data extends into early 2023, so it has much more current information. That is actually a little concerning for those faculty who had been asking questions about recent events, particularly in online asynchronous courses, precisely because ChatGPT could not address them. With Claude’s expanded training data, that’s no longer quite the case.

Marc: That’s absolutely correct. And to add to our earlier discussion about AI detection: none of the AI detectors that I’m aware of have had time to actually train on Claude. So if you generated an essay with Claude… and you’re free to try this on your own, your listeners too… and uploaded it to one of the AI detectors, you’re very likely going to get zero detection, or a very low detection rate, because it’s, again, a different system. It’s new, and the existing AI detectors haven’t had time to catch up. So the way to translate this is: don’t tell your students about it right now, or, in this case, be very careful about how you introduce this technology to your students, which we should do anyway. But this is one of those tools that is massively powerful, and a lot of people just haven’t known about it because, again, ChatGPT takes up all the oxygen in the room when we talk about generative AI.

John: What are some activities where we can have students productively use AI to assist their learning or as part of their educational process?

Marc: That’s a great question. We started developing very specific activities aimed at different pain points in writing classes. One of them was getting students to integrate the technology directly: we built a very careful assignment that called on very specific moves for them to make, both in terms of their writing and their integration of the technology. We also looked at research-question-building assignments. And I have assignments from my Digital Media Studies students right now about how they can use it to create infographics. Using the paid version, ChatGPT Plus, they have access to plugins, and those plugins give them access to Canva and Wikipedia. So they can use Canva to create full-on presentations from their own natural language, and use actual real sources, by using those two plugins in conjunction with each other. I then make them go through it, edit it in their own words, their own language, and reflect on what this has done to their process. So, lots of different examples. It really is limited only by your imagination at this point, which is exciting, but that’s also kind of the problem we’re dealing with: there’s so much to think about.

Rebecca: From your experience training faculty, what are some getting-started moves faculty can take to become familiar enough to integrate AI by the spring?

Marc: Well, there are a few really fast courses. I think Ethan Mollick from the Wharton School of Business put out a very effective training course, all through YouTube; it’s four or five videos, very simple to take, for getting used to how ChatGPT works, how Microsoft’s Bing works, and what sorts of activities students and faculty can use them for. Microsoft has also put out a very fast course, I think it takes 53 minutes to complete, about using generative AI technologies in education. Those are all very fast ways of coming up to speed with the actual technology.

John: And Coursera has a MOOC through Vanderbilt University, on Prompt Engineering for ChatGPT, which can also help familiarize faculty with the capabilities of at least ChatGPT. We’ll include links to these in the show notes.

Marc: I really, really hope Microsoft, Google, and the rest of them calm down, because this has gotten a little bit out of control. These tools are often integrated without use cases; the companies are waiting to see how we’re going to end up using them, and that is concerning. Google has announced that they are committed to releasing their own model, in competition with GPT-4, I think it’s called Gemini, by late November. So it looks like they’re just going to keep heating up this arms race, with bigger, more capable models, and I think we need to ask ourselves more broadly what our capacity is just to keep up with this. My capacity is about negative zero at this point… going down further.

John: Yeah, we’re seeing new AI tools coming out almost every week or so now in one form or another. And it is getting difficult to keep up with. I believe Apple is also planning to release an AI product.

Marc: They are. They also have a car they’re planning to release, which is the weirdest thing in the world to me: you could have your iPhone charging in your Apple Car.

John: GM has announced that they are not going to support either Android Auto or Apple CarPlay in their electric vehicles, so perhaps this is Apple’s way of getting back at them. And we always end with the question: what [LAUGHTER] is next? Which is perhaps a little redundant here, but we do always end with that.

Marc: Yeah, I think what’s next is trying to critically engage the technology and explore it not out of fear, but out of a sense of wonder. I hope we can continue to do that. I do think we are seeing a lot of people starting to dig in. And they’re digging in real deep. So I’m trying to be as empathetic as I can be for those that don’t want to deal with the technology. But it is here and you are going to have to sit down and spend some time with it for sure.

John: One thing I’ve noticed in working with faculty is that they’re very concerned about the impact of AI tools on their students and student work, but they’re really excited about all the possibilities it opens up for them in terms of simplifying their own workflows. That, I think, is a positive sign.

Rebecca: Perhaps they can channel that excitement into understanding how to work with students.

Marc: I hope so; there’s a positive pathway forward with that too.

John: Well, thank you. It’s great talking to you and you’ve given us lots more to think about.

Marc: Thank you guys so much.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

Ganesh: Editing assistance by Ganesh.

[MUSIC]