274. ChatGPT

Since its release in November 2022, ChatGPT has been the focus of a great deal of discussion and concern in higher ed. In this episode, Robert Cummings and Marc Watkins join us to discuss how to prepare students for a future in which AI tools will become increasingly prevalent in their lives.

Robert is the Executive Director of Academic Innovation, an Associate Professor of Writing and Rhetoric, and the Director of the Interdisciplinary Minor in Digital Media Studies at the University of Mississippi. He is the author of Lazy Virtues: Teaching Writing in the Age of Wikipedia and is the co-editor of Wiki Writing: Collaborative Learning in the College Classroom. Marc Watkins is a Lecturer in the Department of Writing and Rhetoric at the University of Mississippi. He co-chairs an AI working group within his department and is a WOW Fellow, where he leads a faculty learning community about AI’s impact on education. He’s been awarded a Pushcart Prize for his writing and a Blackboard Catalyst Award for teaching and learning.

Show Notes


John: Since its release in November 2022, ChatGPT has been the focus of a great deal of discussion and concern in higher ed. In this episode we discuss how to prepare students for a future in which AI tools will become increasingly prevalent in their lives.


John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.


John: Our guests today are Robert Cummings and Marc Watkins. Robert is the Executive Director of Academic Innovation, an Associate Professor of Writing and Rhetoric, and the Director of the Interdisciplinary Minor in Digital Media Studies at the University of Mississippi. He is the author of Lazy Virtues: Teaching Writing in the Age of Wikipedia and is the co-editor of Wiki Writing: Collaborative Learning in the College Classroom. Marc Watkins is a Lecturer in the Department of Writing and Rhetoric at the University of Mississippi. He co-chairs an AI working group within his department and is a WOW Fellow, where he leads a faculty learning community about AI’s impact on education. He’s been awarded a Pushcart Prize for his writing and a Blackboard Catalyst Award for teaching and learning. Welcome, Robert and Marc.

Robert: Thank you.

Marc: Thank you.

Rebecca: Today’s teas are:… Marc, are you drinking tea?

Marc: My hands are shaking from so much caffeine inside of me. I started off today with some… I think it’s Twinings Christmas Spice, which is really popular around this house since I got that in my stocking. My wife is upset because I’m a two-bag-per-cup person. And she says you’ve got to stop that, so she cuts me off around noon [LAUGHTER] just to let me sort of dry out, for lack of a better word, from caffeine withdrawal.

Rebecca: Well, it’s a great flavored tea. I like that one too.

John: It is.

Rebecca: I could see why you would double bag it.

Marc: I do love it.

Rebecca: How about you, Robert?

Robert: I’m drinking an English black tea. A replacement. Normally my tea is Barry’s tea, which is an Irish tea….

Rebecca: Yeah.

Robert: …but I’m out, so I had to go with the Tetley’s English black tea.

Rebecca: Oh, it’s never fun when you have to go to your second string. [LAUGHTER]

John: And I am drinking a ginger peach black tea from the Republic of Tea.

Rebecca: Oh, an old favorite, John.

John: It is.

Rebecca: I’m back to one of my new favorites, the Hunan Jig, which I can’t say with a straight face. [LAUGHTER]

John: We’ve invited you here today to discuss ChatGPT. We’ve seen lots of tweets, blog posts, and podcasts in which you both discuss this artificial intelligence writing application. Could you tell us a little bit about this tool, where it came from, and what it does?

Marc: I guess I’ll go ahead and start. I am not a computer science person, I’m just a writing faculty member. But we did get a little bit of a heads up about this in May, when GPT3, which is the precursor to ChatGPT, was made publicly available. It was in a private beta for about a year and a half while it was being developed, and then went public in May. And I logged in, through some friends of mine on social media, to start checking out and seeing what was going on with it. Bob was really deep into AI with the SouthEast conference. You were at several AI conferences during the summer as well, Bob. It is a text synthesizer: it’s based on an enormous amount of text scraped from the internet, and the model has 175 billion parameters. It’s just sort of shocking to think that this can now be accessed through your cell phone, if you want to do it on your actual smartphone, or through a computer browser. But it is something that’s here. It’s something that functions fairly well, though it makes things up sometimes. Sometimes it can be really very thoughtful, though, in its actual output. It’s very important to keep in mind that AI is more like a marketing term in this case. There’s no thinking, there’s no reasoning behind it. It can’t explain any of its choices. We use the term writing when we talk about it, but really what it is doing is just text generation. Writing is the whole thinking process: going through it, being able to explain your choices, and that sort of thing. So it’s a very, very big math engine, with a lot of processing power behind it.
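To make concrete Marc’s point that a text generator is statistics rather than reasoning, here is a deliberately tiny sketch: a bigram model that emits each next word by sampling from continuations it has seen before. This is an illustrative toy, not how GPT3’s transformer architecture works, and the corpus and function names are invented for the example.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which word follows which in the training text."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=8, seed=42):
    """Emit words by sampling observed continuations -- no reasoning involved."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:  # dead end: no continuation ever observed
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = ("writing is thinking and thinking is hard "
          "writing is rewriting and rewriting is thinking")
model = train_bigrams(corpus)
print(generate(model, "writing"))
```

Scaled up from a fifteen-word corpus to hundreds of billions of words and parameters, the same basic move, predicting a plausible next token, produces fluent text with no underlying model of truth, which is why such systems “make things up.”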

Robert: I completely agree with everything Marc’s saying. The way I think about it, and I believe it’s true, Marc, as far as we know, is that it’s still OpenAI using GPT3, so it’s really the same tool as Playground. I think it’s really interesting that when OpenAI shifted from their earlier iteration of this technology, which was Playground, and there were some other spin-offs from that as well, but that was basically a search format, where you would enter a piece of text and then you would get a response, when they shifted it to chat, it seemed to really take it to the next level in terms of the attention that it was gathering. And I think it’s rhetorically significant to think about that, because of the personalization, perhaps, the idea that you had an individual conversation partner. I think it’s exceptionally cute, the way that they have the text scroll in ChatGPT so as to make it look like the AI is “thinking,” rather than pushing it out when it’s immediately available. All of that reminds me a little bit of Eliza, one of the first sort of AI games you could play, where you try to guess whether or not there is another person on the other side of the chat box. It reminds me a bit of that. But I can certainly see why placing this technology inside of a chat window makes it so much more accessible and perhaps even more engaging than what we previously had. But the underlying technology, as far as I can see, is still GPT3, and it hasn’t changed significantly, except for this mode of access.

Rebecca: How long has this tool been learning how to write or gathering content?

Marc: Well, that’s a great question. It is really just a successor to GPT3. And again, we don’t really know, because OpenAI isn’t exactly open, despite their name. The training data for ChatGPT cuts off about two years ago. And of course, ChatGPT was launched last year at the end of November. So it’s very recent, pretty up to date with some of that information. You can always check a language model and see how much it actually, as we say, knows about the world by what recent events it can accurately describe. It’s really interesting how quickly people have freaked out about this. And building off of what Bob said, I think he’s very right that this slight rhetorical change in the user interface, to a chat that people can actually interact with, set off this moral panic in education. You know this through the state of New York: New York City schools have now tried to ban it in the classroom, which I think is not going to work out very well. But it is certainly the theme we’re seeing, not just in K through 12, but also in higher ed… seeing people talk about going back to blue books, going to AI proctoring services, which are some of the most regressive things you could possibly imagine. And I don’t want to knock people for doing this, because I know that they’re frightened, and they probably have good reason to be frightened, because it’s disrupting their practice. We’re also, hopefully, at the tail end of COVID, which has left us all with very little capacity to deal with this.
But I do want everyone to keep in mind, and Bob’s really a great resource on this from his work with Wikipedia, that your first impression of a tool matters, especially if you’re a young person and you have someone in authority telling you what a tool is. If you tell them that the tool is there to cheat, or that it is there to destroy their writing process or learning process, that idea is going to be cemented in them for a very long time. And it’s gonna be very hard to dissuade people of that too. So really, what I’ve tried to do is caution people that we need to be not so panicked about this. That’s much easier said than done.

Robert: Marc and I started giving a talk on our campus through our Center for Teaching and Learning and our academic innovations group in August. And we’ve just sort of updated it as we’re invited to continue to give the talk. But in it, we offer a couple of different ways for faculty to think about how this is going to impact their teaching. And one of the things that I offered back in August, and I think it still holds true, is to think about writing to learn and/or writing to report learning. And so writing to learn is going to mean, now, writing alongside AI tools. And writing to report learning is going to be a lot trickier, depending on what types of questions you ask. So I think it’s going to be a situation, and I’ve already seen some of this work in the POD community, where writing to report learning has to maybe change gears a bit and think about different types of questions to ask. And the types of questions will be those that are not easily replicated, or answered in a general knowledge sort of way, but that lean on specific things that you, as instructor, think are going to be valuable in demonstrating learning, but are also not necessarily part of a general knowledge base. So, for instance, if you’re a student in my class, and we’ve had lots of discussions about… I don’t know… quantum computing, and in a certain discussion session Marc threw out a specific idea about quantum computing, what I might do on my test is cite that as a specific example, remind students that we discussed it in class, and then ask them to write in response to parts of that class discussion. That way, I could be touching base with something that’s not generally replicable and easily accessible to AI. But I can also ask a question that’s going to ask my students to demonstrate knowledge about general concepts. 
And so, if both elements are there, then I probably know that my short answer question is authentically answered by my students. If some are not, then I might have questions. So I think it’s gonna be about tweaking what we’re doing and not abandoning what we’re doing. But it’s really a tough moment right now. Because, as soon as we say one thing about these technologies, well then they iterate and they evolve. It’s just a really competitive landscape for these tool developers. And they’re all trying to figure out a way to develop competitive advantage. And so they have to distinguish themselves from their competitors. And we can’t predict what ways that they will do that. So it’s going to be a while before, I think, this calms down for writing faculty specifically and for higher education faculty generally, because, of course, writing is central to every discipline and what we do, or at least that’s my bias.

Rebecca: So I’m not a writing faculty member. I’m a designer and a new media artist. And to me, it seems like something that could be fun to play with, which is maybe a counter to how some folks might respond to something like this. Are there ways that you can see a tool like this being useful in helping or advancing learning?

Robert: So, we’ve talked about this a bit. I really think that the general shape of the response, in writing classes specifically, is about identifying specific tools for specific writing purposes in specific stages. So if we’re in the invention stage, and we’re engaging a topic and you’re trying to decide what to write about, maybe dialoguing with OpenAI’s tool with some general questions is going to trigger some things that you’re going to think about and follow up on. It could be great. You know, Marc was one of the first people to point out, I think it was Marc who said this, that for folks who have writer’s block, this is a real godsend, or could be. It really helps get the wheels turning. So we could use it in invention, we can use it in revision, we can use it to find sources once we already have our ideas. So: identify specific AI iterations for specific purposes inside of a larger project. I think that’s a method that’s going to work and is going to get toward that goal that we like to state in our AI Task Force on campus here, which is helping students learn to work alongside AI.

Marc: Yeah, that’s definitely how I feel about it too. And to kind of echo what Bob’s saying, there’s a lot more that you can do with a tool like this than just generate text. And I think that kind of gets lost in the hype that you see with ChatGPT and everything else. I mentioned before that Whisper was another neural network that they launched quietly, back at the end of September, start of October, of last year, and it works by actually uploading speech. It’s multilingual, so you can use it almost like a universal translator in some ways. But the thing that’s outstanding about it is when you use it with the old GPT3 Playground… I say the old GPT playground like it’s not something that’s still useful right now… it uploads the entire transcript of a recording into the actual Playground. So you can input it into the AI. If you think about this from a teaching perspective, especially for students who have to deal with lecture and want a way to actually organize their notes in some way, shape, or form, they’re able to do that by simply issuing a command to summarize the notes, to organize them. You can synthesize it with your past notes, even come up with test questions for an essay you need to write or an exam you’re going to have. Now from a teaching perspective, as someone who tries to be as student-centric as possible, that’s great, that’s wonderful. I also realize those people who are still wedded to lecture are probably going to look at this like another moral panic: I don’t want my students to have access to this, because it’s not going to help them with their note-taking skills. I don’t want them to be falling asleep in my class, as if they were staying awake to begin with. So I’m going to ban this technology. 
So we’re going to see lots of little areas of this pop up throughout education. It’s not just going to be within writing, it’s going to be in all different forms, in different ways. And I’m right there with you: using this tool to really help you begin to think about and design your own thought process as you’re going through a writing project. Some people are using it for art, some people use it for coding; it’s really up to your imagination how you’d like to do it. The actual area that we’re looking at has a name; I didn’t even know it had a name until we met the developers we’re working with, the guys at Fermat. There’s an article from a German university about “beyond generation,” which is what they call this form of use: using your own text as the input to an AI and then getting brainstorming ideas, automatic summaries, using it to get counterarguments to your own revision notes. They use it also for images and all different other types of generations too. So it’s really out there, and I think ChatGPT is just kind of sucking all the air out of the room, and rightly so: it’s the new thing. It’s what everyone is talking about. But so much has gone on, it really has, in these past few months. The entire fall semester I was emailing Bob two or three times a week, and poor Bob was just like: “Just stop emailing me. Okay, we understand. I can’t look at this either. We don’t have time.” But it really was just crazy. It really is.

John: What are some other ways that this could be used in helping students become more productive in their writing or in their learning?

Marc: It really is going to be up to whatever the individual instructor, and also the student, comes up with. If your process is already set in stone, and my process is set in stone as a writer, as I think most of ours are once we’ve matured, it’s very difficult to integrate AI into that process. But if you’re young, and you’re just starting out, you’re still maturing, that is a very different story. So we’re going to start seeing ways our students use this within their own writing process, their own creative process, that we haven’t really imagined. And I know that’s one of the reasons why this is so anxiety producing: because we say that there is a process, and we don’t want to talk about the fact that this new technology can disrupt that a little bit. I’ll segue to Bob, too, because I think he’s talked a little bit about this as well.

Robert: Yeah, one of the things we’ve agreed on in the group that Marc’s co-leading is that we want to encourage our students to use the tools, full stop. Now, we want to help them interpret the usage of those tools. So really being above board and transparent about engaging the tools, using our systems of citation, struggling to cope as they are, but just saying at the beginning: use AI generators in my class. I need to know what writing is yours and what writing is not. But then designing assignments so you encourage limited engagements, which are quickly followed with reflection. So, oh gosh, who was it, Marc, the colleague, I think at NC State, who in a business class last spring had students, quote unquote, cheat with AI?

Marc: Paul Fyfe, Yes.

Robert: Yes, thank you. In so many words, he basically designed the assignment so that students would have AI write their paper, and almost uniformly they said: “Please, let me just write my paper, because it’d be a lot simpler, and I would like the writing a lot more.” So that type of engagement is really helpful, I think, because they were able to fully utilize the AI that they could access, try a bunch of different purposes with it, a bunch of different applications, and form an opinion about what its strengths and weaknesses were. And they pretty quickly saw its limitations. So, to specifically answer your question, John, I do think it can be helpful with a wide range of tasks. Again, in the invention stage, if I just have an idea, I can pop the idea in there and ask for more information, and I’ll get more information. Hopefully it will be reliable. But sometimes I’ll get a good deal of information and it’ll encourage me to keep writing. There are AI tools that are good at finding sources, and there are AI tools that will help you shift voice. We’ve seen a lot of people do some fun things with shifting voice. I can think of a lot of different types of writing assignments where I might try to insert voice, and people would be invited to think about the impact of voice on the message and on the purpose. And let’s not forget, one of the things that irks Marc and myself is that a lot of our friends in the computer science world think of writing as a problem to solve. And we don’t think of writing that way. But, as I said to Marc the other day when we were talking about this, if I’m trying to write an email to my boss in a second language, writing is a problem for me to solve. 
And so Grammarly has proven to us that there are a large number of people in our world who need different levels of literacy, in different applications, with different purposes, and who are willing to pay for some additional expertise. I had tried to design a course to teach in the fall, a composition class in which we were to engage AI tools specifically, and I had to pull the plug on my own proposal because the tools were evolving too quickly. Marc and his team solved the riddle, because they decided that they could identify the tools on an assignment basis. So it would be a unit within the course. And when they shrank that timeline, they had a better chance that the tools they identified at the beginning of the unit would still be relatively the same by the time they got to the end of the unit. So: get a menu or a suite of different AI tools that you want to explore, explore them with your students, give them spaces to reflect, always make sure that you’re validating whatever is being said if you’re going to use it, and then always cite it. Those are the ground rules that we’re thinking about when we’re engaging the different tools. And then, I don’t know, it can be fun.

Marc: You mean writing can be fun? I’ve never heard such things.

Rebecca: It would be incredible. One of the things that I hear you underscoring, related to citations, was making me think about the ways that I already have students using third-party materials in a design class, just as we use third-party materials when we’re writing a research paper, because we are using citations. So we have methods for documenting these things and making it clear to an audience what’s ours and what’s not. It’s not some brand new thing that we’re trying to do in terms of documenting that or communicating that to someone else. It’s just adapting it a bit, because it’s a slightly different third-party tool or third-party material that we’re using. I have my students write copyright documentation for the things that they’re doing: What’s the license for the images they’re using that don’t require attribution? What fonts are they using, and under what license? I go through the whole list. So for me, this seems like an obvious next step, a way that that same process of providing attribution or documentation would work well in this atmosphere.

Robert: I think the challenge, and Marc and I have talked about this before, is when you shift from a writing support tool to a writing generation tool. Most of us aren’t thinking about documenting the spell checker in Microsoft Word, because we don’t see that as content that is original in some way, right? But it definitely affects our writing. Nor do we cite Smart Compose, Google’s sentence completion tool. But how do you know when you’ve gone from Smart Compose providing just a correct way to finish your own thought, to Smart Compose giving you a new thought? That’s an interesting dilemma. If we can just take a wee nip of schadenfreude, it was interesting to see that a machine learning conference recently had to amend its own paper submission guidelines, Marc was pointing this out to me, to say: if you use AI tools, you can’t submit. And then they had to try to distinguish between writing generators and writing assistants. That’s just not an easy thing to do. But it’s going to involve trust between writers and audiences.

Marc: Yeah, I don’t envy any of our disciplinary conventions the task of trying to do this. We could invest some time in doing this for ChatGPT, but it’s not even clear that ChatGPT is going to be the end of the road here. We’re talking about this as just another version of AI, and how we would handle that. I’ve seen some people arguing on social media that a student, or anyone who is using an AI, should track down the ideas that the AI is spitting out. And I think that’s incredibly futile, because it’s trained on the internet; you don’t know how an idea came about. That’s one of the really big challenges with this type of technology: it breaks the chain of citations that was used to actually, for lack of a better word, generate the text. I was gonna say to show knowledge, but it can’t really show knowledge; it’s basically generating an idea, or mimicking an idea. So that really is going to be a huge challenge that we’re going to have to face and think about. It’s going to require a lot of dialogue between ourselves and our students, and also thinking about where we want them to use this technology. I think, for right now, if you want to use a language model with your students, or invite them to use it, tell them to reflect on that process, as Bob mentioned earlier. There are some tools out there, LEX is one of them, where you can actually track what was built in your document with the AI, which will sort of glow and be highlighted. So there are going to be some tools on the market that will do this. 
It is going to be a challenge, though, especially when people start going wild with it. Because when you’re working with AI, it takes just a few seconds to generate a thing, and keeping track of that is going to require not only a great deal of trust with our students; you really are going to have to sit down and tell them, “Look, you’re gonna have to slow down a little bit, and not let the text generations take over your thinking process and your actual writing process.”

Robert: Speaking a little bit of process: right now, I’m working on a project with a colleague in computer science. And we’re looking at that ancient technology, Google Smart Compose. And much to my surprise, I couldn’t find any literature where anyone had really spent time looking at the impact of the suggestions on open-ended writing. I did find some research that had been done on shorter writing. So, for instance, there was a project that asked writers to compose captions for images, but I didn’t see anything longer than that. So that’s what we did in the fall: we got 119 participants, and we asked them to write an open-ended response, an essay essentially, a short essay in response to a common prompt. Half of the writers had Google Smart Compose enabled, and half didn’t. And we’re going through the data now to see how the suggestions actually affect writers’ process and product. We’re looking at the product right now. One of our hypotheses is that the Google Smart Compose participants will have writing that is more similar, because essentially they will be given similar suggestions about how to complete their sentences. And we expect that in the non-Smart-Compose-enabled population we’ll find more lexical and syntactical diversity in the writing products. On the writing process side, we’re creating, as far as I know, new measures to determine whether writers accept suggestions, edit suggestions, or reject suggestions, and we all usually do some of all three of those, as well as the time spent on each. And we’re trying to see if there are correlations between the amount of time spent and, again, the length of text and the complexity of text, because if you’re editing something else, you’re probably not thinking about your own ideas and how to bring those forward. But overall, because we’re not able to really see what’s happening inside Smart Compose, we’re having to treat it as a black box. 
What we’re hoping to suggest is that our colleagues in software development start inviting writers into the process of articulating our writing profile. So let’s say, for instance, you might see an iteration in the future of Google smart compose that says, “Hey, I noticed that you’re rejecting absolutely everything we’re sending to you. Do you want to turn this off?” [LAUGHTER]

Rebecca: Yes. [LAUGHTER]

Robert: Or, “I noticed that you’re accepting things very quickly. Would you like for us to increase the amplitude and give you more, more quickly?” Understanding those types of interactions and preferences can help them build profiles, and the profiles can then hopefully make the tools more useful. Now, I know that they do, of course, customize suggestions over time, so I know that the tool does grow. As was asked earlier, how long has it been learning to write? Well, these tools learn to write with us. In fact, those are features that Grammarly competes with its competitors on: our tool will train up more quickly. At any rate, what does it mean to help students learn to work alongside AI? When it comes to writing, I believe part of what it’s going to mean is helping them to understand more quickly what the tool is giving them, what they want, and how they can harness the tool to their purposes. And that’s just not possible until the tools are somewhat stable and writers are invited into the process of understanding the affordances and feature sets of the tools.
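For a sense of what the “lexical diversity” comparison in a study like Robert’s might measure, here is one simple, standard metric: the type-token ratio, distinct words over total words. The sample texts below are invented for illustration, and the actual study’s measures may well be more sophisticated.

```python
def type_token_ratio(text: str) -> float:
    """Lexical diversity: number of distinct words divided by total words."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

# Hypothetical essays: repeated autocomplete-style phrasing scores lower.
assisted = "the tool is great and the tool is fast and the tool is easy"
unassisted = "autocomplete nudges my phrasing toward bland predictable sentences"

print(type_token_ratio(assisted))    # 7 distinct / 14 total = 0.5
print(type_token_ratio(unassisted))  # all 8 words distinct = 1.0
```

If Smart Compose nudges many writers toward the same completions, the hypothesis predicts systematically lower ratios, and less syntactic variety, in the assisted group.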

John: Where do you see this moral panic as going? Is this something that’s likely to fade in the near future? And we’ve seen similar things in the past. I’ve been around for a while. I remember reactions to calculators and whether they should be used to allow people to take square roots instead of going through that elaborate tedious process. I remember using card catalogs and using printed indexes for journals to try to find things. And the tools that we have available have allowed us to be a lot more productive. Is it likely that we’ll move to a position where people will accept these tools as being useful productivity tools soon? Or is this something different than those past cases?

Marc: Well, I think the panic has definitely set in right now. And I think we’re going to be in for some waves of hype and panic. We’ve already seen it from last year. Everyone kind of got a huge dose of it with ChatGPT, but we were in that panic-and-hype mode when we first came across this in May, wondering what this technology was, how it would actually impact our teaching, how it would impact our students. There’s a lot of talk right now about trying to do AI detection. Most of the software out there is trying to use some form of AI to detect AI. They’re trying to use an older version of GPT called GPT2 that was open source and openly released before OpenAI decided to lock everything down. Sometimes it will pick up AI-generated text; sometimes it will mislabel it. I obviously don’t want to see a faculty member bring a student up on academic dishonesty charges based on a tool that may or may not be correct. TurnItIn is working on a process where they’re going to try to capture more data from students than they already have. If they can capture big enough writing samples, they can then use that to compare your version of your work to an AI, or to someone who’s bought a paper from a paper mill or contract cheating, because, of course, a student’s writing never changes over the course of their academic career. And our writing never changes either. It’s completely silly. We’ve been conditioned, though, when we see new technologies come along, to expect a sort of equivalent technology to mitigate their impact on our lives. We have this new thing, it’s disruptive. Alright, well, give me the other thing that gets rid of it so I don’t have to deal with it. I don’t think we’re going to have that with this. I’m empathetic to people. I know that that’s a really hard thing for them to hear. 
Again, I made the joke too about the New York City school districts banning this but, from their perspective, those people are terrified. I don’t blame them. When we deal with higher education, for the most part, students have the skill sets that they’re going to be using for the rest of their lives. We’re just refining them and preparing them to go into professional fields. If this is a situation where you’re talking K through 12, where a student doesn’t have all the reading or grammatical knowledge they need to be successful and they start using AI, that could be a problem. So I think talking to our students is the best way to establish healthy boundaries, and getting them to understand how they want to use this tool for themselves. As Bob mentioned too, with what Paul Fyfe was doing in his actual research, students are setting their own boundaries with this; they’re figuring out that this is not working the way the marketing hype is telling them it is, too. So, we just have to be conscious of that and keep these conversations going.

Robert: Writing with Wikipedia was my panic moment, or my cultural panic moment. And my response then was much the same as it is now: Cool. Let’s check it out. And Yochai Benkler has a quote, and I don’t have it exactly right in front of me, but he says something like: all other things being equal, things that are easier to do are going to be more likely to get done. And the second part, he says, is that all other things are never equal. So that was just like the point of Wikipedia, right? Like people really worried about commons-based peer production and collaborative knowledge building and inaccuracies and biases, which are there still, creeping their way in and displacing Encyclopedia Britannica and peer-reviewed resources. And they were right to be worried, because Benkler is right. It’s a lot easier to get your information from Wikipedia, and if it’s easier, that’s the way it’s going to come. You can’t do a Google search without pulling up a tile that’s been accessed through Wikipedia. But the good news is that now Wikipedia is known as the good grown-up of the internet, because the funny thing is that the community seemed so fractious and sharp-elbowed at first about who was right in producing a Wikipedia page about Battlestar Galactica. Well, so that grew over time, and more and more folks in higher education and more and more experts got involved, and the system improved, and it’s uneven, but it is still the world’s largest free resource of knowledge. And because it’s free, because it’s open and very accessible, it enters into our universe of what we know. I think the same thing holds here, right? It’s as easy to use as it is now, and the developers are working on ways to make it easier still. So we’re not going to stop this; we just have to think about ways that we can better understand it and indicate, where we need to, that we’re using it and how we’re using it, for what ends and what purposes.
And so your question, John, I think was around, or at least you used, productivity. So I don’t agree with his essay, and I certainly don’t agree with a lot that he’s done, but Sam Altman, one of the OpenAI co-founders, does have this essay, and his basic argument is that in the long run, what AI is doing is reducing the cost of labor. So that will affect every aspect of life; it’s just a matter of time before AI is applied to every aspect of life. And so then we’re dropping costs for everyone. And his argument is we are therefore improving the lives and living standards of everyone. I’m not there. But I think it’s a really interesting argument to make if you take it that long. Now, as you mentioned earlier about earlier technologies… the calculator moment, for folks in mathematics. My personal preference would be to have someone else’s ox get gored before mine is, but we’re up, so we have to deal with it. And our friends in art, they’re dealing with it as well. It’s just a matter of time before our friends in music and, obviously, our friends in motion capture are dealing with it, and I think you’re handling it in design as well. So it’s just a matter of time before we all figure it out. So we have to sort of learn from each other in terms of what our responses were. And I think there’ll be sort of these general trends: we might as well explore these tools, because this is the world where our students will be graduating. And so helping them understand the implications, the ethical usage, the citation purposes… it’d be great if we had partners on the other side that would telegraph to us a little bit more about what the scope and the purpose and the origins of these tools are. But we don’t have that just yet.

Marc: I agree completely with what Bob said, too.

Rebecca: One of the things that’s been interesting in the arts is the conversation around copyright and what’s being input into the data sets initially, and that that’s often copyright-protected material. And then therefore, what’s getting spit out is derivative of that. And so there are some interesting conversations around whether or not that’s a fair use, whether or not that’s a copyright violation, whether or not that’s plagiarism. So I’m curious to hear your thoughts on whether or not similar concerns are being raised over ChatGPT or other systems that you’ve been interacting with.

Marc: Writing’s a little bit different. I think there are some pretty intense anti-AI people out there who basically say that this is just a plagiarism generator. I see what they’re saying, but any sort of terminology around plagiarism doesn’t really make sense, because it doesn’t really focus on the fact that it’s stealing from one idea. It’s just using vast, massive chunks of data from the internet. And some of that data doesn’t even have a clear source. So it’s not even really clear how that goes back to it. But that is definitely part of the debate. Thank God I’m not a graphic artist, ‘cause I don’t know, I’ve talked to a few friends of mine who are in graphic arts, and they’re not dealing with this as well as we are, to say the least. And you can kind of follow along with some of the discourse on social media too. It’s been getting intense. But I do think that we will see some movement within all these fields about how they’re going to treat generative text or generative images, generative code, and all of that. In fact, OpenAI is being sued now in the coding business too, because their Copilot product was supposedly capable of reproducing an entire string of code, not just generating it, but reproducing it from what it was trained on. So I think it is an evolving field, and we’re gonna see where our feet land, but for right now, the technology is definitely moving underneath us as we’re talking about all this in terms of both plagiarism and copyright and all the things. And I’m with Bob, I want to be able to cite these tools and be able to understand them. I also am kind of aware of the fact that if we start bringing really hardcore citation into this, we don’t want to treat the technology as a person, right?
You don’t want to treat the ideas as necessarily coming from the machine; we want to treat this as “I used this tool to help me with this process.” And that becomes complicated, too, because then you have to understand the nuance of how that was used and what sort of context it was used in. So yeah, it’s going to be the wild west for a while.

Robert: I wanted to turn it back on our hosts for a second, if I can, and ask Rebecca and John a question. So I’ve now remembered the title of Sam Altman’s essay: it’s “Moore’s Law for Everything.” That really, I think, encapsulates his point. What do y’all think as people in higher education? Do you think this is unleashing a technology that’s going to make our graduates more productive in meaningful ways? Or is it unleashing a technology that questions what productivity means?

Rebecca: I think it depends on who is using it.

John: …and how it’s being used.

Rebecca: Yeah, the intent behind it… I think it can be used in both ways, it can be used to be a really great tool to support work and things that we’re exploring and doing and also presents challenges. And people are definitely trained to use it to shortcut things in ways that maybe it doesn’t make sense to shortcut or undermines their learning or undermines contributions to our knowledge.

John: And I’d agree pretty much with all of that, that it has a potential for making people more productive in their writing by helping get past writer’s block and other issues. And it gives people a variety of ways of perhaps phrasing something that they can then mix together in a way that better reflects what they’re trying to say. And I think it’s a logical extension of many of those other tools we have, but it is also going to be very disruptive for those people who have very formulaic types of assignments that are very open ended, those are not going to be very meaningful in a world in which we have such tools. But on the other hand, we’re living in a world in which we have such tools, and those tools are not going to go away, and they’re not going to become less powerful over time. And I think we’ll have to see. Whenever there’s a new technology, we have some people who really praise it, because it’s opening up these wonderful possibilities, such as television was going to make education universal in all sorts of wonderful ways and the internet was going to do the same thing. Both have provided some really big benefits. But there’s often costs that are unanticipated, and often benefits that are unanticipated, and we have to try to use them most effectively.

Robert: So one of the things I’ve appreciated about this conversation is that you guys have made me think even more, so I want to follow up on what you’re saying, and maybe articulate my anxiety a little better. So Emad Mostaque, I think is his name, is the developer or the CEO of Stability AI, and he was on Hard Fork. And I listened to the interview and he basically said, “Creativity is too hard and we’re going to make it easy. We’re going to make people poop rainbows.” He did use the phrase poop rainbows [LAUGHTER] but I don’t remember if that was exactly the setup. And so I’m not an art teacher, but I’m screaming at the podcast: No, it’s not just about who can draw the most accurate version of a banana in a bowl, it’s the process of learning to engage the world around you through visual representation. And I’m not an art teacher. So that’s my fear for writing. I guess my question for everybody here is, do you think these tools will serve as a barrier, because they’ll provide a fake substitute for the real thing that we then have to help people get past? Or will that engagement with the fake thing get their wheels turning and help them find it as a stepping stone and an introduction to the deeper engagement with literacy or visual representation?

Rebecca: I think we already have examples that exist that scope what someone might do so that it appears, looks, feels really similar to something someone already created. So templates do that: any sort of common code set that people might use to build a website, for example, they all then have similar layouts and designs. These things already exist. That may work in a particular area. But then there are also examples in that same space where people are doing really innovative things. So there is still creativity. In fact, maybe it motivates people to be more creative, because they’re sick of thinking the same thing over and over again. [LAUGHTER]

John: And going back to issues of copyright, that’s a recent historical phenomenon. There was a time when people recognized that all the work that was being done built on earlier work, that artists explicitly copied other artists to become better and to develop their own creativity. And I think this is just a more rapid way of doing much of the same thing, that it’s building on past work. And while we cite people in our studies, those people cited other people who cited other people who learned from lots of people who were never cited, and this is already taking place, it’s just going to be a little bit harder to track the origin of some of the materials.

Marc: Yeah, I completely agree. I also think that one thing we get caught up in, in our own sort of disciplinary world of higher education, is that this tool may not be really that disruptive to us, or may not be as beneficial to us as it would be somewhere else, in some other sort of context. You think about the global South, which is lacking resources: a tool like this, that is multilingual, could actually help under-resourced districts or under-resourced entire countries, in some cases. That could have an immense impact on equity, in ways that we haven’t seen. That said, there are also going to be these bad actors who are going to be using the technology to really do lots of weird, crazy things. And you can kind of follow along with this live on Twitter, which is what I’ve been doing. And every day, there’s another thing that they’re doing. In fact, one guy today offered anyone who’s going to argue a case before the Supreme Court a million dollars if they put in their Apple AirPods and let the AI argue the case for them. And my response is, if you ever want the federal government to ban a technology at lightning speed, that is the methodology to go through and do so. But there are going to be stunts; there are already stunts. And Annette Vee was writing about GPT-4chan, where a developer used an old version of GPT-2 on 4chan, the horrible toxic message board, and deployed that bot for about three days, where it posted 30,000 times. In 2016, we had the election issues with the Russians coming through; now you’re going to have people with chatbots do this. So it can help with education, definitely. I think that we’re kind of small potatoes compared to the way the rest of the world is going to probably be looking at this technology. I hope it’s not in that way, necessarily; I hope that they can kind of get some safety guardrails put in place. But it’s definitely gonna be a wild ride, for sure.

John: Being an economist, one of the things I have to mention in response to that is that there are a lot of studies that found that a major determinant of the level of economic growth and development in many countries is the degree of ethno-linguistic fractionalization: the more languages there are and the more separate cultures you have within a society, the harder it is to expand. So tools like this can help break those things down and can unleash a lot of potential growth and improvement in countries where there are some significant barriers to that.

Marc: Absolutely. I just really want to re-emphasize the point that I brought up at the beginning, especially now in the wake of what Bob said. I was not introduced to Wikipedia in a way that made it interesting or anything else. I was introduced to it as a college student with a professor saying to me, “This is a bad thing. This is not going to be helpful to you. Do not use this.” Keep that in mind: the power that you have as an educator when you’re talking about this with your students, that you are informing their decisions about the world, about what this tool actually is, when you’re introducing and talking about this with them, when you’re actually putting the policy in place yourself of saying, “This is banned.” And I just kind of want to make sure that everyone is really thinking about that now with this, because we do actually have a lot of power in this. I know we feel completely powerless in some ways. It’s a little odd that the discussions have been about this. But we actually have a lot of power in how we shape the discussion of this, especially with our students.

Robert: Yeah, that’s a great point, and I’m glad you raised it. My question is, I wonder, John, as an economist, and also what you think, Rebecca, as well: do you guys buy the Moore’s Law for Everything argument? So 20, 30 years from now, does generative AI increase the standard of living for people globally?

John: Well, I think it goes back to your point that if we make things easier to do, it frees up time to allow us to do other things and to be more creative. So I think there is something to that.

Rebecca: Yeah. And sometimes creativity is the long game. It’s something that you want to do over a period of time and you have to have the time to put into it. I think it’s an interesting argument.

John: I have been waiting for those flying cars for a long time, but at least now we’re getting closer to self-driving cars.

Robert: I was about to say they gave you a driverless car instead. [LAUGHTER]

John: But, you know, a driverless car frees up time where you could do other things during that time, which could be having conversations or could be reading, it could be many things that might be more enjoyable than driving, especially if there’s a lot of traffic congestion.

Rebecca: …or you could take a train, in which case, you’re also not driving, John.

John: …and you’re probably not in the US, [LAUGHTER] or at least not in most parts of the US, unfortunately.

Rebecca: Well, we always wrap up by asking what’s next?

Marc: What’s next? Oh, goodness. Well, again, like I said, there are going to be waves of hype and panic; we’re in the “my students are going to cheat” phase. The next wave is when educators actually realize they can use this to grade essays, grade writing, and grade tests. That’s going to be the next “Oh, wait” moment that we’re going to have to see, and that will be both hype and panic too. And to me, it’s going to be the next conversation we need to have, because we’re gonna have to establish these boundaries, kind of in real time, about what we want to actually do with this. They are talking about GPT-4; this is the next version of this. It’s going to be supposedly bigger than ChatGPT and more capable. We know all the hype that you can kind of repeat about this sort of thing too. But 2023 is probably going to be a pretty wild year. I don’t know what’s gonna go beyond that. But I just know that we’re going to be talking about this for the next, at least, 12 months for sure.

Robert: I agree with Marc. I think in our discipline at least, the next panic, or, I don’t know, jubilee, will be around automated writing evaluators, which exist and are commercially available. But the big problem is the research area known as explainable AI, which is to me tremendously fascinating: that you can build neural nets that will find answers to how to play Go, that after I don’t know how many hundreds or even thousands of years that humans have played Go, find winning strategies that no one has ever found before, but then not be able to tell you how they were found. That’s the central paradox. I would like to say I hope explainable AI is next. But I think, before we get explainable AI, we’re gonna have a lot more disruptions, a lot more ripples, when unexplainable AI is deployed without a lot of context.

John: One of the things I’ve seen popping up in Twitter is with those AI detectors that apparently ChatGPT, if you ask it to rewrite a document so it cannot be detected by the detectors, will rewrite it in a way where it comes back with a really low score. So it could very well be an issue where we’re gonna see some escalation. But that may not be the most productive channel for this type of research or progress.

Rebecca: Sounds like many more conversations of ethics to come. Thank you so much for your time and joining us.

Marc: Well, thank you both.

John: Well, thank you. Everyone has been talking about this and I’m really glad we were able to meet with you and talk about this a bit.

Robert: Yes. Thank you for the invitation. It’s been fun to talk. If there’s any way that we can add to the conversation as you go forward, we’d be happy to be in touch again. So thank you.

John: I’m sure we’ll be in touch.

Marc: The next panic, we’re always available. [LAUGHTER]

John: The day’s not over yet. [LAUGHTER]


John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.